ARTICLE
7 May 2025

Privacy And Security In AI Note-Taking And Recording Tools, Part 2: Risk Mitigation And ADMT Regulations (Podcast)

Ogletree, Deakins, Nash, Smoak & Stewart

Contributor

Ogletree Deakins is a labor and employment law firm representing management in all types of employment-related legal matters. Ogletree Deakins has more than 850 attorneys located in 53 offices across the United States and in Europe, Canada, and Mexico. The firm represents a range of clients, from small businesses to Fortune 50 companies.


In the second part of this two-part series, Ben Perry (shareholder, Nashville) and Lauren Watson (associate, Raleigh) discuss the use of artificial intelligence (AI)-powered note-taking and recording tools in the workplace. Ben (who is co-chair of the firm's Cybersecurity and Privacy Practice Group) and Lauren discuss the various risks and considerations companies may need to address when using AI tools, particularly focusing on data security, employee training, and compliance with evolving legal regulations. They emphasize the importance of conducting due diligence, implementing strong security measures, and providing proper employee training to mitigate potential risks associated with these AI tools.

Transcript

Announcer: Welcome to the Ogletree Deakins Podcast, where we provide listeners with brief discussions about important workplace legal issues. Our podcasts are for informational purposes only and should not be construed as legal advice. You can subscribe through your favorite podcast service. Please consider rating this podcast so we can get your feedback and improve our programs. Please enjoy the podcast.

Lauren Watson: This is Lauren Watson. I'm an associate in the Raleigh office of Ogletree Deakins, and I'm joined here by Ben Perry, who sits in our Nashville office and is the Co-Chair of the Cybersecurity and Privacy Practice Group. Welcome to part two of our two-part series on AI note-taking and recording tools.
Well, we've talked about all of the risks. What are some of the things that companies can do to deal with this? We've talked about due diligence a little bit, but I really want to harp on it. Make sure you understand how your information is going to be collected, used, and stored by the particular vendor. Ask the hard questions, make sure you get answers, and make sure that you and all of the relevant stakeholders at your company are comfortable with the answers that you get.
Another thing that's very important is making sure that the particular tool you're using has really strong security measures in place, and I know that when we've had discussions with clients, we've talked about things like having end-to-end encryption in place and then local storage options.
Is there anything else that you're regularly advising clients they should look for in terms of security measures?

Ben Perry: The security measures that would be appropriate for any particular situation are always going to vary. I mean, I think definitely keeping backups secure, especially if there is some sort of local storage option. But even then, there are the things you mentioned earlier, like access controls. Obviously, we haven't mentioned multifactor authentication for account access, but to me that's kind of basic blocking and tackling these days: having MFA in place at least for initial access, even if you subsequently use single sign-on or something like that.
But yeah, I mean, those are obviously very important considerations. And I'll say encryption in transit as well: just making sure employees know that when they're sending any sort of files like that, they should be sending them in an encrypted format if they're sending them externally, outside of the company's network.

Lauren Watson: Yeah, absolutely. And to make sure that employees know that, you really do want to have good, strong policies in place to protect both the company and the company's data. So, in those policies, you want to have certain restrictions in place, things like only letting your employees use the AI tools that you've actually evaluated and approved. You might want to limit the types of conversations that employees are allowed to record with AI note-taking or recording tools; for things like privileged conversations, you might want to think twice. You may also consider putting in place strict guidelines about where the output can be stored.
We've been talking about access controls. We've been talking about particular locations on your system. Put that in a policy. Make sure that you have something you can point to when you tell your employees, this is where that information goes. And then Ben spoke about this earlier, but set expectations about quality control. Artificial intelligence tools can and do hallucinate, and you have to make sure that the output of any AI tool that you use is going to be subject to quality control measures.
For example, you're in a meeting and you allow the host of that meeting to record and transcribe the meeting so that they've got a perfect record of what was said. They shouldn't just transcribe it and leave it. That same meeting host should go back through and make sure that the output actually accurately reflects the conversation that was had. And this is particularly important if you're going to be using this to do things like make an employment decision about someone, or if you're going to do things like decide who gets to work on a particular project for a particular client. For a whole host of reasons, you don't want to be relying on hallucinated AI outputs to make those types of decisions. So, it's really, really important to QC everything that you get from an AI tool.

Ben Perry: I was just going to say, the other thing I kind of wanted to mention as you were talking about that just now is the retention piece. We kind of talked about how long you retain the recording itself. To an extent there may also be minimum retention requirements that would apply depending on the nature of the data or the meeting that's being recorded. So, I think that's something that you also need to keep in mind is, is this something where there's going to be a two or three-year minimum retention requirement? And if so, what is the protocol for securing that information? If it's not something that's necessarily going to be needed, but it might be something that you have to retain because of a legal obligation, what are you going to do with that? Where's it going to be stored? Are there any additional protections? Are you going to encrypt it at the file level? All those sorts of things.

Lauren Watson: Yeah, you're absolutely right. And you really do have to think through these things before you even start using the tool, because it can be very hard to un-ring the bell. And once you have these policies in place, train your employees before you let them use these AI tools. You've got to make sure that they understand exactly what they can and cannot do with AI. It'll help you so much in the long run if you've got something that you can point to in order to show that yes, the employees have policies, they have something they can look to, and we have trained them on this. That way, when mistakes happen, you're able to say, look, we did all of these things to try to prevent this from happening.
It's not a one-and-done training. These tools are constantly evolving, and presumably you're going to continue, as a business, evaluating AI tools, adding new ones, and getting rid of your current AI tools. It's just like any other business tool; things change. So, make sure that your employees are regularly refreshed on what they can and cannot do, and on the procedures and ethical guidelines in place with respect to those tools that you do allow them to use.
Last thing is your external privacy policies. And when I say external, I just mean the policies intended for people outside the company. If you're using these tools with respect to consumers, make sure that's in your consumer-facing privacy policy. If you're going to be using this on employees or job applicants, make sure that you have that in your privacy policy, so that when they do go and look at it, they can understand pretty quickly that these types of tools are going to be used to evaluate them.
And I think this is a good sort of segue into our last section, the last thing we want to talk about here today, which is automated decision-making technologies. A number of state and local jurisdictions either have or are in the process of implementing restrictions on the use of automated decision-making technologies.
So, for example, if we look at New York City, if you have a physical location in New York City, you need to know about Local Law 144, which says that if you're using an AI tool to either substantially assist or replace discretionary decision-making for your employment decisions, you have to give people a very particularized type of notice that you're going to be using that tool. You also will need to conduct a bias audit, so you're going to need to take steps to figure out whether the particular algorithm that this AI tool is using has biases baked in that are resulting in discriminatory outputs. So, it can be kind of burdensome in that particular jurisdiction. And then-

Ben Perry: You better hope the results say that it's not biased because you have to publish it too, right?

Lauren Watson: Yeah. New York makes you publish it. Although I will say I've looked, and I feel like there aren't that many companies that are operating out of NYC that are actually publishing these. So, sorry-

Ben Perry: I was just going to say, I think a lot of companies, just because of the way the law is drafted, are kind of doing mental gymnastics to take the position that the law doesn't apply, and there is, I think, some leeway in the way the scope of the law is drafted.

Lauren Watson: Yeah. No, I agree. And then at the state level, there are a number of states that either have passed laws that address these automated decision-making technologies, or they are in the process of passing either laws or regulations that address these. Illinois has a new law coming into effect next January that is going to prohibit the use of automated tools that discriminate against protected classes. Like the New York law, it's going to require things like notice to applicants that AI is being used in connection with their employment decisions. We're not totally sure what the notice piece of that law is going to look like. Enforcement of the law takes place under the Illinois Human Rights Act, so the Department of Human Rights is supposed to be coming out with rules that'll help us understand exactly what needs to go into that notice. It hasn't happened yet, but we are watching the space pretty closely.
In the meantime, though, there are things that you can be doing to mitigate the risk that the AI tool you're using may discriminate against your employees or your applicants: things like conducting the bias audits that we were just talking about, putting in place a process for manual human review of the AI tool's outputs, and maybe even an appeal process so that an employee can dispute the output of the AI tool. Those steps can all help you mitigate the risk that you're going to violate this law by having some sort of discriminatory effect result from your use of an AI tool.
Ben, I know that you were just in California talking about all of these issues, especially with respect to California law. Do you mind filling us in on what's going on at the CPPA?

Ben Perry: Yeah, absolutely. So, the CCPA's automated decision-making regulations are in the formal rulemaking stage with the CPPA, the California Privacy Protection Agency. And I know that those regulations have been really controversial, so they just closed the comment period, and now we'll see if they propose any additional changes to those based on the feedback they received. And depending on how substantial those changes are, there'll either be a shorter or a longer subsequent comment period before those are finalized.
The one thing is that the compliance window may be very short on these, and the ADMT rules are coupled with some other provisions, like cybersecurity requirements and risk assessments, which are kind of tied to both cybersecurity and automated decision-making. But in terms of the ADMT piece itself, probably two of the biggest takeaways I've seen are, one, in terms of whether an opt-out right needs to be provided, the appeal process is one of the exceptions that you can rely upon, with some other minor corollaries to that. Having an appeal process in place is generally one of the exceptions to providing an opt-out right in the employment context. So, having that appeal process will be the alternative to allowing everybody the right to opt out, which they probably would.
And then secondarily, a lot of these types of technologies, and really any employee monitoring in general is likely going to trigger a risk assessment under the CCPA, which means that there are all these granular issues you have to address in a risk assessment, like what you're collecting, the harms to individuals, how you've attempted to mitigate those risks. That in and of itself is not novel, that's a concept that exists in a lot of other state privacy laws. But what makes California's unique is that employers are going to have to submit their risk assessments to the California Privacy Protection Agency on a regular basis. And because the requirement is so broad, it's going to be obvious which companies are not doing that because most, if not all, companies will be subject to those requirements in some respect.
And so it'll give the agency a direct window into your monitoring methods, all sorts of other things that you might not want to just lay bare to the Privacy Agency and potentially the world, because then you get into discoverability issues. And you have to think really carefully, I think, about what goes in those assessments because while there is a provision in the CCPA saying that it's not intended to abridge any evidentiary privileges or anything like that, if this is a document you're providing to the agency, then there's a question as to whether it would be privileged in any sort of subsequent litigation or anything like that. So a lot of things to keep in mind as that process moves forward.

Lauren Watson: But there are statutory damages for violations of the CCPA, including those automated decision-making provisions once they're actually in place?

Ben Perry: Yeah. And you've also got two separate enforcement arms, one of which is kind of a self-funded mechanism that's incentivized to bring enforcement actions and levy fines.

Lauren Watson: So, it seems like this is a space where we could see a lot of enforcement activity, so it will be really important for companies to keep an eye on the development of the regulations and then, once they've actually been finalized, take quick and effective steps to comply. Right?

Ben Perry: Absolutely.

Lauren Watson: Okay. Well, Ben, I really appreciate you joining me on the podcast here today. I think we've gone for long enough, so maybe we can talk a little bit more about automated decision-making technologies and some of the legal and ethical considerations in another podcast.

Announcer: Thank you for joining us on the Ogletree Deakins Podcast. You can subscribe to our podcasts on Apple Podcasts or through your favorite podcast service. Please consider rating and reviewing so that we may continue to provide the content that covers your needs. And remember, the information in this podcast is for informational purposes only and is not to be construed as legal advice.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
