ARTICLE
20 May 2025

Workplace Strategies Watercooler 2025: The AI-Powered Workplace Of Today And Tomorrow (Podcast)

Ogletree, Deakins, Nash, Smoak & Stewart


Ogletree Deakins is a labor and employment law firm representing management in all types of employment-related legal matters. Ogletree Deakins has more than 850 attorneys located in 53 offices across the United States and in Europe, Canada, and Mexico. The firm represents a range of clients, from small businesses to Fortune 50 companies.


In this installment of our Workplace Strategies Watercooler 2025 podcast series, Jenn Betts (shareholder, Pittsburgh), Simon McMenemy (partner, London), and Danielle Ochs (shareholder, San Francisco) discuss the evolving landscape of artificial intelligence (AI) in the workplace and provide an update on the global regulatory frameworks governing AI use. Simon, who is co-chair of Ogletree's Cybersecurity and Privacy Practice Group, breaks down the four levels of risk and their associated regulations specified in the EU AI Act, which will take effect in August 2026, and the need for employers to prepare now for the Act's stringent regulations and steep penalties for noncompliance. Jenn and Danielle, who are co-chairs of the Technology Practice Group, discuss the Trump administration's focus on innovation with limited regulation, as well as the likelihood of state-level regulation.

Transcript

Announcer: Welcome to the Ogletree Deakins Podcast, where we provide listeners with brief discussions about important workplace legal issues. Our podcasts are for informational purposes only and should not be construed as legal advice. You can subscribe through your favorite podcast service. Please consider rating this podcast so we can get your feedback and improve our programs. Please enjoy the podcast.

Jenn Betts: Hi, everyone. I'm thrilled to be with you today. My name is Jenn Betts, and I am joined by Danielle Ochs and Simon McMenemy. We are here at Workplace Strategies 2025. Tomorrow, this group and a couple others are going to be presenting about the AI-powered workplace of today and tomorrow. So, we're going to spend a couple of minutes now talking about what we're going to be discussing in that program.
I'm going to start with you, Danielle. We hear about and interact with AI every day, all of us, no matter what we do for a living. But can you give us just a basic high-level framing definition of what we mean by AI when we throw around that term?

Danielle Ochs: Sure, Jenn. AI is artificial intelligence. And what we usually mean by it when we talk about it is AI that replaces some of the functions, like reading, writing, and translating, that we typically use human intelligence to perform. So, that's a very high-level definition of AI. If we drill down a little bit and talk about it in the context of labor and employment, we're really talking about the use of AI tools in the workforce.

Jenn Betts: Got it. In the Biden administration in the U.S., we were seeing a lot of focus from the federal government and different regulatory bodies on coming up with some kind of framework to guide employers' use of artificial intelligence. What have we seen in the first couple of months of the new Trump administration? And what do you think is going to happen over the next couple of years?

Danielle Ochs: Well, it's interesting. To set the stage, I just have to take a step back and give you a brief history of what has been happening.
Several years back, the EEOC and a couple of other agencies collaborated on a framework to introduce the concept of AI regulation in the labor and employment space. They talked about some basic principles and some regulatory objectives.
Fast-forward to about last year. Those agencies, as well as several other agencies, got into the mix and issued guidance on the role of AI in the workplace. So, we saw guidance out of the EEOC. We saw guidance out of the OFCCP, out of the DOL, out of the FTC, out of the NLRB in different formats. So, the thought process was that there would be a lot of regulatory activity in this space.
We also saw the White House, under Biden, issue what they called the Blueprint for an AI Bill of Rights, which set forth various principles for how to safely develop, roll out, and use AI. The Biden administration also issued an executive order regarding AI that reflects some of those same principles around the safe use of AI.
In the first few weeks or months of his administration, Trump rolled out a new EO on AI. First, he rescinded Biden's EO, and his own EO focuses more on innovation, stressing the need to limit regulation in order to allow innovation to flourish. However, in the last 24 hours, and we're still evaluating this, so we will have more to say about it, a new executive order was just issued, and that executive order relates to an AI educational focus on American youth. I believe it's called Advancing Artificial Intelligence Education for American Youth.
And one of the key provisions that jumped out at me was the importance of preparing America's workforce. That's a provision of the executive order. So, we'll be evaluating that a little bit more closely to see what exactly the new administration has in mind around AI in the workforce.

Jenn Betts: Yeah, it's interesting to watch how this has been playing out. I think the Biden administration was also focused on encouraging innovation and the development of artificial intelligence technologies, but it was also concerned about the impacts on Americans and the workplace. And the Trump administration, from a federal perspective, seems to be focused on deregulating things like artificial intelligence and its use in the workplace. Do you think, in the U.S., the states are going to fill that gap?

Danielle Ochs: Yeah, I think it's almost certain. We've seen over and over again, in all sorts of industries and areas, that when regulation does not happen at the federal level, states typically step in and do the regulating themselves. So, the concern there is always the multi-state environment. It's very difficult, as employers know, to comply with multi-state regulations when there's not a lot of continuity or consistency in how things are defined or in the scope of regulation.
I do think, and we can talk about it more later, that there are some guiding principles, some of which were set forth in the Biden administration EO, that states seem to have adopted. So, there are some guiding principles, even though the details, and the devil's always in the details, differ from state to state.

Jenn Betts: Let's talk about multi-state compliance. Let's also talk about international compliance and move from employment to privacy, which is another big area of concern and focus when we're talking about artificial intelligence. Simon, we could do a series of podcasts about the EU AI Act and privacy considerations related to artificial intelligence. But from a high-level perspective, can you tell the audience: what is the EU AI Act, and what should they know about it?

Simon McMenemy: Yeah, sure. So, the EU AI Act was passed in August of 2024, and most of its provisions come into effect in August of 2026. So, we're in a period now where we're kind of waiting for guidance from the executive branch of the European Union, which is the European Commission.
But yeah, it's really the first act, or legal framework, that people will be required to comply with. Danielle was just talking there about guidance given by the Biden administration. There's a lot of guidance around, and has been for some time; I'm not sure how well known all of it is. For example, the United Nations has guidelines on the use of AI. The Organisation for Economic Co-operation and Development has guidelines as well, which the US has signed up to, as have we in the UK, the EU, and other countries.
But this is the first time this is actually going to be put in place and enforced, and enforced quite strictly, I think. Certainly, for the big players, it could be quite punitive, because the penalties range from €7.5 million, or 1% of worldwide annual turnover, for minor administrative infractions, right up to €35 million, or 7% of worldwide annual turnover, for the most serious infractions, whichever is higher in each case.
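To put those figures in concrete terms, here is a minimal sketch, in Python, of how a "whichever is higher" cap works across the penalty tiers described above. The turnover figure and the helper function are hypothetical illustrations, not anything taken from the Act itself.

```python
# Illustrative only: the EU AI Act caps fines at the HIGHER of a fixed amount
# or a percentage of worldwide annual turnover (for undertakings). The tier
# values below reflect the figures discussed above; the turnover is hypothetical.

def max_penalty_eur(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Return the maximum possible fine: the higher of the fixed cap
    or the stated percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_eur * pct)

turnover = 2_000_000_000  # a hypothetical company with €2B worldwide turnover

# Minor/administrative tier: €7.5M or 1% of turnover
print(max_penalty_eur(turnover, 7_500_000, 0.01))   # 20000000.0

# Most serious tier (prohibited AI practices): €35M or 7% of turnover
print(max_penalty_eur(turnover, 35_000_000, 0.07))  # 140000000.0
```

Even at this size, the turnover-based figure, not the fixed cap, drives the exposure, which is why larger organizations tend to treat the Act's penalty regime as comparable to GDPR's.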
You mentioned privacy. Those companies that have had to comply with GDPR, even though they're not necessarily headquartered or based in Europe, because they sell into Europe or have other dealings with Europe that make it a necessity, are going to find exactly the same thing with the EU AI Act. So, I think it is something that's going to be increasingly in the in-trays of C-suites as we approach August 2026. Much like with GDPR, a lot of people left it to the last minute, but I think people are already talking about AI guidance and what they need to do as an organization to make it safe. And if they're doing that, they're probably already partly compliant with the EU AI Act.
And it's not hugely detailed in every respect. Although, interestingly, and I'm sure we'll come on to talk about it, employment is seen as a high-risk area where there are some specific requirements.

Jenn Betts: Well, can you talk about that?

Simon McMenemy: Yeah, sure.

Jenn Betts: Because I think what is always interesting for our clients is learning from how these laws are developing so that they can build out their own internal guardrails, even if they're not legally required in the U.S. If this is something that is being mandated in the EU, and companies can figure out how to work through it, maybe they decide to adopt the same approach in the U.S. So, what kinds of compliance steps are we talking about?

Simon McMenemy: Sure. So, just as a very quick overview, the Act sets out four levels of risk. First, there's minimal risk, which covers things like spam filters and the use of AI in computer games, things we've had for a very long time that don't really need any regulation. That's not really covered by the Act at all; those things are left alone.
Then there's limited risk, which covers things like chatbots, or some chatbots. Although I think, increasingly, as that technology gets more sophisticated, it might move up into a higher risk category.
Then you've got high risk, and we've already mentioned that employment specifically comes into that band, along with things like law enforcement, facial recognition, and transportation, where things like driverless cars and trains are, quite rightly, seen as high risk.
And then there's unacceptable risk, or what is now actually prohibited use of AI. And that's already in force.

Jenn Betts: I was going to say, that's in effect already, right?

Simon McMenemy: Absolutely.

Jenn Betts: Yeah.

Simon McMenemy: So, that came into force on February 2nd this year, 2025, for new AI developers and AI already in use. I don't know the statistics, but I'd be really interested to know how many people have withdrawn AI that is now, strictly speaking, prohibited within the EU.
But to go back to your question of what the compliance requirements are: if it can be summed up in a word, it's probably transparency. Be transparent as an employer about how you're using AI in the workplace and how it affects your employees and workers.
So, the first thing you do is you inform them that, "This is powered by AI," or, "This uses AI in this respect," and be sort of specific about that. Also, I think make sure that the data that you're using is clean. And that's already a requirement under GDPR, that you don't have old, out-of-date personal data about people. And that's hugely important when you're then giving that personal data to AI to do something with.
And that there's human oversight. I think we've all seen over the years the use of AI in recruitment, for example. And having the human eye or the human intervention, so you're not relying on autonomous decision-making, which, going back to GDPR again, is already something you should not be doing. But I think that just reinforces that provision that's in GDPR already.
And then monitor the use. Obviously, AI is something that, to some extent, and I'm probably using the wrong word, kind of has a mind of its own. The guidance already issued by the European Commission talks about high-risk AI being something that can infer. To infer something means you're not certain about it, which is slightly dangerous. So, monitor it so that it doesn't, in a way, get too clever.
And then there's the requirement, just as with privacy laws, to log what you're doing and to retain that log for certain periods of time, anything up to 10 years, actually. If you think about how people have already been using AI, they haven't necessarily been recording or logging what they're doing. So, I think that will be one of the biggest changes and administrative burdens for people.
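As an illustration of that logging point, here is a hypothetical sketch of the kind of structured AI-usage record an employer might keep. The field names, file format, and function shown are illustrative assumptions only; the Act does not prescribe any particular format.

```python
# A hypothetical AI-usage log: one JSON line per AI-assisted decision,
# capturing the tool, purpose, human reviewer (oversight), and outcome.
import json
from datetime import datetime, timezone

def log_ai_decision(log_path: str, tool: str, purpose: str,
                    human_reviewer: str, outcome: str) -> None:
    """Append one AI-assisted decision record as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                      # which AI system was used
        "purpose": purpose,                # e.g., CV screening
        "human_reviewer": human_reviewer,  # evidence of human oversight
        "outcome": outcome,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: logging a recruitment-screening decision reviewed by a human
log_ai_decision("ai_usage_log.jsonl", "cv-screening-model-v2",
                "shortlisting applicants", "j.smith@example.com",
                "advanced to interview")
```

Append-only records like these, retained on the same schedule as other compliance logs, are one straightforward way to evidence both the human-oversight and record-keeping points discussed above.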

Jenn Betts: Yeah. Just in our remaining time, I think it would be helpful to get some takeaways or best practices from each of you. Danielle, from a U.S. compliance perspective, any high-level best practices that you think our clients who are listening should be thinking about right now in 2025?

Danielle Ochs: Yeah, look at forming a governance team of some sort that is responsible for knowing what AI is being used in the workplace; deciding what sort of AI you want to be using and whether you should be using it; understanding how it's operating, whether it's operating as intended, and whether it's having impacts that you may want to avoid; and having a system for overseeing it, with human oversight and/or auditing, and authenticating the outcomes to make sure that you're getting the results that you want and avoiding pitfalls like bias or other outcomes that would not be helpful.
In addition to that, you want to make sure that you've got people focused on procuring the AI responsibly. That means really paying attention to the agreements that you have with providers, protecting yourself with the right indemnity provisions, but also making sure that you've fully vetted the tools in the procurement process so they do work the way that they're supposed to. And most vendors encourage that and will facilitate that.
Ultimately, we also think that whether regulation evolves in your jurisdiction or not, you want to self-regulate as an employer. So, we encourage policy development, which is part of what a governance team would do. But there are some basic components that we anticipate you'll want to think about. For example, notice: most of the rules and regulations out there seem to focus on giving folks notice that AI is being used.
Possibly consent, depending on the use. That would involve the possibility that people could opt out of its use, and it certainly would be required in situations where, for example, people need accommodations that preclude them from using AI. You want to think about the data you're collecting: what you're going to do with it, how you're going to use it, and, of course, how you're going to protect it (privacy, security, that sort of thing). And then you're going to want to think about auditing, validating results, and figuring out how to make sure things are operating the way that you want them to. Those should all be components in any internal policy development discussion.

Jenn Betts: I see a lot of parallels between the concepts that you were talking about, Danielle, and the concepts that you were talking about, Simon, from the EU AI Act. Any additional best practices or considerations that you think clients who are listening should be thinking about right now in 2025?

Simon McMenemy: Yeah, I wholly endorse what Danielle said there about self-regulating. Don't wait until August 2026, when most of the provisions of the EU AI Act come into force. Start now, because whatever you put in place is probably going to get you 90% of the way toward the requirements of the new European legislation, although obviously it will then need to comply fully with the Act. And it is a specific requirement, for those using high-risk AI, to have policies and procedures in place. And of course, as you say, those have got to interact with your data privacy policies and your data retention policies. But yeah, start now. Don't wait until August 2026.

Jenn Betts: Great. I want to thank you both for joining us for this podcast, and I want to encourage everybody to join us in the future.

Danielle Ochs: Thank you, Jenn.

Simon McMenemy: Thanks very much.

Announcer: Thank you for joining us on the Ogletree Deakins Podcast. You can subscribe to our podcast on Apple Podcasts or through your favorite podcast service. Please consider rating and reviewing so that we may continue to provide the content that covers your needs. And remember, the information in this podcast is for informational purposes only and is not to be construed as legal advice.

