Olivia Mullooly, Partner in the Technology and Innovation Group, shares key takeaways on the EU AI Act and responsible AI use.
Olivia explains that obligations for general-purpose AI systems take effect this August, and that a company's role can shift from deployer to provider if it makes significant changes to a system. She stresses that governance is key, even beyond what the Act regulates.
Video Transcription
Olivia Mullooly
My name is Olivia Mullooly, and I'm a partner in the Technology and Innovation Group at Arthur Cox. One of the biggest misconceptions about the EU AI Act, as I see it, concerns the extent to which the Act applies to the AI systems companies use in their businesses: companies often believe it applies to all of them. In fact, the AI Act governs AI systems on a risk-based approach, from minimal-risk systems all the way up to prohibited AI practices. The most commonly used systems, though, are limited-risk AI systems, and in particular general-purpose AI systems. The general-purpose AI obligations in the EU AI Act become applicable from 2 August this year and impose a range of requirements that companies should take into account. These vary depending on whether you are a provider of an AI system or a deployer of one, and you can move from being a deployer to a provider if you make significant changes to the system. That's a very important distinction for companies to be aware of.
Then, from August 2026, the obligations relating to the use of high-risk AI systems will become applicable. These can apply in an employment monitoring context, an education context, and other contexts that companies may find themselves in, and there are graduated compliance timelines depending on when you put the systems into use in your business. However, even if you don't have an AI system that is subject to the EU AI Act, that's not to say you shouldn't have governance in place around your use of AI systems: roles and responsibilities that oversee the extent to which the systems are used, the purposes they're used for, whether they are suitable for that use, and any biases or issues that might arise from their use. Really, what we're doing is advising companies on their obligations under the EU AI Act on the one hand, but also on being responsible and putting a governance and compliance programme in place wherever they use AI systems across their businesses. For more information, please visit arthurcox.com/technologyandinnovation.
Originally published 28 Jul 2025
This article contains a general summary of developments and is not a complete or definitive statement of the law. Specific legal advice should be obtained where appropriate.