This is part four of our examination of the European Union's new artificial intelligence law (the "EU AI Act"). In part one, we introduced the scope of the EU AI Act and discussed what types of AI systems are outright banned. In part two, we discussed high risk AI systems. In part three, we looked at the requirements for general-purpose AI models. In this article, we examine low risk AI systems and overall enforcement.
Low Risk AI Systems
A low risk AI system is simply any AI system that is neither prohibited nor high risk. In contrast to the extensive requirements imposed on high risk AI systems, the principal obligation for low risk AI systems under the EU AI Act is transparency. The Act imposes transparency requirements in a number of specific situations.
- If an AI system interacts directly with a person, the provider must ensure the person is informed that the interaction is with an AI system.
- Deployers of emotion recognition or biometric categorization systems must inform people who are exposed to the system.
- Providers of generative AI systems, including general-purpose AI systems, that create synthetic audio, image, video or text content must mark the output in a machine-readable format so that it is detectable as artificially generated or manipulated content.
- Deployers of AI systems that create or edit image, audio or video deep fakes must disclose that the content has been artificially generated or manipulated. "Deep fake" is defined by the EU AI Act to mean AI-generated or manipulated image, audio or video content that resembles an existing person, object, place, entity or event and would falsely appear to be authentic or truthful.
- Deployers of AI systems that generate or edit text which is published for the purpose of informing the public on matters of public interest must disclose that the content has been artificially generated or manipulated, unless the content has gone through human review and is subject to a human-controlled editorial process for publication.
Enforcement
The European Commission's AI Office will be the regulator of AI systems based on a general-purpose AI model where the model and system are made available by the same provider. National regulators in the EU will be responsible for the supervision of all other AI systems.
Individuals can file a complaint with national authorities, but the EU AI Act does not provide for individual damages.
Violations of the EU AI Act are subject to stiff fines. Non-compliance with the prohibited AI systems restrictions under the EU AI Act is subject to fines of up to the greater of 7% of annual global revenue or 35 million euros. Most other violations are subject to fines of up to the greater of 3% of annual global revenue or 15 million euros. Providing incorrect or misleading information to EU authorities is subject to fines of up to the greater of 1% of annual global revenue or 7.5 million euros.
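The "greater of" structure of these ceilings means the percentage figure governs for large companies while the flat amount governs for smaller ones. A minimal sketch illustrating the arithmetic (the tier percentages and flat amounts come from the Act; the function name and revenue figures are purely illustrative):

```python
def fine_ceiling(annual_global_revenue_eur: float, pct: float, flat_eur: float) -> float:
    """Maximum fine under a given EU AI Act tier: the higher of a
    percentage of annual global revenue or a fixed euro amount."""
    return max(annual_global_revenue_eur * pct, flat_eur)

# Prohibited-practice tier: greater of 7% of revenue or EUR 35 million.
# For a company with EUR 2 billion in annual global revenue, the
# percentage dominates: 7% of 2 billion is EUR 140 million.
print(fine_ceiling(2_000_000_000, 0.07, 35_000_000))

# For a company with EUR 100 million in revenue, 7% is only EUR 7 million,
# so the flat EUR 35 million ceiling applies instead.
print(fine_ceiling(100_000_000, 0.07, 35_000_000))
```

The same function applies to the other tiers by substituting 3% / EUR 15 million or 1% / EUR 7.5 million.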
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.