The European Union's AI Act, the world's first comprehensive regulation of artificial intelligence, entered into force on 1 August 2024. Under the regulation, specific obligations for general-purpose AI models apply from 2 August 2025. The primary objective of the new rules is to ensure that companies operating in the European market use artificial intelligence in a safe, ethical, and human-centered manner.
The AI Act classifies AI systems into four risk categories:
- Unacceptable Risk: Applications in this category are strictly prohibited. Examples include social scoring, harmful manipulative AI systems, and real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions), all of which pose direct threats to human rights and freedoms.
- High Risk: Systems significantly impacting people's lives across critical sectors, such as education, healthcare, recruitment, and border security, fall under this category. High-risk AI systems must comply with stringent rules, including detailed risk assessments, data quality standards, activity logging, human oversight, and robust security measures.
- Transparency Risk: This category covers AI applications where disclosure of AI usage is mandatory. In particular, users interacting with AI-driven customer service or chatbot systems must be explicitly informed that they are dealing with AI, and AI-generated content must be clearly labeled as such.
- Minimal or No Risk: AI applications considered low-risk, such as spam filters or gaming software, are exempt from these regulatory requirements.
The European Commission has also launched the AI Pact, a voluntary initiative under the AI Act designed to facilitate early compliance and encourage information sharing among companies.
New obligations applicable to general-purpose AI models (including large language models such as ChatGPT, Gemini, Claude, and Grok) emphasize transparency and copyright responsibilities. Additionally, specialized risk management processes are mandatory for models posing systemic risks.
Implementation and enforcement of the AI Act will be monitored by the European AI Office, with governance support provided by the AI Board, Scientific Panel, and Advisory Forum.
AI Act Implementation Timeline:
- 1 August 2024: AI Act enters into force.
- 2 February 2025: Prohibitions on unacceptable-risk practices and AI literacy obligations take effect.
- 2 August 2025: New rules for general-purpose AI models become applicable.
- 2 August 2026: Most remaining provisions, including the rules for high-risk AI systems, become applicable (with an extended transition period for certain high-risk systems embedded in regulated products).
For Turkish companies, compliance with the AI Act is essential for continued access to the European market, and those that adapt swiftly will gain a competitive advantage over slower-moving rivals.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.