ARTICLE
3 December 2024

The First Draft General-Purpose AI Code Of Practice: Transparency And Acceptable Use Policies

Logan & Partners

Contributor

Logan & Partners is a Swiss law firm focusing on technology law and delivering legal services as your in-house counsel would. We are experts in commercial contracts, technology transactions, intellectual property, data protection, corporate law and legal training. We are dedicated to understanding your industry and your business needs and to delivering clear and actionable legal services.

The EU AI Office has recently published the first draft of the General-Purpose AI Code of Practice, with the final version expected by 1 May 2025. This document provides essential guidance for businesses developing or deploying general-purpose AI models, helping them align with the requirements of the EU AI Act. The Code outlines measures to promote transparency, mitigate systemic risks, and ensure compliance with legal obligations, all with the aim of fostering a safe and responsible AI ecosystem.

Overview of the Measures

The General-Purpose AI Code of Practice takes a structured approach to compliance, offering practical measures in several key areas:

  • Transparency: Ensuring clear documentation of AI models and providing guidance for downstream users and supervisory authorities.
  • Copyright Compliance: Guaranteeing that AI training and outputs respect intellectual property rights.
  • Systemic Risk Management: Establishing frameworks for identifying, assessing, and mitigating risks.
  • Governance: Encouraging robust oversight and accountability mechanisms within organisations.

These measures are designed to address risks proportionately, providing flexibility for smaller businesses while maintaining stringent standards for high-risk applications.

Transparency in the General-Purpose AI Code of Practice

Transparency is a core principle of the Code, ensuring that providers of general-purpose AI models operate in a way that is clear, accountable, and aligned with the EU AI Act. Transparency measures aim to facilitate the responsible integration of AI models into downstream applications, mitigating potential risks by requiring clear documentation and communication.

Among these measures, the Acceptable Use Policy is particularly important. It establishes rules for how AI models can and cannot be used, guiding downstream providers and ensuring compliance with the requirements of the EU AI Act.

Acceptable Use Policy

The Acceptable Use Policy is a set of rules that define permissible and prohibited uses of AI models. It is an essential tool for businesses to prevent misuse and protect against liability. The policy ensures that AI providers comply with the EU AI Act while offering users clear boundaries for deploying AI models responsibly.

Key Components of Acceptable Use Policy

The Acceptable Use Policy is structured around the following core components:

  • Purpose and Scope: Include a clear purpose statement explaining why the policy exists and a description of its scope, which defines who the policy applies to and what resources it governs. This ensures that users understand the framework and applicability of the policy.
  • Primary Intended Uses and Users: Describe the primary uses for which the AI model is designed and the intended user groups. This ensures that the model is deployed only for purposes aligned with its design.
  • Acceptable Uses: Provide a list of the tasks and contexts where the AI can be used. This section focuses on practical applications, helping users understand how the model should be integrated into their systems.
  • Prohibited Uses: Outline activities that are forbidden to prevent harm or misuse. These may include generating harmful or illegal content, discriminatory practices, or high-risk applications without safeguards.
  • Security Measures: Specify the security protocols users must follow to ensure the AI model is used responsibly. This could include access control measures, encryption, or other safeguards to protect the model and its outputs.
  • Monitoring and Privacy: Explain the monitoring practices, detailing how the model's use is tracked to detect misuse. This section must also address the impact of such monitoring on user privacy.
  • Warning Processes and Non-Compliance: Outline how violations will be handled, including processes for issuing warnings and suspending or withdrawing access for non-compliance.
  • Termination Criteria: Define clear conditions under which user accounts or access to the AI model may be terminated. This section must reference applicable laws and regulations for enforcement.
  • Acknowledgement of Compliance: Users must formally acknowledge that they have read, understood, and agreed to the policy, ensuring accountability and fostering transparency throughout the AI value chain.

How we can help

Acceptable Use Policies provide a clear framework for guiding the ethical and lawful use of AI models. By addressing these components, businesses can ensure compliance, mitigate risks, and foster trust in their AI operations. Schedule a complimentary 20-minute call with our lawyers to discuss how these requirements impact your business and how we can support your compliance and operational goals.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
