ARTICLE
23 September 2024

UK Signs First Legally-Binding International Treaty Governing The Safe Use Of AI: Our Analysis

Bird & Bird

Contributor

On Thursday 5 September 2024, the UK signed a new legally-binding international treaty governing the safe use of AI. Other signatories included the US and EU.

Officially known as the "Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law" (the "AI Convention"), the AI Convention aims to "ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law" (Article 1).

Its main focus is to protect human rights, democracy and the rule of law from the risks posed by AI by providing an international legal standard of obligations and principles to be followed by states across the world. For example, to protect democracy, Article 5 requires signatory states to adopt measures to ensure that AI systems are not used to undermine democratic institutions and processes. Article 4 requires signatory states to ensure that AI systems are used in accordance with that state's international and domestic human rights law. Other provisions seek to ensure that the use of AI respects human dignity, equality and privacy.

The AI Convention text was adopted by the Council of Europe on 17 May 2024, having taken two years to draft. It was written by the 46 Council of Europe member states (of which the UK is one), the EU, and 11 non-member states including Australia, Japan and the US.

Each signatory state is expected to adopt or maintain measures to give effect to the requirements in the AI Convention.

The treaty forms part of the new regulations, pledges and agreements being developed by governments across the world to regulate the risks arising from rapid advancements in AI. It follows in the footsteps of Biden's Executive Order on AI (October 2023), the Bletchley Declaration (November 2023), the US and UK safety institute collaboration (April 2024) and the King's speech announcement that the UK government plans to introduce AI legislation on the most powerful AI models (July 2024). Many of the principles in the AI Convention chime with concepts in the EU AI Act which came into force on 1 August 2024, such as transparency for AI-generated content, oversight requirements and accountability. Whilst China is not a signatory to the AI Convention, it has introduced its own AI-related measures and also signed the Bletchley Declaration.

Our Analysis

The signing of the AI Convention has been hailed by human rights supporters and proponents of the global governance of AI as a landmark achievement.

Whilst the adoption of the AI Convention is a welcome development, there are some issues that those in the 'AI lifecycle' should be aware of:

1. Scope

The principles and obligations in the AI Convention apply both to public authorities (including private actors acting on their behalf) and to private actors.

However, under Article 3 (scope) signatory states have a choice as to how the AI Convention applies to private actors. They must choose whether to apply the principles and obligations directly to the activities of private actors or whether to take "other appropriate measures".

The provision is likely drafted this way to cater for differences between the signatories' legal systems. However, it could lead to discrepancies in how the AI Convention is applied to private actors in different signatory states, which could cause confusion for private companies operating on a global scale. "Public authority" is also left undefined, probably for similar reasons, and this could cause issues when applying the principles of the AI Convention in practice.

2. Broad principles rather than specific requirements

The AI Convention sets out broad principles rather than specific requirements, allowing signatory states to interpret it in accordance with their own legal, political and social traditions (see Articles 7-13). However, this means that national legislation transposing the AI Convention is likely to vary widely.

3. Vague compliance structure

The AI Convention contains only a vague compliance mechanism. Compliance reporting is required (Article 23), but there are no strict enforcement criteria, so the effectiveness and impact of the AI Convention could be limited.

4. Remedies

The AI Convention does require signatories to provide remedies for breaches of human rights in relation to its obligations and principles, and to ensure that a body is in place for persons to lodge complaints. However, no specific remedies (such as fines) are prescribed, and any remedies legislated for at national level could vary widely between jurisdictions.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
