One of the most significant (and much-discussed) technological developments of the past 18 months has been the rise of Artificial Intelligence (AI) tools, such as ChatGPT. As with many technological advances, there is an initial flurry of activity and excitement as people and industry discover what the technology can do and be used for, followed by discussion of what it should (and, equally importantly, should not) be used for.

As governments and regulators become ever more alive to – and seek to answer – these questions, we take a look at the current and future statutory and regulatory landscape in the UK, the EU and the US.

UK

The UK is taking steps to develop a comprehensive approach to AI regulation, with a focus on promoting innovation while addressing potential risks.

Notable initiatives include:

1. AI Council and AI Roadmap: The UK established the AI Council, an independent government advisory body, to promote the adoption and ethical development of AI. The Council published an AI Roadmap in 2021, outlining the country's strategy for AI development, including regulatory considerations.

2. Centre for Data Ethics and Innovation (CDEI): The CDEI is an independent advisory body that provides recommendations on the ethical use of AI and data. They have published reports on various AI-related topics and provide guidance to policymakers and industry.

3. Data Protection: The General Data Protection Regulation (GDPR) and the UK's Data Protection Act 2018 govern the use of personal data, including data used in AI systems. These regulations require transparency, consent, and data minimisation when processing personal data.

4. Existing Remedies and Case Law: Extensive remedies already exist in contract, tort, equity, intellectual property, equality and anti-discrimination legislation, human rights and data protection law, which can provide adequate legal protections. Different business, social media, health and other sectors will present different risks and remedy issues. Cases are being decided in various jurisdictions on the issues raised by generative AI and the consequences for existing IP, as well as on who will own the IP in newly developed systems and products. For example, in 2021 the English Court of Appeal held (and the UK Supreme Court subsequently agreed) that a machine was not and could not be an 'inventor', as defined under the Patents Act 1977, for the purposes of registering a patent.

5. Organisation for Economic Cooperation and Development (OECD): The OECD set up the Global Partnership on AI (GPAI) in 2020, which aims to ensure that AI is used responsibly, respecting human rights and democratic values.

6. Pro-Innovation Strategy: Studies and discussion documents from the UK Government examining a pro-innovation AI strategy, including the March 2023 papers issued by the Government's Chief Scientific Adviser, followed by the White Paper consultation.

Whilst the use of AI is not novel in the legal industry (all litigators are now very familiar with AI tools that assist in disclosure and document review exercises), recent developments have created possible challenges in the legal sector, specifically in relation to the discriminatory impact that AI could have, potentially breaching the Equality Act 2010 and/or the Human Rights Act 1998. A disadvantaged candidate may now be able to argue that an employer which bases its recruitment, retention, promotion or appraisal processes/decisions on algorithm-based decision-making has unlawfully directly or indirectly discriminated against them.

The employer may be obliged to prove that it did not discriminate or that the indirect discriminatory impact of the algorithm is justified. In reality, the employer may lack an understanding of how the algorithm operates. We are likely to see more cases of this nature as AI becomes further integrated into the employment sector.

EU

The EU has been actively working on AI regulations to address ethical concerns, promote trust, and ensure fundamental rights are protected. In April 2021, the European Commission proposed the Artificial Intelligence Act, which is expected to shape future AI regulations within the EU. Key elements of the proposed regulation include:

1. Risk-Based Approach: The regulation categorises AI systems into four levels of risk (unacceptable, high, limited, and minimal) and imposes stricter requirements on higher-risk systems.

2. Prohibited Practices: The proposed regulation prohibits AI systems that pose significant risks, such as those used for social scoring, manipulative techniques, and real-time facial recognition in public spaces (with some exceptions).

3. Transparency and Accountability: It requires high-risk AI systems to provide transparency on their functionality, explainability, and data used. Developers must maintain documentation, record-keeping, and carry out risk assessments.

4. Proposed Bans: The EU's AI Act proposes a full ban on the use of AI in the domains of biometric surveillance, emotion recognition and predictive policing, which is a positive step in constraining the rapidly expanding use of AI. This may indicate an emerging global standard, and one which the UK may decide to follow.

5. Transatlantic Cooperation: The US and EU are continuing cooperation talks on AI regulation, which may signal the emergence of global AI regulatory standards ahead of any formulation by the UK.

There have been a number of developments suggesting that EU regulators are becoming more proactive with current AI systems. For example, the Irish Data Protection Commission delayed the rollout of Google's 'Bard' chatbot as a result of a lack of privacy risk assessments.

Additionally, the Italian Data Protection Authority (the Garante) decided to temporarily prohibit ChatGPT from processing Italian citizens' personal data after finding a material risk that ChatGPT would breach the GDPR on a number of grounds. Although the appropriate regulatory changes have since been made and ChatGPT has resumed normal service in Italy, this shows that the EU is taking proactive measures to uphold transparency and privacy in the AI sector.

As a result of the risks associated with chatbots, the European Consumer Organisation called for EU and national authorities to launch an investigation into ChatGPT and similar chatbots. Nevertheless, if the emerging AI Act is successfully implemented, it could address these concerns.

USA

In the USA, AI regulations are primarily sector-specific, focusing on areas such as data privacy, algorithmic fairness, and safety. However, there is currently no comprehensive federal legislation specifically addressing AI. Instead, regulatory efforts are fragmented across different agencies and states. Some notable initiatives include:

1. Federal Trade Commission (FTC): The FTC enforces laws relating to consumer protection and has issued guidelines on AI use, emphasising transparency, fairness, and accountability in decision-making processes.

2. Sector-Specific Regulations: Various sectors, such as healthcare, finance, and transportation, have specific regulations that may indirectly impact the use of AI systems within those industries.

3. State-Level Regulations: Some states have implemented laws relating to AI, such as data breach notification requirements and limitations on the use of facial recognition technology.

4. The US Copyright Office (USCO): The USCO issued a Statement of Policy outlining the difficulty of accepting registration of AI-generated works and highlighting the importance of disclosing that AI has been involved in creating a work. However, its position has been tested in recent US cases, such as Thaler v Perlmutter, which focused on whether AI-generated works should be entitled to copyright protection and where to draw the line on registrability where AI is involved. Similarly, Andersen v Stability AI Ltd is a class action alleging copyright infringement by an AI model that copied directly from artists' works without consent.

California and a number of other states are progressing with state-level AI regulations. These regulations and new developments surrounding AI encouraged the White House Office of Science and Technology Policy (OSTP) to publish a 'Blueprint for an AI Bill of Rights' in October 2022. This acts as a non-binding guide, highlighting for citizens the risks associated with new technologies.

Future Outlook

The legal and regulatory landscape for AI is rapidly evolving, and it is likely that further regulations will be developed and refined in the UK, EU, and USA.

Current laws governing data (the GDPR) and intellectual property (principally copyright), together with evolving negligence (tort), contract and equity claims and remedies, will be essential to properly regulating the different areas that AI will impact now and in the future.

The proposed EU Artificial Intelligence Act, if enacted, could significantly shape AI regulations within the EU and potentially influence global AI governance.

We await the outcome of the UK Government's deliberations. For the present, it has stated that five key principles underlie its AI regulatory framework:

1. Safety, security and robustness.

2. Appropriate transparency and explainability.

3. Fairness.

4. Accountability and governance.

5. Contestability and redress.

W Legal's Digital Assets team responded to the Government's 2023 White Paper on a Pro-Innovation Approach to AI Regulation, conveying our thoughts and ideas on its proposals. One crucial issue we addressed was disclosure. We emphasised that disclosure alone is just one part of the regulatory process, and that complete transparency is essential. We urged the Government to consider the need for organisations to clearly disclose the type of AI processing taking place and whether decisions are made solely by automated means. Addressing these considerations is essential to effective regulation in the ever-evolving landscape of AI. We look forward to seeing how the Government implements these proposals, if at all.

It is important to stay updated on the latest developments, as governments, international organisations and industry stakeholders continue to explore and establish legal frameworks addressing the ethical, privacy, safety and accountability concerns associated with AI. Please contact the Digital Assets Team at W Legal to follow up in this and related areas; we will be happy to discuss, review and offer training in these fast-developing regulatory and compliance fields.

W Legal's Digital Assets team is here to help guide clients through the impact of AI on specific sectors and in specific jurisdictions, as well as across borders, in areas such as employment and the financial and commercial fields, and on the interaction with other new IT areas: digital assets (including so-called 'crypto-assets'), cyber-security, blockchain and distributed ledger technology, AML and smart contracting.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.