The last couple of weeks have seen several developments in approaches to AI regulation in the US and EU. Despite debate around the so-called 'Trump effect' and calls for the enforcement of EU AI regulation to be delayed, the European Commission has reaffirmed its existing regulatory timeline, while the US has removed a proposed moratorium on state-level AI regulation.
What's happening in the US?
On 1 July 2025, the US Senate voted (by a 99-1 majority) to strip a controversial provision from the 'One Big Beautiful Bill Act' that would have blocked state-level AI regulation via a 10-year moratorium. The provision had the backing of major tech companies, who argued that a fragmented regulatory landscape would hinder AI innovation.
The moratorium would have simplified compliance for businesses by creating a unified federal regulatory landscape and providing greater legal certainty, especially for international organisations navigating differing AI regulation across jurisdictions.
However, according to those in the US Senate who opposed the provision, it would also likely have resulted in weaker protections in several areas affected by AI, including consumer rights, privacy, and children's safety.
Unsurprisingly, this development has invited criticism similar to that often directed at EU AI regulation: namely, that the resulting patchwork of state-level rules may hinder AI innovation in the US.
What about the EU?
Meanwhile, the European Commission has maintained its approach, dismissing recent pressure from large tech companies for delays to the EU AI Act implementation timelines. Thomas Regnier, a Commission spokesperson, recently stated that "There is no stop the clock. There is no grace period. There is no pause." The EU AI Act's implementation timeline therefore remains unchanged, with the general-purpose AI model obligations and the provisions on penalties due to apply from 2 August 2025.
This development suggests that the 'Brussels effect' continues to exert influence, notwithstanding recent geopolitical shifts and clear industry resistance. While some may have anticipated that evolving political dynamics (i.e., the 'Trump effect') might weaken the EU's regulatory leadership in this space, the emerging position indicates a convergence toward more interventionist approaches to AI regulation.
And in the UK?
At present, the UK remains quiet on this front, with no significant legislative developments. After the protracted 'Ping-Pong' stage of the Data (Use and Access) Bill, during which the AI and copyright debate took centre stage, and with the compromises reached to see that Bill progress to Royal Assent, a new, more comprehensive AI regulation Bill is now expected.
As for timing, the Secretary of State for Science, Innovation and Technology, Peter Kyle, has stated that the government is postponing the introduction of a comprehensive AI regulation Bill until the next parliamentary session. At the time of writing, the date for the King's Speech has not been set, but it is expected in May 2026, so we are unlikely to see any UK AI legislation any time soon.
The European Commission's General-Purpose AI Code of Practice
The European Commission has also published the final version of the General-Purpose AI Code of Practice, intended to support compliance with the EU AI Act's provisions on general-purpose AI models (which take effect from 2 August 2025). While compliance with the Code is voluntary, the Commission is encouraging providers of general-purpose AI models in the EU to follow it, and has stated that providers who sign up will be able to demonstrate compliance with the relevant AI Act obligations by adhering to it. Signatories should therefore benefit from a reduced administrative burden and increased legal certainty compared with providers that demonstrate compliance in other ways.
The Code is structured around three themes: transparency, copyright, and safety and security. It includes a 'model documentation form' to assist with transparency obligations, practical guidance on implementing copyright compliance policies and, for some providers, risk management measures for models deemed to pose systemic risk (adding to what is already set out in the EU AI Act).
The Code does not address all open questions. It does not, for example, include model contractual clauses, clarify downstream liability, or mandate licensing of training data. These omissions may make it more difficult for smaller developers and open-source communities to use the Code as a sufficient compliance tool in isolation.
Further Commission guidance is expected in the coming weeks, particularly on scope and applicability, and will be key in determining whether the Code is a genuine simplification device or merely an early blueprint. For now, providers of general-purpose AI would be well advised to treat the Code as a minimum reference point and to begin updating their documentation, copyright clearance strategies, and internal governance accordingly.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.