The AI Agenda, our flagship AI conference, brought together Lewis Silkin's Co-Heads of Data, Privacy & Cyber, Alexander Milner-Smith and Bryony Long, along with Sophia Ignatidou, Group Manager for AI Policy at the Information Commissioner's Office, for an interesting discussion on global AI regulation and AI governance strategies.
With the landscape of AI regulation rapidly evolving around the world, we are starting to see greater divergence in approach. The EU has taken a statutory line with the EU AI Act, while in the UK a sectoral, guidance-based approach is favoured. Although AI legislation for the most advanced AI models was included in the King's Speech in July 2024, no Bill has materialised and the UK has shifted its focus to the AI Opportunities Action Plan, which aims to maintain the UK's position in the global AI market by building infrastructure, upskilling the workforce and investing in AI. Meanwhile, in the US, the new administration has signalled a change in approach with an Executive Order in January 2025 emphasising innovation and competition. This was followed by two memos in April 2025 aimed at streamlining AI use and procurement processes in the federal government, and then by the Executive Order on AI Literacy to "cultivate the skills and understanding necessary to use and create the next generation of AI technology".
The panel discussed the latest proposal by the Trump Administration: a ten-year moratorium on state-level AI regulation buried in Sec. 43201(c) of the One Big Beautiful Bill Act (OBBBA). If this becomes law, all US states would be blocked from enforcing laws regulating AI and automated decision systems for the next decade. What would this mean for New York City's local law on automated decision-making in employment, Illinois' Artificial Intelligence Video Interview Act or even Colorado's AI Act? Will this be a monumental moment triggering a substantive shift in global AI regulation, or will the proposals fail or be removed from the final legislation? This is one to watch!
This pro-business, deregulatory stance in the US has created a ripple effect – innovation, competition and economic rewards are now the drivers in the US, and other countries seem to be following suit. This divergence in regulatory approaches presents challenges for multinational organisations.
That said, the panel went on to discuss common themes in AI regulation around the world, namely:
- Consultation about the use of AI – many jurisdictions, e.g. Germany, have a requirement to consult with employees/unions about the use of workplace AI systems.
- Accountability and governance – what are the specific roles, frameworks, policies or reporting structures required to manage the use of AI? What existing structures can you use/adapt/expand?
- Impact assessments – what are the requirements to undertake impact assessments prior to use? Multinational businesses will have a lot to build on through their data protection frameworks and risk assurance programmes.
- Auditing and monitoring – what steps are required to ensure that the AI system is safe to use? This is jurisdiction-agnostic – you need to ensure any hallucinations are dealt with and that there is no model drift, although it was acknowledged that it can be difficult to get businesses to focus on this essential continual loop.
- Transparency and explainability – what level of openness is required about the use of AI and/or how the system's decisions are reached? Following the ICO's Gen AI consultations and outcomes report, this is one of the areas that the UK regulator knows is challenging and one where companies often fail to get the balance right. Allowing data subjects to exercise their rights is essential, and the ICO, in its bid to support innovators, is open to collaborating with companies to resolve such issues.
- Human oversight and intervention – what human involvement is required? We've had Article 22 of the GDPR/UK GDPR for a number of years, and there are other laws around the world on automated decision-making – what lessons can we learn from existing governance frameworks that have already had to address such issues?
- Contestability – are measures required to enable an individual affected by an AI decision to challenge it effectively?
In the UK, we are very fortunate in that the ICO has been proactive in the AI regulatory space: providing key practical resources for compliance and access to a regulatory sandbox, collaborating with the Digital Regulation Cooperation Forum through the AI and Digital Hub and other initiatives, and undertaking voluntary audits where a collaborative approach is taken so companies can get it right. The ICO wants to build relationships with industry and encourage the right behaviour. Of course, enforcement is always an option should it be needed, but meaningful engagement, good governance and genuine attempts to do the right thing are recognised – e.g. your Data Protection Impact Assessment (DPIA) should be a robust, well-considered assessment that genuinely identifies the risks and the mitigations put in place to manage them.
The ICO is also working on an AI and Biometrics Strategy – more on that over the summer – and an AI and automated decision-making statutory code of practice, which is a longer-term project but one with opportunities for businesses to engage and help shape the regulator's thinking in this area.
Looking to the future, the panel acknowledged that it is challenging to future-proof the AI governance frameworks companies are putting in place now, but agreed that an anticipatory approach is the way forward: getting the right people involved (and supporting them through upskilling) as part of a multi-disciplinary team to address AI governance, putting robust internal AI policies in place for employees and, perhaps most important of all, agreeing on a way forward! Don't over-simplify, but don't slow things down by involving the whole organisation either. Make sure you empower your people – invest in AI literacy – it will help your employees to feel interested, excited and able to spot opportunities, rather than uncertain or unwilling to engage. Taking this sort of approach also has the added benefit of building a responsible AI culture.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.