Artificial intelligence (AI) has captured widespread public attention this year following ChatGPT's launch in late 2022, yet governments worldwide have pondered AI regulation for several years. AI has powered many now-common technologies such as search engines, navigation systems, personalized content recommendations, and a host of business applications, and it promises exciting new developments in perhaps every sphere of human activity. The more popular AI has become, however, the more apparent it is that AI presents serious risks, from inaccuracy and discrimination to 'deepfakes,' disinformation, and the undermining of intellectual property (IP) rights, among many others.

On October 30, 2023, following months of discussions, the G7 nations (Canada, France, Germany, Italy, Japan, the UK, and the US), as well as the EU, agreed to the voluntary Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (the Code of Conduct). Peter J. Schildkraut and Paula Millar, from Arnold & Porter, delve into the Code of Conduct and explore the different approaches towards stricter AI regulation in the EU and US.

The Code of Conduct comprises a set of relatively high-level principles rather than concrete requirements and only applies to developers of advanced AI systems – foundation models and generative AI.

It also does not apply to developers of other AI systems or to businesses deploying AI systems, advanced or otherwise. Although G7 leaders are generally aligned on the need to mitigate AI's risks and on the desirability of regulatory regimes being interoperable across jurisdictions – such that compliance with one equates to compliance with all – the relatively restrained ambitions of the Code of Conduct should give global businesses little optimism that regulatory interoperability will emerge anytime soon.

The current landscape for privacy laws, particularly within the US but elsewhere as well, illustrates how complicated the regulatory framework for AI might become for multinational companies to navigate. The rapid development of privacy statutes and regulations in the past decade has led to inconsistent requirements across jurisdictions, creating potential confusion among both businesses and consumers, and has diverted resources to compliance that companies could have invested in new products and services. Some businesses bite the bullet and expend time and resources to create policies and procedures to manage divergent requirements across borders. Others may self-select out of operating in certain jurisdictions to avoid the added complexity. To keep this dynamic from playing out in AI regulation too would require greater willingness from governments to compromise on their regulatory preferences than has emerged to date.

EU and US approaches

The EU and the US currently approach AI governance differently. Animated by the precautionary principle – requiring that new technologies be proven safe before they reach the market – the EU is close to adopting its Artificial Intelligence Act (AI Act), the first broad AI regulation apart from China's. Generally more reluctant to intervene absent demonstrated harms, US leaders have not kept pace with their European counterparts and seem headed towards more targeted rules. Despite consensus on the urgency of global AI rules, the EU and US approaches are difficult to harmonize beyond high-level principles.

EU approach

The EU approach to AI governance centers on the AI Act while also building on various recent laws, most notably the General Data Protection Regulation (GDPR). The European Commission proposed the AI Act in April 2021. The Commission and the EU's co-legislators, the Council of the European Union and the European Parliament, reached a political compromise among their differing versions on December 8, 2023, although the final text continues to be negotiated ahead of votes on adoption in the first part of 2024.

The AI Act will regulate AI systems according to risk level and will impose highly prescriptive rules on systems used in cases considered high-risk. Providers and deployers of high-risk systems will face numerous obligations concerning risk-management systems; the datasets used for training, validation, and testing; recordkeeping; and technical documentation. The AI Act will also mandate that high-risk systems undergo assessments of their conformity with the Act before providers may place them on the EU market. The AI Act will charge the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) with establishing harmonized EU standards consistent with these requirements. As adherence to the CEN/CENELEC standards will be a safe harbor for providers and deployers of high-risk systems, these standards should offer a pathway for companies to reduce their compliance burden.

What the AI Act will demand of general-purpose AI systems, foundation models, and generative AI systems was one of the last issues on which the EU institutions reached a political compromise. In the end, they settled on imposing additional obligations on providers of foundation models, with tiers of requirements depending on various criteria. Providers of all foundation models will have to provide detailed summaries of their training data, among other obligations. Providers of proprietary systems will face heavier burdens than providers of free and open-source models with publicly available parameters.

Ironically, the very prescriptiveness of the AI Act may cause companies to treat compliance as a check-the-box exercise instead of devoting their resources to managing risks to mitigate the harms the regulation is supposed to prevent.

The AI Act will have broad extraterritorial scope, sweeping into its purview providers and deployers of AI systems regardless of whether they are established in the EU. As a result, it is prudent for businesses serving the EU market – whether selling AI-enabled products or services or deploying AI systems in their operations – to prepare for compliance.

While the AI Act is expected to become law soon, its provisions will not take effect until between six months and two years after it enters into force, depending on the provision.

In the meantime, European data protection authorities have been using the GDPR to regulate AI within their jurisdictions. For example, in March 2023, the Italian data protection authority (Garante) banned one leading generative AI service after determining that it did not adhere to the GDPR. The ban was lifted several weeks later after satisfaction of the agency's immediate conditions, but the Garante's investigation continued. Subsequently, data protection authorities in France, Germany, the Netherlands, Poland, and Spain acknowledged opening their own investigations into or receiving complaints about that service's compliance with the GDPR, and the European Data Protection Board (EDPB) launched a task force to investigate the service as well. Additionally, another European body – the European Data Protection Supervisor (EDPS) – has a separate task force to ensure generative AI tools adhere to the GDPR's strictures.

More broadly, the GDPR imposes a number of obligations on companies developing or deploying AI systems – or using third-party AI-enabled services – trained or operating on personal data. EU data protection authorities have focused on AI over the past year, and companies should expect this scrutiny to continue ahead of the AI Act's enforceability.

US approach

At the federal level, the US has taken a risk-based, sector-specific approach to AI governance, relying on agencies to police AI under their existing legislative authorizations. The Federal Trade Commission (FTC), the Equal Employment Opportunity Commission (EEOC), the Consumer Financial Protection Bureau (CFPB), and the Department of Justice's Civil Rights Division, among others, have asserted their current authority to protect against various AI harms in the sectors they oversee. As FTC Chair Lina Khan has explained, 'Technological advances can deliver critical innovation – but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.'

In practice, this reliance on existing, sector-specific statutes diverges from the EU approach in two fundamental ways. First, regulation varies across sectors in the US, while the AI Act and the GDPR apply more uniformly. Of greater significance, however, is that the existing statutes generally tell businesses the outcomes to avoid with their AI systems (e.g., do not discriminate unlawfully in employment or credit); unlike the AI Act in particular, these US federal statutes from the pre-AI era mostly do not detail what companies must and must not do to prevent harmful outcomes.

Notwithstanding this sectoral approach, Washington has taken some steps toward comprehensive AI regulation.

In October 2022, the Biden administration unveiled its Blueprint for an AI Bill of Rights. The AI Bill of Rights set forth five principles for developing and deploying AI systems while protecting individual rights and society. Although these principles are guiding various agencies in developing new, enforceable rules under their existing statutory authorities, the AI Bill of Rights bestows no additional enforcement power on any agency.

In another such step, the US National Institute of Standards and Technology (NIST) published the first version of its Artificial Intelligence Risk Management Framework (AI RMF 1.0), as well as the companion AI RMF Playbook, in the first quarter of 2023. NIST, which has no regulatory authority, designed AI RMF 1.0 to '[b]e law- and regulation-agnostic,' adaptable to any set of substantive requirements. Moreover, use of AI RMF 1.0 is voluntary, although there have been calls for agencies with regulatory powers to mandate its use as they adopt new rules.
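
AI RMF 1.0 organizes risk-management activities around four core functions: Govern, Map, Measure, and Manage. Purely as an illustration of how an organization might operationalize that structure, the minimal sketch below records hypothetical risks for a hypothetical system against those functions; the class names, fields, and example entries are our own assumptions and are not part of NIST's framework.

```python
from dataclasses import dataclass, field
from enum import Enum


class RMFFunction(Enum):
    """The four core functions of NIST AI RMF 1.0."""
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"


@dataclass
class RiskEntry:
    """One tracked AI risk and its treatment (illustrative only)."""
    system: str                 # internal name of the AI system
    description: str            # the risk being tracked
    function: RMFFunction       # RMF function the activity falls under
    mitigations: list[str] = field(default_factory=list)


# Hypothetical register entries for a hypothetical resume-screening model.
register = [
    RiskEntry("resume-screener", "Disparate impact on protected groups",
              RMFFunction.MEASURE, ["Quarterly bias testing of screening outcomes"]),
    RiskEntry("resume-screener", "Unclear accountability for model updates",
              RMFFunction.GOVERN, ["Named model owner and documented change log"]),
]

for entry in register:
    print(f"[{entry.function.value.upper()}] {entry.system}: {entry.description}")
```

A register like this is only scaffolding; the substantive work lies in the governance, testing, and documentation practices that sit behind each entry.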

In July 2023, the White House announced that seven leading AI developers had pledged to comply with safety commitments to prevent harm from their systems. Eight additional tech companies signed the commitments two months later. While framed as voluntary, the commitments, having been made publicly, potentially became enforceable by the FTC under its general authority to punish unfair or deceptive trade practices. In somewhat analogous situations, the FTC has sanctioned companies that failed to adhere to their voluntarily adopted privacy policies.

Building upon the AI Bill of Rights and the voluntary safety commitments, President Biden signed an executive order on safe, secure, and trustworthy AI on October 30, 2023 (Executive Order), the same day as the adoption of the G7 Code of Conduct. The Executive Order seemingly addresses the full spectrum of challenges wrought by AI – from national security to equity and civil rights, privacy protection to synthetic content, competition to criminal justice, and healthcare to human capital needs.

Asserting presidential powers under the Defense Production Act, the Executive Order imposes reporting requirements on developers of cutting-edge, 'dual-use' AI foundation models 'that pose a serious risk to [national] security, national economic security, national public health or safety, or any combination of those matters.' Those developers will have to notify the Government when they are training their models. They will also have to provide the Government with the results of their safety testing – to be performed under standards that NIST will develop – before they release their models publicly.

Apart from these narrowly applicable mandates, the Executive Order requires or urges various agencies to develop rules or guidance to combat particular AI harms. Some of these efforts are ongoing, like the FTC's 'commercial surveillance' proceeding (which broadly addresses privacy, cybersecurity, and automated decision-making) and the banking regulators' proposed regulation of mortgage lenders' automated valuation models. Others will be kick-started by the Executive Order.

Companies operating in the US should be gearing up their AI risk-management efforts in anticipation of new rules in a variety of sectors. On the whole, though, any new requirements will likely be far less prescriptive than the EU AI Act. Nor should companies expect Congress to adopt statutes with significant numbers of detailed mandates. AI regulation has been among Senate Majority Leader Chuck Schumer's top legislative priorities for this session of Congress, and he is pushing his colleagues rapidly up a steep learning curve. The end result, however, will likely give businesses much greater flexibility in developing and deploying AI systems than they will have in the EU. In discussing AI regulation in September 2023, Senator Schumer cautioned that '[i]f you go too fast, you can ruin things.' The EU, he added, went 'too fast.' In one leading proposal, Senators John Thune and Amy Klobuchar have released a bipartisan bill they advertise as having a 'light touch' to support innovation while still increasing the transparency, accountability, and security of higher-risk AI applications.

Hiroshima Process and Code of Conduct

Against this backdrop of differing regulatory philosophies, G7 leaders agreed in May 2023 to create a multilateral ministerial forum (the Hiroshima AI Process) for discussing and developing international guidelines covering generative AI. Later that month, at the close of the fourth EU-US Trade and Technology Council (TTC) ministerial meeting, US and EU officials announced plans to draft a code of conduct to propose to the G7 for potential adoption. Reportedly, the EU sought to have the code of conduct track the AI Act's requirements, which the US resisted.

Ultimately, what the G7 leaders agreed to on October 30, 2023, were the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems and the Code of Conduct, based on those principles.

The Code of Conduct, which only applies to the most advanced AI systems, comprises a non-exhaustive list of 11 encouraged actions for developers of these systems:

  • take appropriate measures throughout the development of advanced AI systems to identify, evaluate, and mitigate risks across the AI lifecycle;
  • identify and mitigate vulnerabilities, incidents, and patterns of misuse after deployment;
  • publicly report advanced AI systems' capabilities, limitations, and domains of appropriate and inappropriate use, to support ensuring sufficient transparency;
  • work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems;
  • develop, implement, and disclose AI governance and risk-management policies grounded in a risk-based approach;
  • invest in and implement robust security controls across the AI lifecycle;
  • develop and deploy reliable content authentication and provenance mechanisms to enable users to identify AI-generated content (a sketch of one such mechanism follows this list);
  • prioritize research to mitigate societal, safety, and security risks and prioritize investment in effective mitigation measures;
  • prioritize the development of advanced AI systems to address the world's greatest challenges;
  • advance the development and adoption of international technical standards; and
  • implement appropriate data input measures and protection for personal data and IP.
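
On content authentication and provenance, the G7 documents do not prescribe any particular technique; watermarking and signed metadata are among the approaches commonly discussed. The sketch below is only a simplified illustration of the general idea – binding an output to a signed record that a verifier can later check – and its function names and HMAC-based scheme are our own assumptions, not a mechanism drawn from the Code of Conduct.

```python
import hashlib
import hmac
import json

# Shared secret for the demo only; a real deployment would use asymmetric
# signatures and a public key infrastructure instead.
SECRET_KEY = b"demo-only-key"


def attach_provenance(content: bytes, model_name: str) -> dict:
    """Return a provenance record binding content to its generator (illustrative)."""
    digest = hashlib.sha256(content).hexdigest()
    record = {"model": model_name, "sha256": digest}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the content matches the record and the signature is valid."""
    expected = {"model": record["model"], "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(expected, sort_keys=True).encode()
    expected_sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected_sig, record.get("signature", ""))


output = b"AI-generated summary text"
record = attach_provenance(output, "example-model-v1")
print(verify_provenance(output, record))            # True
print(verify_provenance(b"tampered text", record))  # False
```

A production mechanism would rely on asymmetric signatures and standardized metadata formats rather than a shared secret, but the verification flow is conceptually similar.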

The Code of Conduct itself says, 'Different jurisdictions may take their own unique approaches to implementing these actions in different ways.' In other words, beneath these high-level principles, no consensus exists on how to regulate even the most advanced AI technologies.

The bottom line

So where does this lack of consensus leave companies developing and deploying AI systems across jurisdictions?

In the short term at least, multinational businesses will have to comply with the most prescriptive legal mandates, or they will have to geofence their products and services to avoid markets with requirements they cannot meet. Putting a finer point on this observation, US and other non-European businesses will confront a choice between adhering to all the prescriptions of the AI Act and being locked out of the European market. While the CEN/CENELEC standards (once they are adopted) may ease the burden of complying, it is doubtful they will erase it entirely. As a result, the AI Act will probably dampen European and US innovation alike – notwithstanding American policy preferences. European companies, of course, will have no choice but to comply. They may find themselves hampered in competing in other markets against non-European companies that have chosen to forgo European sales and, thus, can escape the AI Act's strictures.
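
Geofencing, as used above, simply means gating where an AI feature is exposed based on the user's market. As a loose, hypothetical illustration only – the jurisdiction codes, feature names, and availability table below are invented for this sketch and say nothing about what the AI Act actually permits – a product team might consult a policy map before enabling a feature:

```python
# Hypothetical per-market policy table; which features are lawful in which
# jurisdiction is a legal question, not something to hard-code without advice.
FEATURE_AVAILABILITY = {
    "emotion-inference": {"US"},            # suppose it cannot be offered in the EU
    "chat-assistant": {"US", "EU", "UK"},   # suppose it clears review everywhere
}


def feature_enabled(feature: str, market: str) -> bool:
    """Return True if the feature may be exposed to users in the given market."""
    return market in FEATURE_AVAILABILITY.get(feature, set())


print(feature_enabled("emotion-inference", "EU"))  # False -> geofenced out
print(feature_enabled("chat-assistant", "EU"))     # True
```

The hard part, of course, is not the lookup but keeping the underlying policy table aligned with legal advice as the rules change.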

Over the medium term, the international standards process (of which the CEN/CENELEC process is but a component) may produce harmony at a technical level that has been absent so far at the political level. Widely accepted international standards for 'safe, secure, and trustworthy AI' (language used in both the US Executive Order and the G7 documents) could level the compliance burdens across jurisdictions.

Alternatively, greater experience or philosophical shifts in one jurisdiction or another could forge a deeper political consensus in the G7 than currently exists and yield truly interoperable regulation. In the meantime, however, companies developing and deploying AI systems need risk-management programs capable of satisfying all the requirements of the various jurisdictions in which they operate.

*This article was first published in OneTrust DataGuidance on December 8, 2023.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.