Over the past several years, US lawmakers and government agencies have sought to develop artificial intelligence (AI) strategies and policy with the aim of balancing the tension between protecting the public from the potentially harmful effects of AI technologies, and encouraging positive innovation and competitiveness. As AI technologies become increasingly commercially viable, one of the most interesting challenges lawmakers face in the governance of AI is determining which of the issues it raises can safely be left to ethics (appearing as informal guidance or voluntary standards), and which suggested approaches should be codified in law.1
Recent years have seen a surge of debate about the role of governance and accountability in the AI ecosystem and about the gap between technological change and regulatory response in the digital economy. In the United States, this trend manifested in particular in calls for regulation of certain 'controversial' AI technologies or use cases, in turn emboldening lawmakers to take fledgling steps to control the scope of AI and automated systems in the public and private sectors. Between 2019 and 2020, a number of high-profile draft bills addressed the role of AI and how it should be governed at the US federal level, while US state and local governments continue to press forward with concrete legislative proposals regulating the use of AI. Likewise, the European Union has taken numerous steps to demonstrate its commitment to the advancement of AI technology through funding,2 while simultaneously pressing companies and governments to develop ethical applications of AI.3 However, in the first half of 2020, the unprecedented covid-19 pandemic stalled much of the promised legislative progress, and many of the ambitious bills intended to build a regulatory framework for AI have languished in committee without being passed.
Nonetheless, US federal, state and local government agencies continue to show a willingness to take concrete positions on the regulatory spectrum, including in light of recent events and social movements, resulting in a variety of policy approaches to AI regulation – many of which eschew informal guidance and voluntary standards in favour of outright technology bans. We should expect that high-risk or contentious AI use cases or failures will continue to generate similar public support for, and ultimately trigger, accelerated federal and state action.4 For the most part, the trend among US regulators towards more individual and nuanced assessments of how best to regulate AI systems according to their end uses has been welcome. Even so, there is an inherent risk that reactionary legislative responses will produce a disharmonious, fragmented national regulatory framework. Such developments will continue to yield important insights into what it means to govern and regulate AI over the coming year.
Further, as the use of AI expands into different sectors and the need for data multiplies, legislation that traditionally has not focused on AI is starting to have a growing impact on AI technology development. This impact can be seen in areas such as privacy, discrimination, antitrust and labour-related immigration laws. While some of these areas may help alleviate ethical concerns that AI sometimes engenders (eg, eliminating bias), others may unnecessarily inhibit development and make it difficult to operate (eg, complying with consumer deletion requests under privacy laws or securing the workforce needed to develop AI technology).
The following section in this chapter will discuss the general regulatory framework of AI technology in the United States, contrasting the approach with other jurisdictions that have invested in AI research and development where appropriate, and will highlight differences in how AI technology is regulated by use in various key sectors.
The final section in this chapter will discuss certain areas of existing and proposed legislation and policies that may distinctly affect AI technologies and companies, even though they are not directly targeting them, and what effects may result.
AI-specific regulations and policies – existing and proposed
Legislation promoting and evaluating AI ethics, research and federal policy
Even in 2020, despite its position at the forefront of commercial AI innovation, the United States still lacked an overall federal AI strategy and policy.5 By contrast, observers noted other governments' concerted efforts and considerable expenditures to strengthen their domestic AI research and development,6 particularly China's plan to become a world leader in AI by 2030. These developments abroad prompted many to call for a comprehensive government strategy and similar investments by the United States' government to ensure its position as a global leader in AI development and application.7
In 2019, the federal government began to prioritise both the development and regulation of AI technology. On 11 February 2019, President Donald Trump signed an executive order (EO) creating the 'American AI Initiative',8 intended to spur the development and regulation of AI and fortify the United States' global position by directing federal agencies to prioritise investments in research and development of AI.9 The EO, which was titled 'Maintaining American Leadership in Artificial Intelligence,' outlined five key areas: research and development,10
1 See, eg, Paul Nemitz, Constitutional Democracy and Technology in the Age of Artificial Intelligence, Phil. Trans. R. Soc. A 376: 20180089 (15 November 2018), available at https://royalsocietypublishing.org/doi/full/10.1098/rsta.2018.0089.
2 The European Commission (EC) published a communication titled 'Communication From the Commission to the European Parliament, the European Council, the European Economic and Social Committee, and the Committee of the Regions: Artificial Intelligence for Europe' (25 April 2018), https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe. The Communication set out the following regulatory proposals for AI: it calls for new funding, pledges investment in explainable AI 'beyond 2020', plans an evaluation of AI regulation, proposes Commission support for the use of AI in the justice system, pledges to draft AI ethics guidelines by the end of the year, proposes dedicated retraining schemes, and calls for prompt adoption of the proposed ePrivacy Regulation. Likewise, an April 2018 UK Select Committee Report on AI encouraged the UK government to establish a national AI strategy and proposed an 'AI Code' with five principles, emphasising ideals such as fairness and developing for the common good – mirroring the EU's AI Ethics Guidelines. 'AI Policy – United Kingdom', available at https://futureoflife.org/ai-policy-united-kingdom/?cn-reloaded=1. Early this year, on 19 February 2020, the EC also released the 'White Paper on Artificial Intelligence – A European approach to excellence and trust'. The White Paper outlines the EC's proposed comprehensive AI legislative framework to be put into place in late 2020, including investments in data and infrastructure and measures to strengthen the digital rights of individuals. The EC's policy measures aim to foster a European 'data economy' with common European data spaces, and to work with strategic sectors of the economy to develop sector-specific solutions.
EC, White Paper on Artificial Intelligence – A European approach to excellence and trust, COM(2020) 65 (19 February 2020), available at https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.
3 High-Level Expert Group on Artificial Intelligence (HLEG), a team of 52 experts who, on 8 April 2019, published 'Ethics Guidelines for Trustworthy AI', available at https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai. Expanding on the HLEG's findings, Germany's Data Ethics Commission released a report containing 75 recommendations for regulating data, algorithmic systems and AI. German Federal Ministry of Justice and Consumer Protection, Opinion of the Data Ethics Commission, Executive Summary (October 2019), available at http://bit.ly/373RGqI. This report is a blueprint for binding legal rules for AI in Europe, with varying regulatory obligations based on an algorithmic system's risk of harm. Id., at 19-20. The Ethics Commission recommends an overhaul of product liability laws as they pertain to autonomous technologies, such as adding vicarious liability for human operators of algorithmic systems that cause harm. Id., at 10. The Ethics Commission also puts companies using AI software on notice that measures may be taken against 'ethically indefensible uses of data', which may include 'total surveillance, profiling that poses a threat to personal integrity, the targeted exploitation of vulnerabilities, addictive designs and dark patterns, methods of influencing political elections that are incompatible with the principle of democracy, vendor lock-in and systematic consumer detriment, and many practices that involve trading in personal data.' Id., at 26.
4 See, eg, the House Intelligence Committee's hearing on Deepfakes and AI on 13 June 2019 (US House of Representatives, Permanent Select Committee on Intelligence, Press Release: House Intelligence Committee To Hold Open Hearing on Deepfakes and AI (7 June 2019)); see also Makena Kelly, 'Congress grapples with how to regulate deepfakes', The Verge (13 June 2019), available at https://www.theverge.com/2019/6/13/18677847/deep-fakes-regulation-facebook-adam-schiff-congress-artificial-intelligence. Indeed, after this hearing, separate legislation was introduced to require the Department of Homeland Security to report on deepfakes (the Senate passed S. 2065 on 24 October 2019) and to require NIST and NSF support for research and reporting on generative adversarial networks (HR 4355 passed the House on 9 December 2019).
5 The only notable legislative proposal before 2019 was the Fundamentally Understanding the Usability and Realistic Evolution of Artificial Intelligence Act of 2017, also known as the FUTURE of Artificial Intelligence Act, which did not aim to regulate AI directly, but instead proposed a Federal Advisory Committee on the Development and Implementation of Artificial Intelligence. The Act was reintroduced on 9 July 2020 by Representatives Pete Olson (R-TX) and Jerry McNerney (D-CA) as the FUTURE of Artificial Intelligence Act of 2020. The House bill (HR 7559) would require the Director of the National Science Foundation, in consultation with the Director of the Office of Science and Technology Policy, to establish an advisory committee to advise the President on matters relating to the development of AI. A similar bill in the Senate (S. 3771), also titled the FUTURE of Artificial Intelligence Act of 2020, was introduced by bipartisan lawmakers on 20 May 2020 and was ordered to be reported favourably with an amendment on 22 July 2020 after passing the Senate Committee on Commerce, Science, and Transportation.
6 For example, in June 2017, the UK established a government committee to further consider the economic, ethical and social implications of advances in artificial intelligence, and to make recommendations. 'AI – United Kingdom', available at https://futureoflife.org/ai-policy-united-kingdom. It also published an Industrial Strategy White Paper that set out a five-part structure by which it will coordinate policies to secure higher investment and productivity. HM Government, 'Industrial Strategy: Building a Britain fit for the future' (November 2017), https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/730048/industrial-strategy-white-paper-web-ready-a4-version.pdf. The White Paper also announced an 'Artificial Intelligence Sector Deal to boost the UK's global position as a leader in developing AI technologies', which the government hopes would increase its GDP by 10.3 per cent. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/730048/industrial-strategy-white-paper-web-ready-a4-version.pdf. And, in a March 2018 sector deal for AI, the UK established an AI Council to bring together respected leaders in the field, and a new body within the government – the Office for Artificial Intelligence – to support it. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/702810/180425_BEIS_AI_Sector_Deal__4_.pdf
7 Joshua New, 'Why the United States Needs a National Artificial Intelligence Strategy and What It Should Look Like', The Center for Data Innovation (4 December 2018), available at http://www2.datainnovation.org/2018-national-ai-strategy.pdf.
8 Donald J Trump, Executive Order on Maintaining American Leadership in Artificial Intelligence, The White House (11 February 2019), available at https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence.
9 The White House, Accelerating America's Leadership in Artificial Intelligence, Office of Science and Technology Policy (11 February 2019), available at https://www.whitehouse.gov/briefings-statements/president-donald-j-trump-is-accelerating-americas-leadership-in-artificial-intelligence.
10 Supra note 8, section 2(a) (directing federal agencies to prioritise AI investments in their 'R&D missions' to encourage 'sustained investment in AI R&D in collaboration with industry, academia, international partners and allies, and other non-Federal entities to generate technological breakthroughs in AI and related technologies and to rapidly transition those breakthroughs into capabilities that contribute to our economic and national security.').