Published in The Journal of Robotics, Artificial Intelligence & Law, May-June 2018

In December 2017, a bipartisan group of U.S. senators and representatives introduced the Fundamentally Understanding the Usability and Realistic Evolution of Artificial Intelligence Act of 2017 (the "Act").1 Although the Act is a long way from becoming law—it is being considered concurrently by the Subcommittee on Digital Commerce and Consumer Protection of the House Energy and Commerce Committee and by the Senate Committee on Commerce, Science, and Transportation—it would be the first U.S. legislation to focus on forming a comprehensive plan to promote, govern, and regulate artificial intelligence ("AI").2 Other countries are already actively considering legislation that would address AI3 or have already passed legislation or regulations that address some key aspect of AI.4 The Act would form the Federal Advisory Committee on the Development and Implementation of Artificial Intelligence (the "Committee") to help inform the federal government's response to the AI sector. The three primary components of the Act—the definition of AI, the formation and composition of the Committee, and the Committee's functions—are solid first efforts toward the regulation of AI, although the Act clearly anticipates further legislative action.

Definition of AI

The Act provides a very broad definition of artificial intelligence:

  1. Any artificial systems that perform tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance. Such systems may be developed in computer software, physical hardware, or other contexts not yet contemplated. They may solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action. In general, the more human-like the system within the context of its tasks, the more it can be said to use artificial intelligence.
  2. Systems that think like humans, such as cognitive architectures and neural networks.
  3. Systems that act like humans, such as systems that can pass the Turing test or other comparable test via natural language processing, knowledge representation, automated reasoning, and learning.
  4. A set of techniques, including machine learning, that seek to approximate some cognitive task.
  5. Systems that act rationally, such as intelligent software agents and embodied robots that achieve goals via perception, planning, reasoning, learning, communicating, decision-making, and acting.5

That definition could potentially encompass any robot, program, or autonomous or automated device that could be viewed, interpreted, or marketed as artificial intelligence. The Act would even provide a legal distinction between "Narrow Artificial Intelligence" and "Artificial General Intelligence," which is nice but unnecessary since, as the Act acknowledges, artificial general intelligence is a "notional future artificial intelligence system," i.e., not happening any time soon.6

But broadly defining AI is a smart approach at this point, given how quickly the technology is developing and the lack of widespread consensus on how to define AI.7 And by creating an inclusive, almost nebulous, working definition, the Act avoids the historical problem of a shifting standard for technology to qualify as AI. For example, chess was once considered a barometer of AI, but that has gradually changed since computers became able to play a decent game of chess in 1960.8 IBM's Deep Blue beat the best human player in the world in 1997.9 These developments led many to suggest that skill in chess is not actually indicative of intelligence,10 but did chess really become disconnected from intelligence merely because a computer became good at it? As one expert laments, "[a]s soon as it works, no one calls it AI anymore."11

However, as the AI field starts to better define itself, the Act's definition will need to be revised, most likely by more closely defining what the law will consider AI and what it won't. It is entirely likely that a program or device that can "perform tasks under varying and unpredictable circumstances, without significant human oversight," will be considered too general, as a good lawyer could make a Magic 8 Ball fit into that definition. Fortunately, the Act acknowledges the rapid development of AI technology and empowers the Committee to revise the definitions in the Act as it considers appropriate.12 That is a foresighted provision that begins to address the real danger of the Committee not being nimble enough to keep up with how quickly AI develops. Almost all AI legislation will need to overcome that issue, and it is a good idea to have the bill that creates the eventual Artificial Intelligence Regulatory Agency ("AIRA") grant that entity the power to revise the definitions and key terms in its organic statute.13

Formation and Composition of the Committee

The Act establishes the Committee in the Department of Commerce; however, it would be interesting to know how the Act's sponsors chose that department. It is a defensible position, but there are others that are arguably more appropriate, including the Department of Justice (which would be better equipped to opine on how AI interacts with criminal justice, civil rights, and courtrooms),14 the Department of Labor (which might be better equipped to address concerns about widespread job losses caused by AI),15 and the Department of Defense (which oversees the Defense Advanced Research Projects Agency, an agency that funds and coordinates groundbreaking research in a manner functionally similar to one of the Act's primary purposes).16 The Committee's ultimate federal home might change as the Act undergoes amendments and revisions, and, when the federal government expands its efforts to regulate AI, an entirely different department may be more appropriate for AIRA.17

The Committee will have two classes of members: (1) 19 voting members drawn from research and academia, private industry, civil society, and labor organizations; and (2) non-voting members from various federal departments and agencies, including the Departments of Education, Justice, Labor, and Transportation, the Federal Trade Commission, the National Institute of Standards and Technology, the National Science Foundation, and the National Science and Technology Council.18 The Act reserves the right for the Committee to invite further non-voting members.19 It would be preferable to revise the composition of the Committee to specifically include voting members who are programmers or from the legal community,20 as experts from those fields are in a strong position to offer insight into the legislation needed to properly regulate AI, and identifying that legislation will be one of the primary focuses of the Committee. When Congress creates AIRA, it should consider similar professions when developing the membership.

The Committee's Functions

The Act charges the Committee with studying and providing advice to the Commerce Secretary on a wide range of topics concerning AI, including:

  • American ability to innovate;
  • how AI will affect the workforce;
  • incorporating ethics into AI;
  • how the federal government can promote the development of AI;
  • data privacy;
  • AI outpacing existing legal protections;
  • accountability and legal rights impacted by AI;
  • eliminating bias in AI;
  • how AI can enhance rural opportunities; and
  • governmental efficiencies created by AI.21

Ultimately, the Committee will prepare a report for the Commerce Secretary and Congress with recommendations for administrative and legislative action to address those topics.22 So inasmuch as the Act and Committee indicate that Congress wants to form a master plan to govern AI, the Committee is limited to informing and recommending. The Act will not directly result in AI regulation.

In this regard, the Committee is similar to the Subcommittee on Machine Learning and Artificial Intelligence, which was established in May 2016 under the Committee on Technology of the National Science and Technology Council. Per its charter, that subcommittee was organized:

to monitor the state of the art in machine learning and artificial intelligence (within the Federal Government, in the private sector, and internationally) . . . to coordinate the use of and foster the sharing of knowledge and best practices about machine learning and artificial intelligence by the Federal Government, and to consult in the development of Federal research and development priorities in machine learning and artificial intelligence.23

Like the Committee, the subcommittee is charged with advising members of the executive branch on federal efforts to promote AI and on the development of the AI sector. That is, neither entity is going to generate the regulations needed to effectively govern AI.

However, the Committee's functions are much broader than the subcommittee's. In particular, the authority under the Act to recommend new legislation will hopefully lead to an organic statute establishing AIRA, which could then create a comprehensive regulatory framework to govern AI. The Act is a useful first step toward developing a unified federal approach to AI, and if enacted it would almost certainly improve Washington's promotion, development, and funding of AI research. But ultimately, the federal government needs to do more than just promote AI. It needs to regulate it. With any luck, the Act will either be amended in committee to do that or will lead to the next legislation that does it.

Footnotes

1. Office of Senator Maria Cantwell, Cantwell, Bipartisan Colleagues Introduce Bill to Further Understand and Promote Development of Artificial Intelligence, Drive Economic Opportunity (Press Release), December 12, 2017, available at https://www.cantwell.senate.gov/news/press-releases/cantwell-bipartisan-colleagues-introduce-bill-to-further-understand-and-promote-development-of-artificial-intelligence-drive-economic-opportunity.

2. Although this is the first legislation, in 2016 the Obama Administration released three studies that collectively addressed how Washington should govern AI by trying to outline what a comprehensive approach to AI regulation could look like.

3. See Ott Ummelas, "Estonia plans to give robots legal recognition," Independent, October 14, 2017, available at https://www.independent.co.uk/news/business/news/estonia-robots-artificial-intelligence-ai-legal-recognition-law-disputes-government-plan-a7992071.html.

4. See Articles 13 & 22, Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L119/1 (providing consumers the right to "meaningful information about the logic involved" in decisions made by AI).

5. S. 2217, 115th Congress, §3(a)(1) (the "Act").

6. Id., at §§ 3(a)(2) & (3).

7. John Pavlus, "Stop Pretending You Really Know What AI Is and Read This Instead," Quartz, September 6, 2017, https://qz.com/1067123/stop-pretending-you-really-know-what-ai-is-and-read-this-instead/.

8. Nils J. Nilsson, The Quest for Artificial Intelligence (Cambridge University Press, 2009), 194.

9. Bruce Pandolfini, Kasparov and Deep Blue: The Historic Chess Match Between Man and Machine (Simon & Schuster, 1997), 7-8.

10. Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 HARV. J. OF L. & TECH. 2 (Spring 2016), 361.

11. Moshe Y. Vardi, "Artificial Intelligence: Past and Future," Communications of the Association for Computing Machinery (Jan. 2012), 5 at 5.

12. The Act, supra note 5, at §3(b).

13. See John Frank Weaver, Robots Are People Too (Praeger Publishing, 2013), 184-85.

14. National Science and Technology Council, "Preparing for the Future of Artificial Intelligence," Executive Office of the President (October 2016), 14, https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf; Chris Johnston, "Artificial intelligence 'judge' development by UCL computer scientists," The Guardian, October 23, 2016, https://www.theguardian.com/technology/2016/oct/24/artificial-intelligence-judge-university-college-london-computer-scientists.

15. Some estimates put job losses as high as 50 million professional jobs lost to AI, representing 40 percent of the workforce, including lawyers, doctors, writers and scientists. Gus Lubin, "Artificial Intelligence Took America's Jobs And It's Going to Take A Lot More," Business Insider, November 6, 2011, http://www.businessinsider.com/economist-luddites-robots-unemployment-2011-11.

16. See About DARPA, Department of Defense, Defense Advanced Research Projects Agency, https://www.darpa.mil/about-us/about-darpa, accessed on Jan. 15, 2018.

17. For a more thorough discussion of the federal departments where an AI-regulatory agency could be created, see John Frank Weaver, "Regulating Artificial Intelligence," in W. Barfield & U. Pagallo, eds. Research Handbook of Artificial Intelligence and Law (Edward Elgar: Forthcoming 2018).

18. The Act, supra note 5, at §§ 4(a) & (c).

19. Id., at § 4(c)(2)(I).

20. Of course, I am biased.

21. The Act, supra note 5, at §§ 4(b)(1) & (2).

22. Id. at § 4(b)(3).

23. Charter of the Subcommittee on Machine Learning and Artificial Intelligence, Committee on Technology, National Science and Technology Council, May 5, 2016, available at https://www.whitehouse.gov/sites/whitehouse.gov/files/ostp/MLAI_Charter.pdf.
