AI Regulation: What You Need To Know To Stay Ahead Of The Curve

Arnold & Porter, Contributor

Arnold & Porter is a firm of more than 1,000 lawyers, providing sophisticated litigation and transactional capabilities, renowned regulatory experience and market-leading multidisciplinary practices in the life sciences and financial services industries. Our global reach, experience and deep knowledge allow us to work across geographic, cultural, technological and ideological borders.

By Peter J. Schildkraut1

Artificial intelligence (AI) is all around us. AI powers Alexa, Google Assistant, Siri, and other digital assistants. AI makes sense of our natural language searches to deliver (we hope) the optimal results. When we chat with a company representative on a website, we often are chatting with an AI system (at least at first). AI has defeated the (human) world champions of chess and Go.2 AI is advancing diagnostic medicine, driving cars and making all types of risk assessments. AI even enables the predictive coding that has made document review more efficient. Yet, if you're like one chief legal officer I know, AI remains on your list of things you need to learn about.

Now is the time! Right or wrong, there are growing calls for more government oversight of technology. As AI becomes more common, more powerful, and more influential in our societies and our economies, it is catching the attention of legislators and regulators. When a prominent tech CEO like Google's Sundar Pichai publicly proclaims "there is no question in my mind that artificial intelligence needs to be regulated," the questions are when and how, not whether, AI will be regulated.3

Indeed, certain aspects of AI already are regulated, and the pace of regulatory developments is accelerating. What do you need to know, and what steps can your company take, to stay ahead of this curve?

What Is AI?

Before plunging into the present and future of AI regulation, let's review what AI is and how the leading type works. There are many different definitions of AI, but experts broadly conceive of two versions, narrow (or weak) and general (or strong). All existing AI is narrow, meaning that it can perform one particular function. General AI (also termed "artificial general intelligence" (AGI)) can perform any task and adapt to any situation. AGI would be as flexible as human intelligence and, theoretically, could improve itself until it far surpasses human capabilities. For now, AGI remains in the realm of science fiction, and authorities disagree on whether AGI is even possible. While serious people and organizations do ponder how to regulate AGI4 (in case someone creates it), current regulatory initiatives focus on narrow AI.

Machine Learning

One type of AI, machine learning, has enabled the recent explosion of AI applications. "Machine learning systems learn from past data by identifying patterns and correlations within it."5 Whereas traditional software, and some other types of AI, run particular inputs through a preprogrammed model or a set of rules and reach a defined result (akin to 2+2=4), a machine learning system builds its own model (the patterns and correlations) from the data it is trained upon. The system then can apply the model to make predictions about new data. Algorithms are "now probabilistic. We are not asking computers to produce a defined result every time, but to produce an undefined result based on general rules. In other words, we are asking computers to make a guess."6
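To make the contrast concrete, here is a minimal, purely illustrative sketch (not any real product's code): a fixed rule that always returns the same defined result, next to a tiny learned model that counts word patterns in labeled examples and outputs a probability, a guess, for new data.

```python
# Traditional software: a fixed rule produces the same defined result every time.
def fixed_rule(a, b):
    return a + b  # 2 + 2 is always 4


# Machine learning (toy version): the "model" is built from training data,
# and the output is a probability, not a predetermined answer.
def train(examples):
    """Count how often each word appears in positive vs. negative examples."""
    counts = {}
    for words, label in examples:
        for w in words:
            pos, neg = counts.get(w, (1, 1))  # start at 1 to smooth zero counts
            counts[w] = (pos + 1, neg) if label else (pos, neg + 1)
    return counts


def predict(counts, words):
    """Return the estimated probability that a new example is positive."""
    pos_score = neg_score = 1.0
    for w in words:
        pos, neg = counts.get(w, (1, 1))  # unseen words are neutral
        pos_score *= pos / (pos + neg)
        neg_score *= neg / (pos + neg)
    return pos_score / (pos_score + neg_score)


model = train([
    (["price", "quote"], True),
    (["invoice", "quote"], True),
    (["lunch", "menu"], False),
])
print(predict(model, ["quote"]))  # a probability, not a fixed answer
```

The pattern-counting here is deliberately crude; real systems use far richer models, but the shift from "defined result" to "probabilistic guess" is the same.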

To take an example from legal practice, in a technology-assisted document review, lawyers will code a small sample of the document collection as responsive or not responsive. The machine learning system will identify patterns and correlations distinguishing the sample documents that were coded "responsive" from those coded "not responsive." It then can predict whether any new document is responsive and measure the model's confidence in its prediction. For validation, the lawyers will review the predictions for another sample of documents, and the system will refine its model with the lawyers' corrections. The process will iterate until the lawyers are satisfied with the model's accuracy. At that point, the lawyers can use the system to code the entire document collection for responsiveness with whatever human quality control they desire.

The quality of the training data set matters greatly. The machine learning system assumes the accuracy of what it is told about the training data. In the document review example, if, when the lawyers train the system, they incorrectly code every email written by a salesperson as responsive, they will bias the model towards predicting that every sales team email in the collection is responsive. Note that I did not say they will train the model to identify every sales team email as responsive. The miscoded training data will increase the probability that the model will predict any given sales team email is responsive, but they will not make this a certainty. Other things about an email might overcome the bias. For instance, the lawyers may have coded every email about medical appointments as nonresponsive. As a result, the model might nevertheless predict that an email about a medical appointment is nonresponsive even if it comes from a salesperson.
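The bias-versus-certainty point can be simulated in a few lines. In this hypothetical sketch (the feature names and counts are invented), miscoding every salesperson email as responsive tilts the model toward "responsive" for sales emails, yet a strong countervailing signal, a medical-appointment feature coded nonresponsive in training, can still pull an individual prediction the other way.

```python
from collections import defaultdict


def train(examples):
    """Count how often each feature appears in responsive vs. non-responsive docs."""
    counts = defaultdict(lambda: [1, 1])  # smoothed [responsive, non-responsive]
    for features, responsive in examples:
        for f in features:
            counts[f][0 if responsive else 1] += 1
    return counts


def p_responsive(counts, features):
    """Estimated probability that a document with these features is responsive."""
    r = n = 1.0
    for f in features:
        resp, nonresp = counts[f]
        r *= resp / (resp + nonresp)
        n *= nonresp / (resp + nonresp)
    return r / (r + n)


# Hypothetical training set: every salesperson email was (incorrectly) coded
# responsive, and every medical-appointment email was coded non-responsive.
training = (
    [(["from_sales"], True)] * 8
    + [(["medical_appointment"], False)] * 8
    + [(["from_sales", "medical_appointment"], False)] * 2
)
model = train(training)

# The miscoding biases the model toward "responsive" for sales emails...
print(p_responsive(model, ["from_sales"]))
# ...but a strong countervailing feature can still overcome that bias.
print(p_responsive(model, ["from_sales", "medical_appointment"]))
```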

Why Regulate AI?

Several characteristics of AI drive the calls for regulation.

Accuracy and Bias

AI predictions are sometimes inaccurate, which can injure both individuals and society. Poorly performing AI might underestimate a person's fitness for a job or creditworthiness. AI could crash a car by misperceiving environmental conditions or misjudging what another vehicle will do. In short, AI could harm individuals in all the ways that humans and their creations already do (and probably some novel ways too).

Policymakers may leave redress to the courts,7 but there may be gaps in existing law that legislators decide to fill with new causes of action. Governments also may turn to regulation as a prophylactic to supplement the tort system, as many countries have done in areas such as food and consumer product safety.

The pressure may be even greater to regulate AI applications that might cause societal harm. For instance, AI can discriminate against members of historically disadvantaged groups. Imagine a human resources AI application trained to identify the best job candidates by finding those most similar to previous hires. Free from the implicit biases we all carry, it should be completely objective in selecting the best candidates for that company, right? But imagine further that the applicant pool whose resumes comprised the training set was predominantly male. Actually, you don't have to imagine. Using this method, Amazon's Edinburgh office developed a machine learning system for hiring decisions that "effectively taught itself that male candidates were preferable."8

Facial recognition technology also raises social justice concerns. A study of 189 facial recognition systems found that minorities were falsely matched much more frequently than whites, and women more often than men.9 Privacy questions aside, using facial recognition to identify criminal suspects makes these racial differences particularly troubling because falsely identified individuals may be surveilled, searched or even arrested.10

Technology's promise is to help us escape the biases all people have. Instead, however, these examples show how AI can wind up reinforcing our collective biases. What is going wrong? First, like all of us, algorithm creators have cultural blind spots, which can cause them to miss opportunities to correct their algorithms' disparate impacts. Second, AI forms its predictions from data sets programmers or users provide. As a result, AI predictions are only as good as the data the AI is trained upon. Training data, in turn, reflect the society from which they are collected, biases included. To prevent societal biases from infecting AI predictions, developers and operators (and their attorneys and other advisors) must recognize them and determine how to adjust the training data.

Even when AI makes accurate and unbiased predictions, however, the results can be troubling. Several years ago, researchers found that Facebook was less likely to display ads for science, technology, engineering, and math jobs to women. Women were interested in the jobs, and the employers were interested in hiring women. But "the workings of the ad market discriminated. Because younger women are valuable as a demographic on Facebook, showing ads to them is more expensive. So, when you place an ad on Facebook, the algorithms naturally place ads where their return per placement is highest. If men and women are equally likely to click on STEM job ads, then it is better to place ads where they are cheap: with men."11
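The economics in the quoted passage reduce to a one-line optimization. In this stylized sketch (the click rates and prices are hypothetical), a return-maximizing placement routine compares expected clicks per dollar; with identical click rates, the cheaper-to-reach audience wins every placement.

```python
def best_audience(click_rate, cost_per_impression):
    """Pick the audience with the highest expected clicks per dollar spent."""
    return max(
        cost_per_impression,
        key=lambda g: click_rate[g] / cost_per_impression[g],
    )


click_rate = {"men": 0.02, "women": 0.02}           # equal interest in the job
cost_per_impression = {"men": 0.50, "women": 1.25}  # women cost more to reach

# With identical click rates, the cheaper audience wins the placement.
print(best_audience(click_rate, cost_per_impression))
```

The point is not that any particular ad platform works exactly this way; it is that neutral return-maximization combined with unequal audience prices is enough to produce unequal delivery.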

Power

In a 2018 MIT-Harvard class on The Ethics and Governance of Artificial Intelligence, Joi Ito relates being told that "machines will be able to win at any game against humans pretty soon." Ito then observes, "A lot of things are games. Markets are like games. Voting can be like games. War can be like games. So, if you could imagine a tool that could win at any game, who controlled it and how it is controlled has a lot of bearing on where the world goes."12 It is easy to see why the public might demand regulation of this power.

Market Failures

For all its power, though, AI cannot transcend market forces and market failures (even if it might be able to win market "games"). There will be cases when AI performs accurately in a socially desirable arena, yet a market outcome may not be desirable.

Consider self-driving cars. Aside from relieving drivers of the drudgery of daily commutes, freeing time for more pleasant or productive activity, a major selling point for vehicle autonomy is safety. The underlying AI won't get tired or distracted or suffer from other human frailties that cause accidents. But how should an autonomous vehicle be programmed to pass cyclists in the face of oncoming traffic? The vehicle's occupants will be safer if the vehicle travels closer to the cyclist and further from oncoming traffic. The cyclist will be safer if the car moves closer to the oncoming traffic and further from the cyclist. Nobody wants to buy an autonomous vehicle programmed to protect others at its occupants' peril, but everyone wants other people's autonomous vehicles to be programmed to minimize total traffic casualties.13 This is a classic collective action problem in which regulation can improve the market outcome.
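This collective action problem has the structure of a prisoner's dilemma, which a toy payoff table makes explicit. Every number below is hypothetical; the figures only encode the assumptions in the paragraph above.

```python
# Expected casualties (hypothetical units) by (my car's setting, others' setting).
# "self" = protect occupants; "social" = minimize total traffic casualties.
OCCUPANT_RISK = {  # risk to my own occupants
    ("self", "self"): 5, ("self", "social"): 4,
    ("social", "self"): 8, ("social", "social"): 6,
}
TOTAL_RISK = {  # casualties across everyone on the road
    ("self", "self"): 30, ("social", "social"): 18,
}

# Whatever others choose, "self" is individually safer for my occupants...
assert OCCUPANT_RISK[("self", "self")] < OCCUPANT_RISK[("social", "self")]
assert OCCUPANT_RISK[("self", "social")] < OCCUPANT_RISK[("social", "social")]
# ...but if every buyer reasons that way, total casualties exceed what a
# uniform, regulated "social" setting would produce.
assert TOTAL_RISK[("self", "self")] > TOTAL_RISK[("social", "social")]
print("individually rational setting:", "self")
```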

How Is AI Regulated? Application of Familiar Regulatory Regimes

Widespread regulation is coming because AI sometimes produces inaccurate or biased predictions, can have great power when accurate and remains vulnerable to market failures. (I use "regulation" expansively to include statutory constraints enforced through litigation, not just agency-centered processes.) What will this regulation look like?

Some regulation of AI will look very familiar, as AI already is regulated in certain economic sectors and activities.

Generally Applicable Law

"AI did it" is, by and large, not an affirmative defense. If something is unlawful for a human or non-AI technology, it probably is illegal for AI. For instance:

  • Title VII of the US Civil Rights Act of 1964 (as amended) prohibits employment practices with "a disparate impact on the basis of race, color, religion, sex, or national origin" unless "the challenged practice is job related for the position in question and consistent with business necessity."14 There is no carve-out for AI.
  • Likewise, the US Equal Credit Opportunity Act (ECOA),15 which also does not mention AI, "prohibits credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance. If, for example, a company made credit decisions [using AI] based on consumers' Zip Codes, resulting in a 'disparate impact' on particular ethnic groups, ... that practice [could be challenged] under ECOA."16
  • Antidiscrimination regimes under US state law17 and in other countries18 similarly apply to AI.
  • The US Fair Credit Reporting Act (FCRA) requires certain disclosures to potential employees, tenants, borrowers, and others regarding credit or background checks and further disclosures if the report will lead to an adverse action.19 Credit and background checks that rely on AI are just as regulated as those that don't. And, to comply with FCRA (or to defend against an ECOA disparate-impact challenge), a company using AI in covered decisions may need to explain what data the AI model considered and how the AI model used the data to arrive at its output.20
  • The US Securities and Exchange Commission has enforced the Investment Advisers Act of 194021 against so-called "robo-advisers," which offer automated portfolio management services. In twin 2018 proceedings, the SEC found that robo-advisers had made false statements about investment products and published misleading advertising.22 The agency also has warned robo-advisers to consider their compliance with the Investment Company Act of 194023 and Rule 3a-424 under that statute.25
  • There should be little doubt that the US Food & Drug Administration will enforce its good manufacturing practices regulations26 on AI-controlled pharmaceutical production processes.
  • And, of course, claims about AI applications must not deceive, lest they run afoul of Section 5 of the US Federal Trade Commission Act,27 state consumer protection statutes and similar laws in other countries. Indeed, one of the FTC's commissioners wants to crack down on "marketers of algorithm-based products or services [that] represent that they can use the technology in unsubstantiated ways" under the agency's "deception authority."28
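For the disparate-impact regimes in the first bullets, one widely used screen is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures (29 CFR § 1607.4(D)): a selection rate for any group below 80% of the highest group's rate is generally treated as evidence of adverse impact. Here is a hedged sketch of that check, using entirely hypothetical selection data:

```python
def selection_rate(selected, applicants):
    """Fraction of applicants from a group who were selected."""
    return selected / applicants


def four_fifths_check(rates):
    """Flag whether each group's selection rate is at least 80% of the top rate."""
    top = max(rates.values())
    return {group: rate / top >= 0.8 for group, rate in rates.items()}


rates = {
    "group_a": selection_rate(48, 100),  # 48% selected
    "group_b": selection_rate(30, 100),  # 30% selected
}
print(four_fifths_check(rates))
# group_b's rate (30%) is only 62.5% of group_a's (48%), so an AI-driven
# screen producing these outcomes could draw disparate-impact scrutiny.
```

Passing the four-fifths screen is not a safe harbor, and failing it is not automatic liability; it is simply a first-pass statistic that regulators and plaintiffs commonly compute.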

Footnotes

1. Darrel Pae, Katerina Kostaridi and Elliot S. Rosenwald provided research assistance.

2. Relatively unfamiliar to a Western audience, Go is a territorial game that has been played for thousands of years. Although the "rules are quite simple," the "strategic and tactical possibilities of the game are endless," which makes Go an "extraordinary" "intellectual challenge." The International Go Federation, About Go (July 3, 2010), https://www.intergofed.org/about-go/about-go.html.

3. Sundar Pichai, Why Google Thinks We Need to Regulate AI, Fin. Times (Jan. 19, 2020), https://www.ft.com/content/3467659a-386d-11ea-ac3c-f68c10993b04 .

4. See, e.g., Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (2014); Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence (2017); Future of Life Inst., Benefits & Risks of Artificial Intelligence, https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/  (last visited Mar. 3, 2021).

5. The Committee on Standards in Public Life, Artificial Intelligence and Public Standards § 1.1, at 12 (2020), https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/868284/Web_Version_AI_and_Public_Standards.PDF  (Artificial Intelligence and Public Standards).

6. CompTIA, Emerging Business Opportunities in AI 2 (May 2019), https://www.comptia.org/content/research/emerging-business-opportunities-in-ai  (Emerging Business Opportunities in AI).

7. See, e.g., Holbrook v. Prodomax Automation Ltd., No. 1:17-cv-219, 2020 WL 6498908 (W.D. Mich. Nov. 5, 2020) (denying summary judgment in suit for wrongful death of manufacturing plant worker killed by robot); Batchelar v. Interactive Brokers, LLC, 422 F. Supp. 3d 502 (D. Conn. 2019) (holding that broker-dealer owed a duty of care in design and use of algorithm for automatically liquidating customer's positions upon determination of margin deficiency in brokerage account); Nilsson v. Gen. Motors LLC, No. 4:18-cv-00471-JSW (N.D. Cal. dismissed June 26, 2018) (settled claim that car in self-driving mode negligently changed lanes, striking motorcyclist).

8. Maya Oppenheim, Amazon Scraps "Sexist AI" Recruitment Tool, Independent (Oct. 11, 2018), https://www.independent.co.uk/life-style/gadgets-and-tech/amazon-ai-sexist-recruitment-tool-algorithm-a8579161.html  .

9. Nat'l Inst. of Standards & Tech., Press Release, NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software (Dec. 19, 2019), https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software ; Patrick Grother et al., Nat'l Inst. of Standards & Tech., Interagency or Internal Report 8280, Face Recognition Vendor Test (FRVT): Part 3: Demographic Effects (2019), https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf.

10. See Kashmir Hill, Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match, NY Times (updated Jan. 6, 2021), https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html (reporting that the three people known to have been falsely arrested due to incorrect facial recognition matches are all Black men).

11. Ajay Agrawal et al., Prediction Machines: The Simple Economics of Artificial Intelligence 196 (2018) (citing Anja Lambrecht and Catherine Tucker, Algorithmic Bias? An Empirical Study into Apparent Gender-Based Discrimination in the Display of STEM Career Ads (paper presented at NBER Summer Institute, July 2017)).

12. Joi Ito, Opening Event Part 1, https://www.media.mit.edu/courses/the-ethics-and-governance-of-artificial-intelligence/  (0:09:13-0:09:52).

13. See Jean-François Bonnefon et al., The Social Dilemma of Autonomous Vehicles, 352 Science 1573 (2016); Joi Ito & Jonathan Zittrain, Class 1: Autonomy, System Design, Agency, and Liability, https://www.media.mit.edu/courses/the-ethics-and-governance-of-artificial-intelligence/ (~0:12:15).

14. 42 USC § 2000e-2(k)(1)(A)(i).

15. 15 USC §§ 1691-1691f.

16. Andrew Smith, Bureau of Consumer Prot., FTC, Using Artificial Intelligence and Algorithms, Business Blog (Apr. 8, 2020, 9:58 AM), https://www.ftc.gov/news-events/blogs/business-blog/2020/04/using-artificial-intelligence-algorithms (Smith Business Blog Post). The FTC also might be able to use the unfairness prong of Section 5 of the FTC Act, 15 USC § 45(n), to attack algorithmic discrimination against protected classes. Rebecca Kelly Slaughter, Comm'r, FTC, Remarks at the UCLA School of Law: Algorithms and Economic Justice at 13-14 (Jan. 24, 2020) (Slaughter Speech).

17. See, e.g., NY Dep't of Fin. Servs., Ins. Circular Letter No. 1 (Jan. 18, 2019), https://www.dfs.ny.gov/industry_guidance/circular_letters/cl2019_01 ("[I]nsurers' use of external data sources in underwriting has the strong potential to mask" prohibited bias.).

18. See, e.g., UK Info. Comm'r's Office, Guidance on AI and Data Protection 40, 43-44, 46 (Jul. 30, 2020), https://ico.org.uk/media/for-organisations/guide-to-data-protection/key-data-protection-themes/guidance-on-ai-and-data-protection-0-0.pdf (ICO Guidance on AI and Data Protection) (discussing the application of the UK Equality Act 2010, c. 15, to AI systems).

19. 15 USC § 1681b.

20. See Smith Business Blog Post; Org. for Econ. Cooperation & Dev., Artificial Intelligence in Society 55 (2019), https://www.oecd.org/publications/artificial-intelligence-in-society-eedfee77-en.htm.

21. 15 USC §§ 80b-1-80b-18c.

22. Wealthfront Advisers, LLC, Investment Advisers Act Release No. 5086, 2018 WL 6722756 (Dec. 21, 2018); Hedgeable Inc., Investment Advisers Act Release No. 5087, 2018 WL 6722757 (Dec. 21, 2018).

23. 15 USC §§ 80a-1-80a-64.

24. 17 CFR § 270.3a-4.

25. Div. of Inv. Mgmt., SEC, Robo-Advisers, IM Guidance Update No. 2017-02, at 2, https://www.sec.gov/investment/im-guidance-2017-02.pdf.

26. 21 CFR pts. 210-212.

27. 15 USC § 45; see, e.g., Everalbum Inc., File No. 1923172, 2021 WL 118892 (FTC Jan. 11, 2021) (complaint and proposed consent order requiring company to delete or destroy facial recognition algorithm due to alleged misrepresentations about use of facial recognition on, and retention of, storage service's users' photos and videos).

28. Slaughter Speech at 13.


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
