AI Regulation Part 1: What You Need To Know To Stay Ahead Of The Curve

Arnold & Porter

Arnold & Porter is a firm of more than 1,000 lawyers, providing sophisticated litigation and transactional capabilities, renowned regulatory experience and market-leading multidisciplinary practices in the life sciences and financial services industries. Our global reach, experience and deep knowledge allow us to work across geographic, cultural, technological and ideological borders.

Artificial intelligence is all around us. AI powers Alexa, Google Assistant, Siri, and other digital assistants. AI makes sense of our natural language searches to deliver, we hope, the optimal results. When we chat with a company representative on a website, we often are chatting with AI, at least initially. AI has defeated the human world champions of chess and Go. AI is advancing diagnostic medicine, driving cars, and evaluating all types of risks.

As AI becomes more common, more powerful, and more influential in our societies and our economies, governments are noticing. When Google CEO Sundar Pichai publicly proclaims "there is no question in my mind that artificial intelligence needs to be regulated," the questions are when and how, not whether, this will happen.

Indeed, certain aspects of AI already are regulated, and the pace of regulatory developments is accelerating. This pair of articles tells you what you need to know, and what steps your company can take, to keep ahead of this curve.

I. What Is AI?

Before discussing the regulation of AI, let's review what AI is and how the leading type works.

Experts broadly conceive of two versions of AI:

1) narrow, meaning it can perform one particular function; and

2) general, meaning it can perform any task and adapt to any situation.

All existing AI is narrow. General AI (sometimes known as "artificial general intelligence" or AGI) would be as flexible as human intelligence and, theoretically, could improve itself until it far surpasses our capabilities.

For now, AGI remains in the realm of science fiction, and authorities disagree about whether it is even possible. While serious people do ponder regulating AGI, in case someone creates it, current regulatory initiatives focus on narrow AI.

Narrow AI's risks nonetheless require attention. Working from probabilities, not certainty, your company's AI will make mistakes. A big one will attract unwelcome scrutiny from regulators, the media, and the plaintiffs' bar.

Machine Learning

Machine learning has enabled the recent explosion of AI applications. As one group explains, "Machine learning systems learn from past data by identifying patterns and correlations within it."

Whereas traditional software (and some other types of AI) runs particular inputs through a preprogrammed model or set of rules to reach a defined result (akin to 2 + 2 = 4), a machine learning system builds its own model from the data on which it is trained. The system can then apply that model to make predictions about new data.
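The contrast can be sketched in a few lines of Python. This is a toy illustration only; the "training data" and the simple line-fitting model below are invented for the example, not drawn from any real system.

```python
def rule_based(x):
    """Traditional software: the rule (double the input) is preprogrammed."""
    return 2 * x

def fit_line(points):
    """'Train' a model y = a * x by least squares over example (x, y) pairs."""
    numerator = sum(x * y for x, y in points)
    denominator = sum(x * x for x, _ in points)
    return numerator / denominator  # the learned coefficient a

# The learned model is built from data, not written by a programmer.
training_data = [(1, 2.1), (2, 3.9), (3, 6.0)]
a = fit_line(training_data)

def learned(x):
    """Machine learning: apply the model built from the training data."""
    return a * x
```

Here `rule_based(4)` is always exactly 8, by definition. `learned(4)` comes out close to 8 only because the training examples happened to follow that pattern, so its answer is a well-informed guess rather than a certainty.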

According to trade association CompTIA, algorithms are "now probabilistic.... In other words, we are asking computers to make a guess."

For example, in a technology-assisted document review, lawyers will code a small sample of the document collection as responsive or not. The system will identify patterns and correlations distinguishing the sample documents that were coded "responsive" from those coded "not responsive." It then can predict whether any new document is responsive and measure the model's confidence in its prediction.

For validation, the lawyers will review the predictions for another sample of documents, and the system will refine its model with the lawyers' corrections. The process will iterate until the lawyers are satisfied with the model's accuracy. At that point, the lawyers can use the system to code the entire document collection for responsiveness with whatever human quality control they desire.
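The train-then-predict cycle described above can be sketched with a toy probabilistic classifier. The sketch below is a minimal naive Bayes over word sets, not any particular review tool's actual algorithm, and the sample documents and labels are invented for illustration.

```python
from collections import Counter

def train(labeled_docs):
    """labeled_docs: list of (set_of_words, label) pairs. Returns a simple model."""
    classes = {}
    for words, label in labeled_docs:
        cls = classes.setdefault(label, {"n_docs": 0, "word_counts": Counter()})
        cls["n_docs"] += 1
        cls["word_counts"].update(words)
    return classes

def predict(model, words):
    """Return (label, confidence) for a new document's word set."""
    total_docs = sum(c["n_docs"] for c in model.values())
    scores = {}
    for label, cls in model.items():
        p = cls["n_docs"] / total_docs  # class prior
        for w in words:                 # smoothed per-word likelihoods
            p *= (cls["word_counts"][w] + 1) / (cls["n_docs"] + 2)
        scores[label] = p
    best = max(scores, key=scores.get)
    confidence = scores[best] / sum(scores.values())
    return best, confidence

# The lawyers' coded sample (invented documents, reduced to keyword sets).
sample = [
    ({"contract", "pricing", "client"}, "responsive"),
    ({"contract", "invoice"}, "responsive"),
    ({"lunch", "birthday"}, "nonresponsive"),
    ({"birthday", "cake"}, "nonresponsive"),
]
model = train(sample)
label, conf = predict(model, {"contract", "pricing"})
```

On this tiny sample, the new document is predicted "responsive" with about 86% confidence, because "contract" and "pricing" appeared mainly in documents coded responsive. The validation loop then amounts to reviewing such predictions, adding the corrected documents to the sample, and calling `train` again.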

The quality of the training data set matters greatly. The machine learning system assumes the accuracy of the training data.

In the document review example, if the lawyers incorrectly code every email written by a salesperson as responsive, they will bias the model towards predicting that every sales team email is responsive.

The biased higher probability is not a certainty, however. Other things about an email might overcome the bias. For instance, the lawyers may have coded every email about medical appointments as nonresponsive. As a result, the model still might predict a salesperson's email about a medical appointment is nonresponsive.
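This failure mode can be demonstrated concretely. The sketch below uses a minimal naive Bayes classifier with invented documents, where the sales emails are deliberately mislabeled to bias the model; the appointment vocabulary can still overcome that bias.

```python
from collections import Counter

def train(labeled_docs):
    """labeled_docs: list of (set_of_words, label) pairs."""
    classes = {}
    for words, label in labeled_docs:
        cls = classes.setdefault(label, {"n_docs": 0, "word_counts": Counter()})
        cls["n_docs"] += 1
        cls["word_counts"].update(words)
    return classes

def predict(model, words):
    """Return the most probable label for a new document's word set."""
    total_docs = sum(c["n_docs"] for c in model.values())
    scores = {}
    for label, cls in model.items():
        p = cls["n_docs"] / total_docs  # class prior
        for w in words:                 # smoothed per-word likelihoods
            p *= (cls["word_counts"][w] + 1) / (cls["n_docs"] + 2)
        scores[label] = p
    return max(scores, key=scores.get)

training = [
    ({"sales", "quota"}, "responsive"),
    ({"sales", "lunch"}, "responsive"),  # mislabeled only because a salesperson wrote it
    ({"sales", "pricing"}, "responsive"),
    ({"appointment", "doctor"}, "nonresponsive"),
    ({"appointment", "dentist"}, "nonresponsive"),
]
model = train(training)

# The bias: an ordinary sales email is predicted responsive...
label_sales = predict(model, {"sales", "quota"})
# ...but the appointment words still outweigh the "sales" bias.
label_appt = predict(model, {"sales", "appointment", "doctor"})
```

Running this, the plain sales email is coded responsive (the bias at work), while the salesperson's email about a doctor's appointment is still coded nonresponsive, because its other words point the other way more strongly.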


Originally published by Bloomberg Law

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
