ARTICLE
22 April 2019

Everything Is Not Terminator: Value-Based Regulation Of Artificial Intelligence

McLane Middleton, Professional Association


Founded in 1919, McLane Middleton, Professional Association has been committed to serving its clients, community, and colleagues for over 100 years. It is one of New England's premier full-service law firms, with offices in Woburn and Boston, Massachusetts, and Manchester, Concord, and Portsmouth, New Hampshire.

Published in The Journal of Robotics, Artificial Intelligence & Law (May-June 2019)

Last fall, Reuters reported that Amazon had developed a hiring tool that used artificial intelligence to review job candidates and make hiring decisions, but that the program discriminated against women. Although Amazon ultimately abandoned the AI application as a mechanism to autonomously hire staff, the program represented one of the worst-case scenarios for artificial intelligence: inherent bias or discriminatory preferences baked into the AI, tainting every decision and analysis the AI performed. This problem is not uncommon. A 2016 analysis of AI risk assessment software used to estimate the probability that a criminal defendant will re-offend revealed that the software disproportionately rated white offenders as lower risk than black offenders, even when the white offenders' criminal histories indicated a higher likelihood of re-offending. Similarly, researchers have expressed concern that AI used to review loan applications will impermissibly rely on race by drawing connections between geographic information (which is relevant to the lender's decision) and the ethnic background of the people known to live there (which is not). Compounding the potential for discriminatory action is the "black box" problem: companies that develop AI programs are typically reluctant to let consumers and regulators review their code, resulting in an algorithmic black box in which decisions are made, but no one knows how or why.


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
