Are machines biased? The FTC has issued Business Guidance about the use of artificial intelligence (AI), warning marketers about the danger of the potential discriminatory impact of automated decision-making. As the FTC noted, the use of AI "presents risks, such as the potential for unfair or discriminatory outcomes or the perpetuation of existing socioeconomic disparities." To illustrate the risk, the FTC's Guidance cites a study of an algorithm used to target medical interventions to the sickest patients that wound up funneling resources to a healthier, white population, to the detriment of sicker, black patients. The Guidance also cites a number of enforcement actions brought by the FTC and other government agencies under the Fair Credit Reporting Act, the Equal Credit Opportunity Act, the Fair Housing Act, and more, where companies have used big data in misleading or discriminatory ways in their advertising, targeting practices, or other interactions with consumers.
To mitigate these risks, the FTC's Guidance recommends that companies using artificial intelligence tools be "transparent, explainable, fair, and empirically sound, while fostering accountability."
That means not misleading consumers about the use of AI and affirmatively telling consumers how algorithms are being used, particularly if they're used for important decisions, like issuing credit. It also means looking hard at the algorithm – and the outcomes it produces – to ensure that it doesn't discriminate, even if it appears to be neutral. Are the data and models on which the algorithm is based accurate, robust, and "empirically sound"? As the Guidance notes, it's important for companies to validate – and revalidate – their data and models to ensure that they are powering the company's AI tools appropriately and fairly.
Finally, the Guidance urges companies to hold themselves accountable for "compliance, ethics, fairness and nondiscrimination." What does accountability look like to the FTC? It means, first, considering some basic questions about the company's use of AI: How representative is the data set? Does the data model account for biases? How accurate are the company's predictions based on big data? And does the company's reliance on big data raise ethical or fairness concerns? It also means considering whether third parties should be engaged to test the AI tools used by a company to ensure that their use is not producing unexpected discriminatory outcomes.
This Guidance, and the FTC's reasons for issuing it, are compelling. As the FTC notes, a practice is unfair under the FTC Act if it causes more harm than good. Marketers should understand the real significance of the FTC's warning that consumers are harmed when they are subjected to digital redlining on the basis of race, sex, or other protected class: the FTC sees this practice as a consumer protection issue and a potential violation of the FTC Act. It means that more enforcement actions may well follow from this Guidance.
It's time for Equity by Design.
This alert provides general coverage of its subject area. We provide it with the understanding that Frankfurt Kurnit Klein & Selz is not engaged herein in rendering legal advice, and shall not be liable for any damages resulting from any error, inaccuracy, or omission. Our attorneys practice law only in jurisdictions in which they are properly authorized to do so. We do not seek to represent clients in other jurisdictions.