On April 25, leaders of key U.S. federal enforcement agencies jointly stressed their intent to crack down on discrimination arising from the use of artificial intelligence (AI) and other automated systems. Joining the statement were the Director of the Consumer Financial Protection Bureau, the Assistant Attorney General for the Justice Department's Civil Rights Division, the Chair of the Equal Employment Opportunity Commission, and the Chair of the Federal Trade Commission.

There's been a lot of discussion recently about proposals for new statutes or regulations to govern AI — much of which we have covered in prior pieces. But this discussion shouldn't lull companies into thinking that AI is an unregulated "Wild West." The agency heads emphasized that laws already on the books give them the authority to police unlawful discrimination by AI and other algorithmic systems.

Many factors can lead algorithms to produce results that are biased against certain groups, including protected classes. Making business decisions — such as screening job candidates, hiring, firing, promotions, and credit offers — based on those results could land a company in legal hot water with both government enforcers and private plaintiffs, as could selling biased AI systems.

Companies should consider putting comprehensive AI risk-management programs in place now to address these current perils, while keeping a close watch on the evolving legal landscape. Waiting for new, AI-specific laws before starting an appropriate risk-management program may well leave a business on the wrong side of a federal investigation.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.