David Sofge is an attorney based in Holland & Knight's Fort Lauderdale office.
Draft Ethics Guidelines for Trustworthy AI were floated in mid-December 2018 by the European Commission's "High-Level Expert Group on Artificial Intelligence" (AI HLEG). The document lays out the beginnings of a regulatory framework for "human-centric" and emphatically human-controlled artificial intelligence (AI). As with the European Union's (EU) General Data Protection Regulation (GDPR), the idea is to establish a "third way" to regulate the impact of new technology on citizens of EU member states, differing both from China's more prescriptive approach and from the free-market model in the United States.
Europe cannot at present match the technological advances of the U.S., China and Japan, but the guidelines anticipate that establishing an economic zone based on superior ethical standards that inspire public trust and confidence in AI may alter the global debate and point the way to a new de facto international standard. If this appears overly ambitious, the example of the GDPR is invoked. U.S. companies can attest to the substantial costs and strenuous efforts that have gone into bringing their operations into compliance with the GDPR, which has created ongoing regulatory risk and an ever-present danger of reputational damage.
The links between the GDPR and the draft AI guidelines are already explicit: The Article 29 Working Party under the GDPR has expressed the view that Article 22(1) of the regulation establishes a general prohibition against fully automated individual decision-making, and the guidelines cite the GDPR as a parallel manifestation of the EU's commitment to continued human dominion over advanced technology.
A central concept of the draft guidelines is Trustworthy AI ("our north star"), which encompasses both ethical purpose and technical robustness. The draft moves from a high-level discussion of European concepts of fundamental human rights, proceeds to guidance on implementation and concludes with lists of specific technical suggestions. A final version due in March 2019 is to include specific use cases for healthcare, autonomous vehicles, insurance and profiling by law enforcement.
Under the AI HLEG's mandate, these guidelines will have no binding legal effect at inception and will instead be voluntary. Despite this modest debut, U.S. firms with recent hard experience of the GDPR may find it advisable to keep a close eye on the budding career of "Trustworthy AI made in Europe."
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.