Exploring The Issues And Implications Of Artificial Intelligence


Highlights

  • Holland & Knight recently teamed with Florida State University College of Law to host an Artificial Intelligence (AI) Summit at the firm's Washington, D.C., office.
  • Using panel-style discussions, the summit provided an overview of AI, the legal and regulatory issues facing organizations and people who use it, and the technology's broader implications.
  • Panelists included current and former representatives from Florida State University College of Law, the U.S. Equal Employment Opportunity Commission, Federal Trade Commission and The New York Times, among other organizations.

Holland & Knight and Florida State University College of Law hosted an Artificial Intelligence (AI) Summit in Holland & Knight's Washington, D.C., office on April 2, 2024. Holland & Knight Partner Anthony DiResta and Florida State University College of Law Dean and Donald J. Weidner Chair Erin O'Hara O'Connor hosted the event. Mr. DiResta, who co-chairs the firm's Consumer Protection Defense and Compliance Team, previewed three questions that the Summit would grapple with: 1) What is AI? 2) What ethical considerations must be taken into account when regulating AI? 3) What impacts will AI have on the future? Though Dean O'Connor emphasized that AI will drastically change the ways in which the legal industry operates, she urged the audience not to fear these changes but rather to meet them head-on with a true understanding of the technology.

The Summit consisted of three panels and a keynote address by Professor Anu Bradford, a leading scholar on the European Union's (EU) regulatory power and sought-after commentator on digital regulation, among other subjects. Professor Bradford is the author of the recently published book Digital Empires: The Global Battle to Regulate Technology (Oxford University Press, 2023), a lauded work that addresses themes at the heart of all of the panels for this event.

Panel 1: Overview of AI

This panel was moderated by Holland & Knight Partner Da'Morus Cohen, a member of the firm's Consumer Protection Defense and Compliance Team, and laid the foundation for the forum by providing a broad overview of AI, its current capabilities and limitations, and the challenges and opportunities it presents. Mr. Cohen was joined by Shawn Bayern, the Larry and Joyce Beltz Professor of Torts and Associate Dean for Technology at Florida State University College of Law; Baker Botts Partner Richard Harper; and U.S. Equal Employment Opportunity Commission (EEOC) Commissioner Keith E. Sonderling.

Mr. Cohen started the discussion by asking the panelists, "What is AI?" Professor Bayern explained that the best single definition is that it is "a sophisticated way of using computers to detect patterns." Similarly, Mr. Harper answered that it is a machine performing tasks "typically associated with human thinking."

Commissioner Sonderling approached the question from his EEOC background and the ways in which AI, including machine learning, affects employment decisions. He traced the growth of AI from its traditional use assisting in employment decisions, to generative AI that creates work product, and now to an era of AI used to monitor productivity and oversee workforces. Commissioner Sonderling also discussed the challenges that AI poses for job listings, explaining that AI can create the "perfect job description" but that there is a lack of transparency as to where the AI-generated qualifications originated. It is the EEOC's position that every line on a job description must be necessary. Employers must ask themselves: Is this qualification necessary, or does it result in a disparate impact? Commissioner Sonderling emphasized that employers are liable for the decisions that AI "makes" and that intent is irrelevant for liability purposes.
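One common quantitative screen for the disparate impact question Commissioner Sonderling raised is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than four-fifths (80 percent) of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. The Python sketch below works through that arithmetic with hypothetical numbers; the data and function names are illustrative assumptions, not figures from the Summit, and the rule alone does not establish liability.

```python
# Minimal sketch of the EEOC's "four-fifths rule" screen for disparate
# impact (29 C.F.R. 1607.4(D)), applied to hypothetical hiring data.
# The numbers and group names below are illustrative assumptions only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """True if a group's rate is at least 80% of the highest group's rate."""
    highest = max(rates.values())
    return {group: rate / highest >= 0.8 for group, rate in rates.items()}

# Hypothetical outcomes of an AI-screened hiring round.
rates = {
    "group_a": selection_rate(48, 100),  # 48% selected
    "group_b": selection_rate(30, 100),  # 30% selected
}

print(four_fifths_check(rates))
# {'group_a': True, 'group_b': False}: group_b's ratio is
# 0.30 / 0.48, roughly 0.625, below the 0.8 threshold, signaling possible
# adverse impact that would warrant closer review of the criterion.
```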

The conversation transitioned to ChatGPT and the unique opportunities and challenges it poses. Professor Bayern emphasized that ChatGPT's capabilities do not come from its language model but rather from its inputs, which are created by humans. From Professor Bayern's perspective, the problem is not ChatGPT itself but rather the weight of authority that it has been given. Mr. Harper's remarks encapsulated the general sentiment of the panel: to him, the lack of transparency is the main challenge posed by ChatGPT specifically and AI technologies generally.

Mr. Cohen asked the panelists whether there are any common misconceptions about AI. Mr. Harper answered that a common misconception is that AI is "conscious." He explained that AI is not self-sustained; instead, it requires human engagement, and he emphasized that humans must engage with AI and not just "let it run." Commissioner Sonderling made a similar point from the employment perspective. He explained that employers cannot control an AI system's design but that they can control how it is used, and he underscored that companies are responsible for governance. Professor Bayern reiterated the sentiments of the other panelists and emphasized that AI is not something to be afraid of but rather a helpful tool that can solve simple problems with ease.

To conclude the discussion, Mr. Cohen asked the panel about ways AI is used that the public may not be aware of. Professor Bayern explained that there are different types of AI, including private AI, and that AI can no longer be separated from tech. Mr. Harper answered that tech companies have a wide variety of focuses, including medical applications, and that AI has started to be used across spaces. Finally, Commissioner Sonderling gave the example of a prominent video conferencing platform: when you join a meeting, facial, voice or resume-screening AI may be running in the background without your awareness.

Panel 2: Legal and Regulatory Issues of AI

This panel was moderated by Mr. DiResta and built on the legal and regulatory issues that emerged in the first panel, focusing on AI's potential to touch virtually every aspect of our economy and society and on the legal and regulatory responses. The panel comprised David Vladeck, Professor and Faculty Director of the Center on Privacy & Technology at Georgetown University Law Center and former Director of the Federal Trade Commission's (FTC) Bureau of Consumer Protection; FTC Senior Attorney Michael Atleson; and Holland & Knight Partner Kwamina Williford, co-chair of the firm's Consumer Protection Defense and Compliance Team.

Mr. DiResta began the conversation by asking panelists for their observations on AI's legal and regulatory landscape. Professor Vladeck started by discussing the divide between those who think AI is a utopia and those who think it is a dystopia. In his view, AI is more dystopian; his concern is that the government is not ready for the challenges AI poses and that until there are real laws in place, "we are in trouble." He gave the example of fake legal briefs produced by AI and the lack of guardrails in place to prevent their dissemination. In contrast, Mr. Atleson answered that the FTC views AI through the lens of its mission and not through the lens of technology. He explained that the Commission will enforce against AI under the FTC Act if it results in unsubstantiated claims, deception or fraud, consumer manipulation, or deceptive data collection. Ms. Williford's remarks struck a middle ground. She explained that AI does not operate by itself; rather, some human is the actor, and as such, the law regulates actions and impacts. Ms. Williford also discussed the importance of FTC "use" cases, particularly the case in which a national pharmacy chain was banned from using AI facial recognition for five years, as a guide to understanding the regulatory landscape. She emphasized that companies must monitor AI to help ensure that it produces accurate outputs.

Mr. DiResta next asked the panel whether the FTC is doing enough to meet the challenges posed by AI. Professor Vladeck responded that while the FTC is doing a great job, it lacks the resources to fully regulate AI. He noted that during his time at the FTC the Commission focused on enforcement actions, and he commended today's Commission for using rulemaking to make law. He also urged the FTC to bring pure "unfairness" cases, explaining that there is significant academic literature on unfairness violations but that academia has not helped to define policy.

In a follow-up to Professor Vladeck's answer, Mr. DiResta asked Mr. Atleson if "deceptive and unfairness" cases are enough or if "abusive" cases must also be brought. Mr. Atleson answered that the FTC currently relies on FTC Act Section 19 and penalty offense letters but that it needs a better avenue for bringing civil penalties. He agreed that the FTC needs additional resources but was skeptical that a new law to govern AI is necessary. Mr. DiResta then asked Ms. Williford how she advises clients when there is regulatory ambiguity on these issues. Ms. Williford explained that she advises clients on high-level principles that govern AI, such as accuracy, fairness, bias, transparency, control over how AI works, accountability, privacy and consent. However, she said that attorneys need regulatory guidance on how AI interacts with existing laws and what additional guardrails apply to AI in order to advise clients on their duties and exposure. Mr. DiResta, picking up on Ms. Williford's theme of what duties are owed, noted that some guidance is found in consent orders but asked her what else is needed. She responded that there is no straightforward answer but that consent orders, industry discussions, staying in touch with colleagues on Capitol Hill and constantly surveying the industry help her provide the best, most up-to-date advice to clients.

The conversation turned to what counselors and companies can do to be compliant. Mr. Atleson pointed to numerous resources that provide guidance, including FTC blog posts, the FTC Biometric Policy Statement, actions taken by other federal agencies, the National Institute of Standards and Technology, and the U.S. Department of the Treasury report on cybersecurity and risk management for financial services. Ms. Williford said that for counselors and companies to be compliant, they need baselines and standards that create clear guardrails, which can come from use cases. She explained that companies often want to be compliant but lack clear guidance on what is permissible. In contrast to Mr. Atleson and Ms. Williford, Professor Vladeck contended that consent decrees provide the guidance necessary to achieve compliance.

To wrap up the discussion, Mr. DiResta asked whether the FTC uses AI to assist in case decisions. Professor Vladeck answered that any tool that can help gather and parse information is helpful but that any use beyond that would require the explicit consent of the agency's commissioners. Mr. Atleson explained that he does not personally use AI in deciding whether to move forward with a case but that some FTC attorneys use AI to comb through discovery productions. He caveated this statement by noting that AI is not used by the FTC in any consequential way and that he would be concerned if it were, given its issues with reliability and bias.

Panel 3: Implications of AI

Holland & Knight Partner Brian Goodrich moderated this panel, which focused on the impacts of AI that are already being felt, as well as what the future may hold as AI continues to affect businesses, governments and individual citizens in the years to come. Mr. Goodrich, also a member of the firm's Consumer Protection Defense and Compliance Team, was joined by Kashmir Hill, a technology reporter for The New York Times and author of Your Face Belongs to Us; Michael Frank, Senior Fellow at the Wadhwani Center for AI and Advanced Technologies, Center for Strategic and International Studies; and Hodan Omaar, Senior Policy Analyst at the Center for Data Innovation, Information Technology & Innovation Foundation (ITIF).

Mr. Goodrich started the conversation by noting that the prior panels discussed regulations, but what about consequences? He asked the group for its greatest hope and fear when it comes to AI. Mr. Frank described himself as an optimist, explaining that AI is a tool capable of replicating many tasks and creating value for society. Ms. Hill, on the other hand, has privacy-based fears of AI. She worries that AI identifies, rates and determines our opportunities without our knowing about or controlling its actions. Ms. Hill gave the example of a Google-created AI meant to detect inappropriate images of children; the AI ended up also flagging images of children that had been provided to doctors, a judgment that was wrong because the AI did not understand the context. Ms. Omaar's sense of AI struck a middle ground. She explained that, to her as a consumer, AI is great, but that she does not have faith that AI will be applied to climate change or other causes in the public interest. Mr. Goodrich picked up on Ms. Omaar's discussion of "good causes" and asked how industry and government can move into these good causes. Ms. Omaar is working on a report that tackles this question. She explained that to address causes in the public interest, everything from risk to society, economics and democracy is at play. Ms. Omaar said that one way to benefit the public is to squarely address different types of privacy harms with privacy statutes.

Mr. Goodrich next asked the panel how reporters can push AI in the right direction. Ms. Hill responded that journalists' current focus is to inform the public about AI and to stay on top of developments. She also cited The New York Times' suit against OpenAI as one way reporters have moved AI in the right direction. In this vein, Mr. Goodrich posed the question of how to mitigate the kinds of catastrophic consequences at the heart of The New York Times suit. Ms. Omaar believes that risk is mitigated by focusing on innovation, because better technology requires less data. Mr. Frank said that to prevent catastrophic consequences, the right person needs to be in charge of the system and everyone needs to be represented in the process. He also noted that companies that use AI in a consumer-facing way need to be more careful and take more time to fully understand its implications. Ms. Hill's remarks complemented those of the other panelists: she discussed how the National Institute of Standards and Technology has tested automated facial recognition for more than 20 years, yet there are still concerns about how it works and how it is deployed. Much comes down to ensuring that the right people are in charge of the system and that everyone is represented in the process.

Building on the theme of innovation and pushing AI in the right direction, Mr. Goodrich focused the discussion on whether AI can ease the tensions around disparate impact or only make them worse. Ms. Omaar gave the example of an AI system that a prominent technological institution built for a local school system. On paper, the system worked great because it came up with better school schedules. In reality, however, the schedules did not work for families because siblings were starting and ending school at different times. Ms. Omaar said that if the public is not in the know, it is hard to get citizen buy-in. Mr. Frank explained that AI can make things both better and worse. He gave the examples of India and Europe: in his view, India's embrace of AI will give it a tremendous boost, while the EU AI Act is a mistake. Ms. Hill similarly understands AI as doing both, saying that it is not a monolith. She explained that AI hiring tools can lead to a broader scope of candidates being interviewed, which eases the tensions of disparate impact. At the same time, low-income areas are under more surveillance, which makes those tensions worse.

Mr. Goodrich continued the discussion and asked the panel whether there are any ways average consumers can protect themselves or whether it is up to government and regulators. Ms. Hill believes that consumers have little ability to protect themselves, especially given the lack of transparency into how AI works. Ms. Omaar echoed Ms. Hill's comments, explaining that consumers do not know what they do not know and thus are not in a position to protect themselves. In contrast, Mr. Frank is overall optimistic that systems will improve and that processes will develop. However, he explained that to truly protect consumers, there needs to be a national privacy law. Mr. Goodrich wrapped up the discussion by following up on the theme of privacy, asking: What is the greatest threat that AI poses to privacy? Ms. Hill responded that the only mechanism preventing a world in which all consumers are readily identified by anyone and everyone is a regulatory environment; there need to be more guardrails to protect privacy.

How We Can Help

Holland & Knight's Consumer Protection Defense and Compliance Team includes a robust AI practice, with experienced attorneys who are recognized thought leaders in consumer protection and competition issues, covering all industries and topics.

