Is AI Putting Your Closely Held Business At Risk?

Hopkins & Carley

Contributor

Hopkins & Carley is a premier Silicon Valley law firm with offices in Redwood City and San Jose. Meeting the legal needs of entrepreneurs, high-net-worth individuals, business owners, and midsize and public companies in a variety of industries, Hopkins & Carley advises clients in business litigation, intellectual property, real estate, employment, estate planning, and corporate, tax and business transactions.
AI tools often drive efficiency and save money, but they have drawbacks. Here's what to know.

Business leaders are considering new AI tools that grow more sophisticated each day. For closely held businesses on the lookout for innovative and tech-enabled cost savings, AI can offer a competitive edge by taking on administrative tasks, speeding up the creative process, and analyzing data. Early adopters benefit by learning along with their AI tools. Businesses that forgo AI altogether risk being left behind.

As an emerging technology, AI is rapidly improving, but you cannot blithely take your hands off the wheel when you implement it. Like any new tool, AI can present unexpected legal, reputational and practical risks. We will consider three real-world AI use cases that illustrate these hazards for closely held businesses, along with solutions to help leverage AI safely and effectively.

How to Minimize Risks for 3 Common AI Uses

Three currently common AI use cases are real-time notetaking, drafting of policies and interacting with customers. Each use poses unique and critical risks with respect to privacy violations, compliance problems, and cybersecurity threats, which often stem from a lack of understanding or appreciation for how these tools work.

AI Notetaking & Transcription

AI notetaking and transcription is a popular and relatively straightforward use case, allowing call participants to engage more actively in the conversation without scrambling to catch every word. These tools transcribe what's discussed and can also generate notes, summaries and action items. Notetakers no longer need to join a call just to transcribe the conversation, freeing them up for other tasks and greater productivity.

Leaders of closely held businesses should think carefully before turning on an AI transcription or summarization tool—especially during sensitive discussions. These tools may pose privacy risks in such conversations, particularly if the tool's vendor has access to the recordings and notes for training purposes. Improper storage of that information also raises the specter of data leaks if the vendor's cybersecurity is breached.

Most critically, using these tools in board meetings can lead to serious unexpected consequences when trying to sell your closely held business or in the event of a lawsuit. These recordings and summaries may be discoverable in the due diligence or litigation process. There may also be requirements for participants' consent to the recording.

A human scrivener might employ discretion to omit sensitive or unnecessary details. AI may not understand such nuance. It does what it was designed for: transcribing verbatim and producing comprehensive summaries. AI-generated notes can also contain errors regarding the content discussed, causing the business to rely on erroneous data in making a decision. For these reasons, a human should review AI transcriptions for notes of key meetings.

Before incorporating AI tools in sensitive situations like board meetings or HR calls, make sure to understand how the tool works, how the vendor stores and uses data, and the potential downstream legal risks if this information is disclosed to outsiders.

Policy and Document Drafting by AI

The impulse is understandable: let AI draft a document you would otherwise seek from an attorney. After all, ChatGPT passed a bar exam, right? Indeed, many law firms are looking at ways to implement AI for basic legal tasks. For now, however, ChatGPT and other generative AI platforms—especially those that are free and general purpose—tend to make poor lawyers.

For example, a business asking AI to write a website privacy policy for its customers would hope to save time and legal expense. Unfortunately, the "dumb" nature of AI will likely surface; it does not know your business. Even if you specify an industry, it will generate what it considers the average of all privacy policies—one that does not account for where your business operates, the data it processes, the products or services it provides, or the unique considerations of your industry. Generative AI can also hallucinate, producing clauses that have no place in a legal document.

As with meeting transcriptions and notes, a misworded policy or disclosure can have expensive legal and reputational consequences. What's more, a lawyer is likely to spend more time reviewing and editing AI-drafted material (perhaps rewriting it entirely), making any cost savings negligible.

Consider whether generative AI is capable of the task at hand. Highly complex and nuanced work products (especially those with legal impacts) should be undertaken carefully, with close human supervision, and might be better left until AI technology advances further.

No matter who drafts a document using an AI platform, if the user enters confidential or trade secret information, it could become public. Even if that does not happen, a closely held company could find its trade secrets at risk if it fails to reasonably protect such information through adequate guidance to its employees. To mitigate such risks, a business can prohibit use of such platforms altogether or preclude the entry of any personal, confidential or trade secret information. Unless they have a treasure trove of first-party data on hand, most closely held businesses are unlikely to set up a proprietary AI platform of their own, but that option may prove more feasible as the technology continues to advance.
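For businesses that allow employees to use such platforms, even a simple automated screen can help enforce a "no sensitive data" rule before text ever leaves the company. The sketch below is a hypothetical Python filter; the patterns, function names, and blocked categories are illustrative only and would need to be tailored to the business (and are no substitute for employee training or a written policy).

```python
import re

# Illustrative patterns a business might flag before text reaches an
# external AI platform; a real deployment would need broader coverage.
BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any blocked patterns found in the text."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(text)]

def submit_to_ai(text: str) -> str:
    """Refuse to forward text that appears to contain sensitive data."""
    hits = screen_prompt(text)
    if hits:
        raise ValueError(f"Prompt blocked; sensitive patterns found: {hits}")
    # ...forward the text to the external AI platform here...
    return "submitted"
```

A filter like this only catches obvious formats (numbers, email addresses); it cannot recognize a trade secret described in plain prose, which is why written policies and training remain essential.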

Augmenting Marketing and Customer Interactions

Closely held businesses looking to save on marketing costs and boost response times with customers might be tempted to turn to AI for help. Generative AI-enabled chatbots, like those using Claude, Gemini, and ChatGPT, could assist customers with routine issues, reserving human resources and customer service personnel for more complex problems. AI-generated images could cut costs and help market products more creatively.

Remember, however, that as convincingly human as AI chatbots can be, they can struggle with emotionally charged situations, like when a customer is frustrated. Importantly, they can even give incorrect responses due to hallucinations—which could cost a company financially and reputationally, as Air Canada recently discovered when forced to honor an AI-hallucinated discount. These chatbots can also be hacked, creating legal risks if sensitive customer or employee information is accessed by cybercriminals.

Marketing use cases also require caution. A closely held business with a sterling reputation and trusted relationship with its customers could put that goodwill in jeopardy if AI-generated marketing images prove misleading about the nature or quality of the products or make claims that are obviously false. Legally, regulators like the Federal Trade Commission may also investigate deceptive or manipulative marketing materials generated by AI—even if it was unintentional.

Text marketing campaigns could also incur fines (and customers' ire) if AI tools send autodialed messages without prior express written consent, further emphasizing the importance of close human supervision.

Put AI use policies in place and keep a close eye on generative AI outputs for accuracy and compliance with applicable regulations. Consider whether the use of these tools aligns with the company's culture and whether it could imperil longstanding customer relationships.

Put AI Guardrails in Place Now

There's no doubt that AI tools can be a real asset for closely held businesses and that leaders should start experimenting with them now. Just remember to start small, test tools before you integrate them, and prioritize data privacy and security.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
