Published: Boston Business Journal

September 1, 2023

ChatGPT – the artificial intelligence application from OpenAI that can provide detailed, lengthy natural language answers in response to user questions – famously attracted 1 million users in its first five days and an estimated 100 million users within two months.

The program is possibly the best-known example of generative AI, an artificial intelligence technology that identifies patterns in large quantities of training data and then generates original content – text, images, music, video, etc. – by recreating those patterns in response to user input. Other examples include Google's Bard (which also produces natural language), DALL-E 2 (images) and Synthesia (videos). As ChatGPT's growth suggests, adoption of generative AI is on the rise, and Bloomberg has reported it will be a $1.3 trillion market by 2032.

Organizations may see their employees using the technology both personally and professionally and should explore AI use policies in response. These policies should address the applications' known liabilities while encouraging employees to experiment with their identified strengths. Liabilities include privacy risks, limited intellectual property protection and questionable accuracy. Strengths include assistance with initial research and brainstorming.

Privacy policies and terms of use governing AI applications

Like many web-based programs, generative AI applications tend to have terms-of-use agreements and privacy policies granting their parent companies wide-ranging rights to use the information they receive. For example, per ChatGPT's terms of use, user information can be used to maintain the "Services," a broad term that includes "software, tools, developer services, data, documentation," etc. Per the privacy policy, personal information provided to ChatGPT can be used to "improve our Services," "develop new programs" and "analyze the Services." These amorphous terms are not unusual for a website, but employees are likely to provide more detailed information to, and request more complicated projects from, generative AI programs. An organization's generative AI policy should prohibit sharing confidential company information or any nonpublic customer information with the applications.

Intellectual property, copyright protection and AI

The current position of the U.S. Copyright Office is that copyright protection extends only to works created by a human being. That means content created by generative AI is not eligible for copyright protection. An AI-generated work may become eligible for protection if a human sufficiently alters it, but only for the portions authored by the human. Organizations therefore need to clearly state when and how employees may use content produced by generative AI, to avoid relying on text, images or other media the organization cannot copyright. Organizations should also be aware there is ongoing debate as to whether certain developers of generative AI applications violated copyright law by including protected works in the training data used for their platforms, which may eventually prove problematic for the content those platforms produce.

Ensuring accuracy of AI-generated content

Generative AI users need to know the applications are not always accurate. Bard's terms of use state in bold font: "The Services may sometimes provide inaccurate or offensive content ... Use discretion before relying on, publishing or otherwise using content provided by the Services. Don't rely on the Services for medical, legal, financial, or other professional advice." There have been prominent examples of ChatGPT providing incorrect information, including a New York court brief that cited fictional case law and a professor who was falsely identified as having been accused of sexual harassment. Employers should require their employees to independently verify any information provided by generative AI.

Acceptable resource for initial research

However, if employees confirm the background information they receive, applications such as ChatGPT and Bard can be safely used for a first impression of a research topic, much as many people use a Google search or Wikipedia. Typing a series of basic inquiries into a generative AI program can be a useful shortcut for learning about a new issue, so long as follow-up research identifies any incorrect information.

Effective tool for brainstorming

Prompts to generative AI platforms can also produce content that gets the human mind thinking about a subject in a new way. A paragraph outlining the weaknesses in a client proposal, or an image used to inspire human-created graphics, are excellent uses of the technology's capabilities and can help employees produce better projects and ideas.

Although the liabilities described above (and others) should give organizations pause as their employees explore generative AI, the technology offers numerous capabilities organizations will want to use. A properly drafted policy will help address the risks while helping your organization incorporate generative AI in a smart way.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.