Q: I recently started using a generative AI program to publish a weekly newsletter that I send to my company's network. What are some issues I should be aware of?

A: Organizations that rely on generative artificial intelligence to communicate with their networks need to review the content of all outgoing communications. While generative AI can produce ample text and art with minimal instruction, that content may not represent the organization well, or it may be a hallucination, i.e., a response that an AI system presents as fact but that contains false or misleading information. Examples of this phenomenon include instances in which a generative AI application falsely identified a mayor as having been convicted of bribery and a law school professor as having been accused of sexual harassment. If the newsletter contains general advice to customers and potential customers, relevant staff should review the text to make sure it is accurate.

If the communication contains customer-specific information, the organization should think carefully about entering that information into the AI application. Per the terms of use for many publicly available generative AI programs, users grant the program's parent company broad rights to use information they provide. Unless the organization has a separate contract to use the AI that establishes greater privacy and confidentiality obligations, it should avoid providing sensitive customer information to generative AI programs, particularly if the organization has confidentiality obligations to the customer.

Some users also rely on generative AI programs to create original art for their newsletters and client communications. That could cause a problem if they intend for that art to become part of their ongoing branding. Text, images, video, music, and other content generated autonomously by AI do not receive the same intellectual property protections as content produced by human beings. The United States Copyright Office has consistently rejected applications for copyright protection when there is no human author, as federal law requires. By relying on images and other branding generated by AI applications, organizations could find that competitors and other third parties are free to co-opt their art.

Organizations should also be careful about the representations they make about their use of AI. Although an AI-generated newsletter is unlikely to cause problems on its own, the Federal Trade Commission and state attorneys general use consumer protection statutes to police "unfair or deceptive" acts and practices in commerce. Organizations that say one thing about AI and do another may come under scrutiny.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.