Most systems do not protect sensitive information used in prompts, and users bear most of the risk of using generative AI systems and outputs.
The following is an edited version of the content presented in the video.

Generative AI systems use the information provided in prompts - along with the data that the systems were trained on - to create outputs. Each system has its own rules for how information provided by users is protected, whether that information can be used for training, and how outputs can be used by users and the system itself. Many, if not all, of these rules are contained in the system's terms of use.

Companies should pay close attention to the terms of use of any generative AI system they use to ensure that they understand their rights and can protect their privacy.

The terms of use usually state that information provided as prompts or inputs is not considered confidential and that the system may use inputs for future training. Some AI companies let users choose to have their inputs excluded from training, but that does not mean that inputs will be treated confidentially. In most cases, inputs will not be adequately protected, and users should not enter confidential or sensitive information as prompts, including personal information or proprietary intellectual property such as software source code.

Terms of use often state that the user owns the outputs resulting from prompts they enter into the generative AI system. This may seem desirable, but users should consider some nuances to understand the implications of this arrangement.

Even if the terms of use indicate that the user owns the output, ownership of IP rights for generative AI outputs is not clear under existing copyright and patent law, and enforcing IP rights may be difficult or impossible. For more detail, see "How do intellectual property rights apply to generative AI outputs?"

Moreover, the terms of use might still include restrictions on how outputs can be used, particularly on uses that facilitate competition with the AI system that generated the output. Users should check in advance to be sure that their use case is allowed by the system's terms of use.

In some cases, terms of use may indicate that the generative AI company has an express right to use outputs, even when the user owns them. Some even stipulate that the AI company owns the outputs, although this is still not very common. In such cases, it is important to understand how the user is permitted to use outputs and whether the permitted uses are adequate.

Importantly, most terms of use stipulate that users are responsible for all risks associated with using the generative AI system, including the outputs, with a total disclaimer of all representations and warranties. In such cases, no indemnity or other protection is offered by the AI provider.

Indeed, standard terms of use typically include aggressive representations, warranties, and other provisions that protect the generative AI company, including an indemnification by the user against any infringement claims arising from the user's inputs and the tool's outputs. In most cases, the user bears all risk related to the output of the AI tool, so it is critical for users to conduct thorough due diligence on any output before using it.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.