Anyone who has been following the explosive growth of generative artificial intelligence (AI) knows that a wide variety of AI tools are now available for general public use.
For professional marketers who want to use these tools to generate ideas, images or social media posts, or otherwise expedite the process of developing client output, it is important to be mindful of the legal risks involved. At a high level, there are four legal risks to keep top of mind when using AI in marketing.
Intellectual property and privacy risks
- Ownership of generated content: The content you generate using AI may not be owned by you, your company or your clients. This is because Canadian copyright law protects original works, meaning a work must be the product of its author's "skill and judgment." There is an ongoing legal debate about whether the prompts used to generate AI content involve enough skill and judgment to meet this threshold.
- Intellectual property infringement: AI-generated content could infringe on someone else's intellectual property rights. This is particularly problematic because AI tools are often trained on copyrighted material. As a result, these tools can inadvertently produce content that is too similar to existing works, and it can be very difficult to detect when an AI tool has reproduced copyrighted material.
- Privacy and security risks: Using AI tools can put your employer's or client's privacy and security at risk. AI systems often require access to large amounts of data, which can include sensitive information, and information entered into a prompt may be retained by the tool's provider or used to train future models.
Other liabilities
You, your employer or your clients could also be held responsible for the use of AI-generated content that is affected by AI bias or AI hallucinations.
- AI bias takes various forms, but at its most basic it refers to the systemic errors and prejudices embedded in machine learning algorithms, rooted in the data used to train the models, that skew their outputs. Put simply, a biased tool may overlook or misrepresent minority groups, creating reputational problems for your clients.
- AI hallucinations are responses generated by AI that present false or misleading information as fact. People who treat AI as a research tool and accept its output at face value risk repeating these errors, and if hallucinated information finds its way into a marketing campaign it could seriously harm your client's reputation and damage their brand.
How can you mitigate these risks?
To mitigate these risks, consider the following:
1. Understand terms of use: Each AI tool has "terms of use," which is essentially a contract between the company providing the tool and the user. Ensure you understand these terms, as they may preclude you from claiming ownership of generated content or from using it in a commercial context.
2. Conduct thorough reviews: Always review AI-generated content for potential intellectual property infringements. This can be challenging, but it's crucial to avoid costly litigation down the road. For example, simply reproducing AI-generated content in externally facing communications, like marketing and customer communications, can present copyright infringement problems.
3. Verify information: Be cautious when using AI for research. Just as in the early days of Wikipedia, verify the information generated by AI tools before relying on it, to avoid spreading false or misleading information.
4. Avoid AI washing: AI washing is a deceptive marketing tactic in which the role of AI in a product or service is overstated in order to promote it. It is extremely important for marketers to avoid this practice because, like AI hallucinations, it has the potential to damage both a client's brand and your own.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.