Artificial Intelligence has unleashed a whirlwind of disruptions in the entertainment industry, redefining the way we create, consume and connect. Behind the scenes, AI algorithms are revolutionizing production processes, enhancing visual effects and, of course, creating controversy. Let's take a look at how AI has recently disrupted the entertainment and sports industries and the legal ramifications that could follow.

AI strikes again.

For the first time in 15 years, the Writers Guild of America is on strike. The strike, which began on May 2nd, continues to result in a work stoppage for a crucial part of the entertainment industry. Millions are already feeling the impact: TV shows are stalled, talk shows are going dark and red carpets have been rolled away. Among other demands, the WGA is seeking industry-wide regulation of AI to prevent studios from artificially generating literary and source materials, as well as from training AI on works created by WGA members.

Put simply, generative AI systems use a combination of machine learning and deep learning algorithms to predict the next word or phrase in a text based on patterns learned from source materials. AI can be trained to sort through hundreds of thousands of pages of a creative's works in a tiny fraction of the time it would take a human, and then produce a product in that creative's original style.
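To make the mechanics concrete, the snippet below is a minimal sketch of this kind of next-word prediction, using the openly available GPT-2 model through the Hugging Face transformers library. The model, prompt and parameters are illustrative assumptions only; they are not a depiction of any studio-facing tool.

    # Minimal sketch of next-token text generation (assumes
    # `pip install transformers torch`). GPT-2 and the prompt below are
    # illustrative choices, not any specific industry system.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "INT. WRITERS' ROOM - NIGHT. The showrunner looks up and says"
    result = generator(prompt, max_new_tokens=30, do_sample=True)

    # The model simply predicts likely next tokens given the prompt
    # and whatever text it was trained on.
    print(result[0]["generated_text"])

Trained on a large enough corpus of a particular writer's scripts, the same prediction mechanism can produce new text imitating that writer's style, which is precisely the practice the WGA's proposals seek to regulate.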

As discussed in AI Image Generations: Drawing Infringement Claims, Not US Copyright Protection, using copyrighted materials to "train" AI (which will in turn create new works) raises challenges in the world of copyright. While the courts have not yet definitively answered whether this type of use constitutes copyright infringement, it is clear that the WGA is looking to the industry to take a stance.

The WGA strike serves as a topical reminder of how influential unions and industry groups can be when it comes to shaping laws and policies. Copyright issues aside, the outcome of the WGA strike will likely set a precedent for the treatment and regulation of AI across the entertainment industry and related fields, such as journalism and graphic design.

All (obligatory) apologies.

While AI may be partially responsible for WGA members being off work, some speculate that Ja Morant has recently been putting it to work. The star Memphis Grizzlies player has been suspended pending investigation after displaying a gun on Instagram Live on two separate occasions. Following the backlash, Morant issued an apology that fans speculate was AI-generated.

So why issue an apology if it isn't from the heart? Morant is a public figure who faces increased scrutiny for his actions due to his high profile and professional obligations. As a party to several multi-million dollar sponsorship deals, he is very likely contractually obligated to comply with certain standards of behaviour under provisions known as "morality" or "morals" clauses. Morant's assisted effort at an apology likely represented an attempt to forestall enforcement of the remedies for breach of such a clause.

Morality clauses in the pre-AI era.

Morality clauses allow brands to unilaterally terminate a contract or otherwise take remedial action when sponsored talent engages in behaviour that may be harmful to the brand's image - such as flashing a handgun in a bar. In an era of cancel culture and an ever-evolving digital landscape, a clear and comprehensive morality clause has never been a more valuable tool to mitigate a brand's risk, and has never been more challenging to draft.

While they are common, morality clauses receive pushback from talent. Their concerns often include the breadth of behaviour covered and the vague language used, which leave room for arbitrary enforcement and unfair consequences. Moreover, the current cancel culture phenomenon creates fear that morality clauses could be weaponized to silence dissenting voices or suppress unpopular opinions. Opponents of broad morality clauses insist on clearer definitions, greater transparency and fairer implementation to ensure these clauses are not used as tools for censorship or discrimination.

In Morality Clauses in the Era of Social Media Scandals: What Brands Should Know, we discuss several factors to consider when drafting a thorough and purposive morality clause, including when it should be triggered: when the acts or allegations become public, or only when the acts themselves occurred. In the age of social media, when historical online activity can resurface years later, this temporal consideration is critical. As a result, sponsors will often insist on drafting the morality clause to capture past activity that later becomes widely known.

Typically, talent will push back by arguing that past behaviour should not be grounds to terminate a contract, because it is the sponsor's job to conduct due diligence by reviewing everything the talent has previously posted online. Although this may have been a tall order in the past, thanks to AI the task is suddenly becoming far more achievable.

Due diligence and the future of morality clauses.

In an unprecedented development, AI tools are being built to detect prior behaviour by prospective brand affiliates that may conflict with a brand's values, beliefs and attitudes - including by analyzing talent's social media pages for locations, mentions and even emoji use. Even more impressively, by examining talent's past social media behaviour, AI tools can estimate how likely it is that the talent will engage in activity that would violate the terms of their contract - for example, by infringing third-party copyright.
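As a rough illustration of how such screening might work at its very simplest, the toy sketch below checks posts against a hypothetical list of flagged terms, emoji and locations. Every post, term and location in it is invented for illustration; commercial vetting tools rely on far more sophisticated models than keyword matching.

    # Toy sketch of social media vetting. All posts, terms and locations
    # here are hypothetical; real tools use far richer models than this
    # simple flag-list matching.
    posts = [
        {"text": "Great night out 🍻🔫", "location": "Downtown Bar"},
        {"text": "Proud to support the local food bank ❤️", "location": None},
    ]

    FLAGGED_TERMS = {"🔫", "fight", "lawsuit"}   # illustrative only
    FLAGGED_LOCATIONS = {"Downtown Bar"}         # illustrative only

    def risk_flags(post):
        """Return any flagged terms or locations found in a single post."""
        hits = {term for term in FLAGGED_TERMS if term in post["text"]}
        if post["location"] in FLAGGED_LOCATIONS:
            hits.add(post["location"])
        return hits

    for post in posts:
        flags = risk_flags(post)
        if flags:
            print(f"Review recommended: {post['text']!r} -> {sorted(flags)}")

A real system would, of course, weigh context, audience reaction and patterns over time rather than isolated matches, but the basic idea of surfacing potentially brand-damaging history before signing is the same.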

Such promising advancements in AI technology have the potential to reshape the landscape of morality clauses, making them a less urgent protection to include in contracts. In theory, past violations and contentious patterns of behaviour will be identified prior to signing, providing a comprehensive understanding of an individual's character and reputation (at least as reflected online).

With AI-driven insights, brands may rely on more objective and data-driven evaluations of potential talent, reducing the need for subjective provisions in morality clauses. Additionally, AI-powered reputation management tools could enable proactive monitoring and mitigation of potential risk, allowing brands to address concerns before they escalate.

AI is making waves in unexpected ways - impacting not only our media, but also our interpersonal interactions and perceptions of public figures. With this, privacy concerns, the ethical implementation of AI for risk assessment, and industry-wide standards and guidelines against the misuse of AI are all long overdue for attention. The good news is - we can probably get AI to draft them.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.