ARTICLE
18 November 2025

AI Reporter - November 2025

Benesch Friedlander Coplan & Aronoff LLP

Contributor

Benesch, an Am Law 200 firm with over 450 attorneys, combines top-tier talent with an agile, modern approach to solving clients’ most complex challenges across diverse industries. As one of the fastest-growing law firms in the country, Benesch continues to earn national recognition for its legal prowess, commitment to client service and dedication to fostering an outstanding workplace culture.

News of AI-generated "actress" Tilly Norwood's potential signing with a talent agency has ignited debate among actors and other creative artists, including Emily Blunt, Whoopi Goldberg and Natasha Lyonne, who worry that AI-generated performers could displace human talent. SAG-AFTRA criticized Norwood as an AI creation built on the work of real actors, raising concerns about intellectual property, privacy and the sustainability of creative careers. The rise of digital avatars like Hatsune Miku and platforms like OpenAI's Sora, which enables users to generate and share AI videos using copyrighted material, further intensifies these concerns. Creative Artists Agency and top YouTube creator MrBeast have expressed concerns about fair compensation and the erosion of creators' livelihoods, especially as AI tools become more integrated into platforms like YouTube. The rapid evolution of AI in entertainment underscores a critical tension between innovation and the protection of human creativity.

In the courtroom, Elon Musk intensified his legal battle against OpenAI, accusing the company of obstructing efforts to depose Mira Murati, a central figure in its Microsoft partnership and the controversial removal of Sam Altman in 2023. Musk claims Murati possesses vital information for his lawsuit, which alleges OpenAI strayed from its nonprofit roots and misled investors, including himself. Despite OpenAI's objections citing missed deadlines, the court allowed the deposition to proceed. In a separate case, Musk's startup xAI sued OpenAI for allegedly misappropriating trade secrets, though OpenAI argues the claims are vague and intended to suppress competition. Meanwhile, a federal judge ruled that authors can pursue copyright infringement claims against OpenAI, citing plausible evidence that ChatGPT outputs closely resemble protected literary elements.

California recently enacted landmark AI regulations that underscore its commitment to transparency, safety and consumer protection in AI technologies. AB 853 updated the California AI Transparency Act by postponing the deadline for AI-generated content disclosures and extending disclosure requirements to major online platforms, GenAI hosting services and device manufacturers. SB 243, known as the Companion Chatbot law, requires operators to clearly disclose when children are interacting with AI. SB 53, the first comprehensive AI safety and transparency law in the U.S., requires large AI developers to publicly report their security measures, particularly those addressing catastrophic risks such as cyberattacks and biological threats, and to implement them under oversight from the Office of Emergency Services. These laws reflect California's proactive approach to regulating AI, emphasizing accountability, formalizing best practices and safeguarding users, especially minors. These and other stories appear below.

AI in Business

BMS, Takeda and Astex join AI training consortium

The AI Structural Biology (AISB) Network, a consortium dedicated to leveraging AI to expedite drug discovery and development, is developing OpenFold3, an AI model that predicts protein-ligand interactions. The consortium members—in collaboration with the AlQuraishi Lab at Columbia University—will contribute thousands of validated protein-small molecule structures, creating a comprehensive and diverse dataset to improve the model's predictive accuracy. Data sharing is managed through a federated, secure computing platform developed by Apheris, ensuring privacy and security while enabling collaborative AI model training. This initiative aims to accelerate the development of new therapies and enhance patient outcomes through AI-driven analysis of large-scale biological data, while addressing data security and privacy concerns through its federated approach.
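The article does not describe Apheris's architecture in detail, but the general shape of federated training is straightforward: each member computes model updates on data that never leaves its own infrastructure, and only those updates are pooled. Below is a minimal, hypothetical sketch in Python; the member datasets, the least-squares stand-in model and all names are ours, not the consortium's.

```python
# Minimal federated-averaging sketch. Hypothetical: Apheris's actual
# platform and the OpenFold3 training setup are not described in the
# source article; a least-squares model stands in for the real one.
import numpy as np

def local_update(weights, local_data, lr=0.01):
    """One member's gradient step, computed on data that never
    leaves its own infrastructure."""
    X, y = local_data
    grad = X.T @ (X @ weights - y) / len(y)  # least-squares gradient
    return weights - lr * grad

def federated_round(global_weights, members):
    """Each member trains locally; only weight updates, never raw
    structures, are shared and averaged by the coordinator."""
    updates = [local_update(global_weights.copy(), data) for data in members]
    return np.mean(updates, axis=0)

# Toy run: three members, each holding a private dataset.
rng = np.random.default_rng(0)
members = [(rng.normal(size=(50, 8)), rng.normal(size=50)) for _ in range(3)]
weights = np.zeros(8)
for _ in range(100):
    weights = federated_round(weights, members)
```

In a production setting the pooled updates would typically also pass through secure aggregation or differential privacy, so that no member's proprietary structures can be inferred from its individual contribution.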

Source: PM Live

AI-generated actor sparks concern in Hollywood

Tilly Norwood's impending signing with a talent agency has sparked concern among actors and filmmakers—including Emily Blunt, Whoopi Goldberg, and Natasha Lyonne—who fear AI could replace human performers. SAG-AFTRA criticized Norwood as a product of AI trained on the work of professional actors, raising concerns about intellectual property, privacy and the future of creative professions. Other digital avatars have performed at major events like Coachella and collaborated with brands, illustrating how AI is increasingly used to create music and entertainment experiences.

Source: LA Times+

Universal and Warner Music close on AI licensing deals

The licensing deals look to create a structured framework for compensating music rights holders when their catalogs are used to train AI models or generate new music. This marks a shift from previous adversarial approaches, where AI developers frequently used copyrighted music without permission, resulting in lawsuits. The collaborative model aims to safeguard human artistry, generate new revenue streams, and establish a global standard for how AI companies compensate creators. The agreements are expected to resolve ongoing legal disputes and establish the music industry's first major framework for monetizing AI-generated content, addressing key risks such as IP infringement and data security in the entertainment sector.

Source: WRAL News

Is AI really ready for healthcare?

A recent study by Microsoft Research found that leading AI systems are achieving high scores on medical exams by exploiting test design loopholes and using test-taking strategies, rather than by demonstrating genuine medical understanding. This raises significant concerns for the healthcare and life sciences sector, as these AI models may not possess the real-world medical competence their scores suggest. The findings also highlight risks for patient care, as reliance on such AI tools could lead to misinformed diagnoses or treatment recommendations. The study underscores the need for more robust evaluation methods before AI is deployed in clinical settings, along with closer attention to trust, safety and regulatory oversight.
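One common way to probe for the kind of shortcut the study describes is an ablation check: re-score the model on questions whose clinical vignette has been stripped out. If accuracy barely drops, the score reflects test-taking artifacts such as option wording rather than medical reasoning. The sketch below is a hypothetical illustration of that idea, not the study's own protocol; the questions and the stand-in model are invented.

```python
# Hypothetical shortcut probe (illustrative only, not the Microsoft
# Research protocol): compare benchmark accuracy with and without the
# clinical vignette that should be required to answer correctly.

def accuracy(model, questions):
    """Fraction of questions the model answers correctly."""
    hits = sum(model(q["stem"], q["options"]) == q["answer"] for q in questions)
    return hits / len(questions)

def shortcut_gap(model, questions):
    """A small gap suggests the model is keying on option wording,
    not on the medical content of the vignette."""
    full = accuracy(model, questions)
    ablated = accuracy(model, [{**q, "stem": ""} for q in questions])
    return full, ablated, full - ablated

# Toy stand-in "model": always picks the longest option, a pure
# test-taking heuristic that requires no medical knowledge at all.
def longest_option_model(stem, options):
    return max(options, key=len)

questions = [
    {"stem": "A 54-year-old presents with crushing chest pain...",
     "options": ["Migraine", "Acute myocardial infarction"],
     "answer": "Acute myocardial infarction"},
    {"stem": "A 6-year-old presents with a barking cough...",
     "options": ["Asthma", "Croup caused by parainfluenza virus"],
     "answer": "Croup caused by parainfluenza virus"},
]
print(shortcut_gap(longest_option_model, questions))  # (1.0, 1.0, 0.0)
```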

Source: Forbes

MrBeast raises alarm over AI's impact on content creators

Top YouTube creator MrBeast has expressed concern about the impact of AI-generated videos on the livelihoods of millions of content creators. His comments come as OpenAI launches Sora 2—an advanced audio and video generator—and a mobile app that enables users to create and share AI-generated videos in a TikTok-style feed. YouTube itself is integrating AI tools, such as Veo for animating photos and applying styles, and AI-powered features for creating clips and highlights. These developments highlight both the opportunities and risks of AI in the entertainment and digital media space, including potential threats to creator income, intellectual property and data privacy.

Source: TechCrunch

Hollywood talent agency raises concerns over OpenAI's Sora

Creative Artists Agency warns that Sora exposes artists to significant risks—particularly regarding compensation and credit for creative work—as the app allows users to create and share short AI videos spun from copyrighted content on social media-like streams. OpenAI plans to introduce controls for content rights owners to manage how their characters are used in Sora and intends to share revenue with those who permit such use.

Source: Reuters

AI's growing role in social media leads to questions of influence, integrity and risk

A 2025 report highlights that 96% of social media professionals now use AI tools, with 72.5% relying on them daily for content generation and management. The AI-in-social-media market is projected to triple by 2030, further embedding algorithmic influence in online discourse. However, new research from Stanford University warns that optimizing large language models for competitive success on social platforms—such as maximizing engagement or ad clicks—can lead these models to prioritize persuasion over honesty, resulting in misinformation and potentially inflammatory content. This structural risk raises concerns about the integrity of AI-driven audience analysis and recommendation systems in the sports, entertainment and digital media sectors, as well as the associated risks of IP infringement, privacy and data security.

Source: Decrypt

Poison Pill: Disrupting unlicensed AI music training to empower independent artists

Poison Pill, a U.K.-based startup, launched technology to help music companies and independent artists combat unlicensed AI training by "poisoning" their own music. The service is designed to disrupt AI companies that scrape music without permission to create AI-generated playlists and music services, which are increasingly replacing traditional revenue streams, such as sync licensing. Poison Pill's goal is to protect 20% of independent music and encourage AI firms to negotiate fair licensing terms for training data.
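Poison Pill has not published its method, but the underlying idea of data poisoning can be illustrated generically: add a low-amplitude perturbation that is hard to hear yet corrupts the features a scraper's training pipeline extracts from the track. The Python sketch below uses shaped random noise purely for illustration and is not Poison Pill's actual technique; a real system would optimize the perturbation against a target feature extractor.

```python
# Toy audio "poisoning" sketch, illustrative only. Real poisoning
# systems optimize the perturbation against a specific feature
# extractor; random high-frequency noise merely shows the shape.
import numpy as np

def poison(audio, epsilon=0.002, seed=7):
    """Return a perturbed copy of `audio` (samples in [-1, 1])."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=audio.shape)
    noise = np.diff(noise, prepend=0.0)      # crude high-pass filter,
                                             # pushing energy where it
                                             # is harder to hear
    noise /= np.max(np.abs(noise)) + 1e-12   # normalize to unit peak
    return np.clip(audio + epsilon * noise, -1.0, 1.0)

# Toy usage: one second of a 440 Hz tone at 44.1 kHz.
t = np.linspace(0.0, 1.0, 44_100, endpoint=False)
clean = 0.5 * np.sin(2 * np.pi * 440.0 * t)
protected = poison(clean)
```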

Source: Musically

AI Litigation & Regulation

LITIGATION

AI healthcare firm accuses partner of withholding critical data

BelleTorus filed a lawsuit against Torus Actions and its CEO, Nguyen Tien Dung, alleging breach of contract and deceptive practices. The dispute centers on a partnership to develop AI tools for skin health, where Torus Actions was to provide IP and scientific support. BelleTorus claims Torus Actions obstructed a planned data migration by refusing access to servers and credentials, effectively locking BelleTorus out of its own data. The lawsuit further points to a conflict of interest, as Dung held leadership roles in both companies during the dispute.

Source: Law 360 (sub. req.)

OpenAI seeks dismissal of xAI lawsuit over trade secrets

OpenAI filed a motion to dismiss a lawsuit brought by Elon Musk's AI startup, xAI, accusing OpenAI of misappropriating trade secrets. The legal dispute centers on claims that former xAI employees took confidential information with them when they joined OpenAI. OpenAI argues that xAI's allegations are vague and unsupported, asserting that the lawsuit is an attempt to stifle competition rather than protect legitimate intellectual property. OpenAI maintains that it did not use any proprietary xAI data and that the lawsuit lacks the specificity required to proceed.

Source: Reuters

Musk pushes for Murati deposition in OpenAI lawsuit

Elon Musk accuses OpenAI of obstructing efforts to depose Mira Murati, a key figure in its partnership with Microsoft and Sam Altman's 2023 ouster. Despite repeated attempts to serve her with a subpoena, Musk claims security personnel at both her workplace and residence blocked access. Musk argues Murati holds crucial knowledge relevant to his lawsuit, which alleges OpenAI abandoned its nonprofit mission and misled investors, including Musk, who contributed $45 million. OpenAI counters that Musk delayed the deposition and missed the discovery deadline. The court disagreed, allowing the motion to proceed. OpenAI and Microsoft deny any wrongdoing, asserting Microsoft joined years after Musk's alleged agreement.

Source: Law 360 (sub. req.)

Lawsuit challenges Apple's AI training practices

Two neuroscientists have filed a class action lawsuit against Apple in the Northern District of California, alleging unauthorized use of their copyrighted books to train its AI model, Apple Intelligence. The suit claims Apple sourced these works from Books3, a dataset described as a "shadow library" containing pirated content. The plaintiffs argue Apple exceeded its licensing rights and used high-quality copyrighted material to enhance its AI, thereby undermining the market for their books. The neuroscientists also criticize Apple's vague use of the term "publicly available" to justify its data collection practices, including scraping content via Applebot for nearly a decade.

Source: Law 360 (sub. req.)


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
