ARTICLE
7 May 2025

AI Twins And Avatars: Legal Risks For Companies Using Synthetic Voice And Likeness Technology (Video)

Traverse Legal

Contributor

In 2004, Traverse Legal was a start-up. We created a brand-new business model for the practice of law that is now used by some of the biggest law firms in the country. We built technology into our processes and client relations in ways that remain innovative and unique. We have represented clients of all types in connection with technology, internet law, intellectual property, and business matters. We can help you.

As a niche law firm with controlled overhead and specialized practice areas, we can provide more cost-effective, knowledgeable, and strategic representation than the large law firms we go up against every day. Our clients are based in over 25 different countries around the globe. There is a reason why some of the largest and most successful companies in the world select Traverse Legal to handle matters within our areas of experience.

The rapid advancement of AI-generated avatars, commonly referred to as "AI twins," is transforming how businesses delegate tasks, interact with customers, and scale their brand. A founder can now deploy a digital version of themselves to attend meetings, pitch investors, or onboard users without ever hitting "join" on a Zoom call. Enterprise teams are experimenting with synthetic voice assistants that sound eerily like trusted public figures. And entire product demos can now be delivered by virtual replicas of real employees.

But behind the convenience of outsourcing human presence lies a growing thicket of legal exposure. As more startups and companies deploy AI likenesses, whether based on internal team members, hired talent, or public figures, the question isn't if there are risks. It's how big, how complex, and how avoidable they are.

This article unpacks the core legal issues AI companies must address when using synthetic representations of real people. From copyright to publicity rights, the risks are real, and the time to address them is now.

Who Owns the Work Created by AI Twins?

This is one of the most fundamental (and frequently misunderstood) issues facing AI companies today. Under U.S. copyright law, creative works must be authored by a human to receive protection. If an AI twin generates a training video, records a podcast episode, or even delivers a sales pitch, can that output be copyrighted? And if so, by whom?

If a human is behind the script but the avatar delivers it, that's one layer. If the AI twin improvises, adapts, or performs content in a way that mimics human creativity, courts may view that differently. Current U.S. Copyright Office guidance (and case law such as Thaler v. Perlmutter) confirms that content generated without human authorship is not copyrightable. There must be a human element of authorship, whether in scripting, direction, or creative input. Either way, the company using the avatar must be clear on ownership, not just of the likeness, but of what the avatar produces.

Now consider a common scenario: your company licenses a digital likeness from a creator or influencer, then uses that AI twin to generate customer-facing content. While you may own the deliverables, you might not own the underlying likeness, or the legal rights to continue using the outputs if that license ends. Things get even murkier when models or voice data are licensed from third-party providers without clear downstream usage terms. That's why relying on default copyright rules isn't enough. Companies must proactively set the terms.

Key takeaway: Contracts are not just a formality. They're your primary defense. Agreements with both developers (who build or train the AI twin) and talent (whose likeness or voice is being used) should explicitly assign copyright rights, cover derivative works, and define exactly who owns the outputs—now and in the future.

Publicity Rights & Likeness Use

One of the most immediate legal risks of using AI avatars is violating a person's right of publicity: the legal right to control how one's name, image, voice, or likeness is used for commercial purposes.

Unlike copyright, the right of publicity is governed by state law. That means protections vary widely, but in many states, this right extends beyond just celebrities. Employees, creators, influencers, and even private individuals may have a claim if their likeness is used in a product without their permission.

This matters deeply in the context of AI twins. If you create a digital avatar that looks or sounds like a real person, even one generated or altered using AI, you may be held liable if that person didn't give express permission. Courts have shown that even "transformative" outputs, like stylized or AI-modified representations, may still infringe on publicity rights if they evoke a recognizable identity.

Take, for example, a startup that trains its AI assistant to sound like a well-known actor or athlete. Even if the output is novel or altered for a comedic app, that's not a free pass. Parody, innovation, or artistic intent don't override commercial misappropriation. Legal battles around deepfakes, influencer cloning, and unauthorized voice synthesis are already testing these boundaries.

What should companies do? Get ahead of the risk before a model is trained or an avatar is deployed.

A well-drafted publicity release is critical. It should go beyond basic image use and include clear rights for:

  • AI training and dataset inclusion
  • Derivative works and modified outputs
  • Future and posthumous use, especially if the avatar may be used after a contract ends or a creator passes away

Also, don't treat publicity rights as one-size-fits-all. States like California, Tennessee, and New York have some of the strongest protections, including post-mortem rights that can last for decades. If you're building a product used nationwide, or training models on a wide range of individuals, state-by-state compliance isn't optional.

The bottom line: Consent isn't a checkbox. It's a shield. Companies that rely on AI likenesses must treat publicity rights as core infrastructure, and not as an afterthought.

Voice Imitation and Synthetic Speech Risks

Synthetic voice is one of the most compelling and legally risky areas in the AI likeness space. If your AI-generated voice sounds too much like a real person, you may be entering dangerous territory.

When an AI mimics a specific voice pattern or tone, especially that of a recognizable figure, it can trigger claims of "voice theft," misappropriation, or even false endorsement. And unlike a generic avatar, voice is deeply tied to identity. Courts have recognized that a voice can be as distinctive and legally protected as a face.

Even if you don't name the person being imitated, the legal risk sharpens when the voice is clearly evocative of a public figure, or when users are led to believe a real person was involved.

This isn't just a theoretical concern. The Federal Trade Commission (FTC) has already signaled that misleading use of AI-generated content, including synthetic voices, could fall under deceptive marketing practices. If an AI voice implies that a celebrity or influencer endorses your product when they don't, the FTC may step in, and the penalties can be severe.

So how can companies stay compliant? Two practical steps make a major difference.

First, always disclose when a voice is synthetic. If users believe they're hearing a real person, or worse, a real endorsement, you're walking a legal tightrope. Clear, accessible disclosures (especially in ads or customer-facing experiences) help reduce risk.

Second, keep a documented audit trail. You should be able to show how the voice model was created, what data was used, and that any real human contributors gave informed, written consent. This is especially important if you're using contractors, third-party voice libraries, or open-source models trained on scraped content.
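
To make the idea concrete, here is a minimal sketch, in Python, of what such an audit trail record might look like. Everything in it is hypothetical: the ConsentRecord and VoiceModelProvenance structures, the field names, and the example file paths are illustrative only, not a reference to any specific tool or legal standard.

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical record types for documenting how a synthetic voice model was built.
    # Adapt the fields to your own compliance and consent process.

    @dataclass
    class ConsentRecord:
        contributor_name: str
        consent_document: str          # path or ID of the signed, written consent form
        consent_date: date
        permitted_uses: list[str]      # e.g. ["training", "customer-facing audio"]

    @dataclass
    class VoiceModelProvenance:
        model_id: str
        created_on: date
        data_sources: list[str]        # where each voice sample came from
        consents: list[ConsentRecord]  # one entry per real human contributor
        synthetic_disclosure: str      # how users are told the voice is synthetic

    # Example entry, stored alongside the model so it can be produced on request.
    provenance = VoiceModelProvenance(
        model_id="assistant-voice-v2",
        created_on=date(2025, 5, 1),
        data_sources=["licensed-voice-library/session-014"],
        consents=[
            ConsentRecord(
                contributor_name="Voice actor (contractor)",
                consent_document="contracts/2025-03-voice-release.pdf",
                consent_date=date(2025, 3, 12),
                permitted_uses=["training", "customer-facing audio"],
            )
        ],
        synthetic_disclosure="'This voice is AI-generated' label shown in the app",
    )

However you implement it, the point is the same: if a regulator, licensor, or court asks how the voice was built, you can answer with documents rather than recollections.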

Without those safeguards, even unintentional voice imitation can escalate into a legal challenge, one that's harder to defend after the fact.

Licensing Agreements for Likeness, Voice, and Personality

When you're building AI avatars or using real people's voices and likenesses, contracts aren't just paperwork; they're your legal backbone. A strong licensing agreement doesn't just protect you from future disputes; it gives your product the green light to scale responsibly.

At minimum, your contracts should spell out the duration, territory, and scope of use. That means getting specific: Are you using the likeness for training only, or will it also power customer-facing experiences? Is the usage global or limited to a certain market? Will content be used in ads, internal tools, or both?

Then there's the matter of exclusivity. Are you the only one who can use the person's AI likeness, or are they free to license it to others? This becomes a business differentiator when talent likeness becomes a brand asset.

Equally important is what happens after the agreement ends. A well-structured contract should include revocability clauses and post-use obligations, like requiring the company to take down AI-generated content or deactivate avatars built using the licensed data. Without this, companies risk dragging legal baggage into future product cycles, or worse, into court.

One emerging best practice, especially for companies operating internationally, is including a moral rights clause. U.S. law doesn't formally recognize "moral rights" for likeness or voice, at least not in the way other jurisdictions such as the EU do. But that doesn't mean they should be ignored. In practice, moral rights clauses serve a different function: they build trust with talent, signal respect for creators, and anticipate reputational concerns that could spiral into disputes later. For example, an artist may object to their AI twin being used in ways that conflict with their values or reputation. Even if unenforceable under U.S. copyright law, these clauses send a message that you're thinking ahead, not just legally but also ethically and relationally. And for global companies, or those working with international creators, moral rights protections aren't optional. They're the price of admission.

Including such clauses signals respect for the human behind the likeness, and increasingly, it's what talent, influencers, and creators are asking for. It's also a sign to regulators and the public that your company is thinking ahead.

Bottom line: Licensing agreements are no longer one-page releases. In the AI era, they need to be structured for nuance, consent, and change.

What Talent Should Know Before Lending Their Voice or Image

For creators, influencers, voice actors, and even everyday professionals, the idea of having your own AI twin can be exciting, and potentially lucrative. But the legal and reputational risks are real. Once your likeness or voice is digitized and integrated into a product, it can take on a life of its own.

Before signing away rights to your identity, it's worth slowing down and asking a few critical questions.

Can the avatar say things I don't agree with?

Unless your contract sets limits, the answer could be yes. Without guardrails, your AI twin might one day promote products, causes, or viewpoints that conflict with your values—or worse, harm your reputation.

Can the company sell or license my voice/image to others?

If the agreement allows sub-licensing or model reuse, your voice or likeness could end up in products you've never heard of, let alone approved. Control over downstream use is one of the most overlooked clauses in AI likeness deals.

Will I be paid ongoing royalties, or just a one-time fee?

Make sure compensation reflects the scope and lifespan of the license. A flat fee may not be fair if your avatar is being used at scale, in multiple markets, or generating significant revenue for the company.

To protect yourself, negotiate specific use limitations, for example banning political content, adult content, or any context that feels out of alignment with your personal or professional brand. You can also push for periodic reviews or even opt-out clauses that allow you to revoke permission under certain conditions.

Lastly, always ask for indemnification. If your likeness is used in a way that misleads users, spreads false endorsements, or damages your reputation, you shouldn't be left holding the bag. A solid indemnity clause ensures the company using your avatar is also responsible for keeping it in line.

For talent, these conversations may feel new, but they're becoming essential. As AI twins become more common, so too will the need for smart, protective contracts that honor both the human and the technology.

Regulatory Landscape and What's Coming

The legal framework around AI avatars and synthetic voice is still forming, but it's evolving fast. On the state level, several laws already hint at where regulation is headed. California's AB 730 prohibits the use of deepfakes or synthetic media in political ads within 60 days of an election. This law marks one of the first state-level responses to AI-generated likeness abuse.

Meanwhile, Illinois' Biometric Information Privacy Act (BIPA) remains one of the strongest in the country. It requires consent before collecting or using biometric identifiers like voiceprints or facial scans. BIPA has already been used in high-profile cases against major tech companies, and AI startups should take note.

At the federal level, momentum is building around both right-of-publicity reform and FTC scrutiny of deceptive synthetic media. The FTC has warned that AI-generated endorsements, voices, or representations, if not properly disclosed, could qualify as unfair or deceptive under existing advertising laws, and it can pursue enforcement under Section 5 of the FTC Act, including civil penalties for noncompliance.

What does this mean for companies building or deploying AI twins? The window for operating in a legal gray zone is closing. Compliance, transparency, and strong contracts aren't just best practices anymore. They're fast becoming baseline expectations.

[Video: ABC News report on a real-world case highlighting the growing legal scrutiny around synthetic voice use in public spaces.]

Data Use & Privacy

Don't overlook the training data. It's one of the most common blind spots we see with AI companies, especially fast-moving teams racing to ship features. Pulling voice samples, facial scans, or likeness data from scraped sources or user uploads might already be crossing legal lines under laws like the CCPA, GDPR, or even fundamental contract law. Just because data is publicly available doesn't mean it's fair game. And if your training set includes minors—even accidentally—you could trigger COPPA restrictions, which come with severe penalties.

This isn't just a checkbox issue. Training data governance is quickly becoming a cornerstone of AI compliance. You need to know what went into the model, who owns it, and whether consent was ever part of the picture. The FTC, state AGs, and European regulators are all watching this space. Startups that get ahead of this—by documenting sources, securing rights, and building with permission—will have a real advantage when scrutiny hits.
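
As an illustration of what "building with permission" can look like in practice, here is a short, hypothetical Python sketch of a pre-ingestion gate that refuses training samples whose source, legal basis, or age status is undocumented. The field names and labels are assumptions for the example, not a compliance standard.

    # Illustrative only: a simple gate that refuses to ingest training samples
    # whose provenance, rights, or consent status is not documented.

    ALLOWED_BASES = {"licensed", "first-party-consented"}  # hypothetical labels

    def may_ingest(sample: dict) -> bool:
        """Return True only if the sample's provenance is documented and permitted."""
        has_source = bool(sample.get("source"))
        legal_basis = sample.get("legal_basis")           # e.g. "licensed", "scraped"
        is_minor = sample.get("subject_is_minor", True)   # treat unknown age as a minor
        return has_source and legal_basis in ALLOWED_BASES and not is_minor

    samples = [
        {"source": "vendor-a/dataset-7", "legal_basis": "licensed", "subject_is_minor": False},
        {"source": "forum-scrape-2024", "legal_basis": "scraped", "subject_is_minor": False},
    ]

    training_set = [s for s in samples if may_ingest(s)]
    print(f"Ingested {len(training_set)} of {len(samples)} samples")  # Ingested 1 of 2 samples

The real version of this discipline lives in contracts and data-governance policy rather than a filter function, but the habit is the same: record a documented legal basis for every sample before it touches the model.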

Make Sure Your AI Twin Has a Legal Backbone

AI twins, avatars, and synthetic voices are no longer experimental. They're powering real products, real interactions, and real revenue. But with that scale comes a real need for legal structure. The laws around digital likeness are still catching up, but the risks are already here.

Whether you're training an AI assistant, licensing voice data, or scaling your brand through synthetic media, Traverse Legal helps you stay ahead. We work with AI founders, product teams, and legal counsel to lock down IP, structure smart contracts, and steer clear of publicity rights traps.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
