ARTICLE
14 April 2026

Artificially Generated And Modified Music: Legal And Ethical Considerations For Creators

G ELIAS

Contributor

We are a leading Nigerian business law firm founded in 1994 and now organized across 18 practice groups, covering 25 industry sectors. We are also a member of Multilaw, a leading global alliance of independent law firms in over 90 countries worldwide.

Introduction

Artificial intelligence (AI) is no longer a futuristic concept in the music industry, or in any industry for that matter. In at least the past two (2) years, we have seen AI used to produce songs, generate and modify vocals, and write lyrics. Entire albums are being generated in a matter of minutes using platforms like Suno and Udio. This tells us that the use of AI in the music industry is no longer experimental. AI songs are already available on Spotify, Apple Music, and other digital streaming platforms, generating revenue and entertaining fans. The question is no longer whether AI will affect music. It already has. The real question is how the law will respond and how creators can protect themselves. In this article, we explain what AI generated and AI-modified music means, the legal risks involved, the ethical considerations that arise, and what creators and businesses should be thinking about.

What Do We Mean by AI Music?

AI music can generally be grouped into three categories: (a) music created entirely by AI systems; (b) music created by humans with AI assistance; (c) existing music altered, remixed, or cloned using AI. The distinction is important because the law responds differently depending on the category. We deal with each of them below.

A. Fully AI-Generated Music

This refers to music produced with little or no human creative input. A good example is where an AI user simply types “Create a sad Afrobeats song in the style of Burna Boy” on Suno and the AI generates an entire song from scratch, mimicking Burna Boy’s style, including the lyrics, the melody, and the vocals. The user does nothing but enter a prompt. The human creative contribution is extremely low, if there is any creative contribution at all.

A real-life example is Breaking Rust, an AI-generated country music project. Its song “Walk My Walk” reached No. 1 on the Billboard Country Digital Song Sales chart in November 2025.1 The song was, in other words, generating real revenue for the people behind the project. But if a system creates a song almost entirely on its own, with only the aid of a prompt, we must naturally begin to ask questions about authorship and ownership.

Copyright law assumes that an author is human and that creativity involves human judgment. The law will not credit as an author anything that is “non-human”. Recall the United States case of Naruto v. Slater2, which made news in 2018, where the court held that photographs a monkey had captured with a camera could not be protected by copyright. Since AI is not human, it certainly cannot be the author/creator in any case.3

Who do we then credit as the author of fully AI-generated music? It is a very important question. Copyright cannot vest without an author.4 We cannot credit the AI user who only enters a prompt. We adopt the view of the United States Copyright Office that “prompts alone do not provide sufficient human control to make users of an AI system the authors of the output. Prompts essentially function as instructions that convey unprotectible ideas.”5 Simply entering prompts does not allow a person to determine the result, in any material respect.

The prompter has general expectations of what he thinks would be good. The result itself is a gamble.

Nor can we credit the AI developer. The developer’s contribution is not of a creative nature. We can equate it to providing the recording studio, without more. That is not enough to confer authorship rights.

If copyright law struggles to recognise any of them as authors because there is no meaningful human creativity, then it may mean that fully AI-generated music falls into the public domain. That means anyone may be free to copy, distribute, or adapt it without permission. For businesses investing in AI-generated content, this creates serious commercial risk. They may not actually own what they think they own.

B. Music Generated by Human-AI Collaboration

This is where people use AI tools to aid their creative process rather than simply allowing the AI to do all the work. This covers situations where an artist writes lyrics, but AI suggests melodies, or where a producer creates a beat, but AI refines the arrangement. A good example is the song “I Run” by HAVEN, where Suno was used to manipulate the vocals of the creator (Harrison Walker) to sound like a female vocalist, although he composed the song.6

Legally, this does not destroy authorship. Since the human creator remains in control and exercises meaningful creative decision-making, the output will still be protected by copyright. In our view, using AI as a tool is not fundamentally different from using Auto-Tune, drum machines, or digital audio workstations. If using Auto-Tune does not take away authorship, then neither should AI when it has only been used as a tool.

Ultimately, however, it depends on the extent of the AI/human contribution. If the AI makes most of the expressive decisions, the case for protection weakens. If the human curates, edits, and directs the outcome, the case for protection is stronger. Of course, this is not something that can be determined with any mathematical accuracy. It would have to be determined on a case-by-case basis.

We should note that even where humans retain authorship, using AI as a support tool still presents other legal risks. AI systems are trained on music created by real artists, and if the output begins to mimic a recognisable voice or performance style, issues around privacy, personality rights, and publicity rights may come into play. For instance, the voice in “I Run” by HAVEN closely resembled that of Jorja Smith, leading to controversy and industry backlash after her record label alleged that the track used AI to imitate or “clone” her voice and mislead listeners into believing she performed it7.

C. Existing Music Modified with AI

This is where the legal risk increases significantly. This could be AI remixes of popular songs or AI-generated cover versions using cloned voices. A perfect example is “Papaoutai Afro Soul,” based on the original song “Papaoutai” by Stromae8. In these cases, there is an existing copyrighted work, which is transformed or remixed by AI.

AI modification can trigger multiple layers of infringement, including copyright in the musical composition and/or the sound recording9, performer’s rights10, and moral rights (for example, distortion or derogatory treatment of the work)11. If the output is recognisably derived from an existing work, permission is required. Failing to obtain licences can expose creators, streaming platforms, and distributors to infringement claims. Stromae has been silent since the AI version of “Papaoutai” was released. Away from the public eye, he may well have granted permission for the remix or reached some favourable arrangement for the payment of royalties. If not, an infringement claim is not off the table.

The Training Data Problem

AI systems learn by analysing massive amounts of existing music scraped from the internet, but those songs are typically not used with the consent of the creators. Many artistes never agreed to have their work used to train the machines that now compete with them. This is one of the points where the legal and ethical tensions intersect. Whether training AI with copyrighted material is legal is itself still unsettled globally. Much of the debate focuses on whether training constitutes copyright infringement or whether it qualifies as fair use or fair dealing. The main point in the pro-fair use arguments is that the use is transformative. Supporters argue that training AI uses copyrighted works not to reproduce or distribute them, but to analyse patterns, styles, and structures in order to generate new outputs, and this makes the use fair.12 We think the conclusion is questionable.

There are four (4) factors a court will consider in determining whether use is fair: (i) purpose and character of its usage; (ii) nature of the work; (iii) amount and substantiality of the portion used in relation to the work as a whole; and (iv) effect of the use upon the potential market or value of the work13. The transformative use argument hinges only on the first factor (the purpose and character of the use) and ignores the rest. We agree with Justice Vince Chhabria, who, in discussing the issues in Kadrey v. Meta Platforms, Inc.14, held that “[t]here is certainly no rule that when your use of a protected work is “transformative,” this automatically inoculates you from a claim of copyright infringement.” It is not exactly fair if artistes’ entire creative catalogues are used to train systems that are now competing with and even replacing them in the market.

The Ethical Questions

Beyond the legal questions, the rise of AI in music also raises important ethical concerns for creators and the industry. These questions go to the heart of fairness, creative integrity, and respect for the identity and labour of human creators.

A. Creative Integrity

AI raises important questions about creative integrity. Does it dilute human creativity? Or does it expand it? And where does originality now sit? Platforms like YouTube allow AI-assisted content to be monetised, but they increasingly require transparency about synthetic media and impose rules around originality15. Legal or not, creators must think about whether using AI aligns with their artistic identity and brand.

B. Economic Impact

AI changes the economics of music. It makes production cheaper, increases supply, and makes for easier saturation of the market. One major consequence is reduced licensing opportunities. If businesses can generate “good enough” music instantly, they may no longer need to license existing songs or commission composers. That reduces income streams for ‘original’ creators. When AI-generated music becomes abundant, ‘original’ creators may become easier to ignore. This economic pressure is real, especially for independent artistes and composers, and raises concerns about whether we truly want to endorse AI to the fullest extent.

C. Identity and Personality Issues

AI can replicate voices, accents, and performance styles. This goes beyond copyright and touches identity, reputation, and personal dignity. Supposing imitation does not infringe copyright, is it ethical to release a song in someone else’s voice without their consent? These are questions that we still grapple with.

Globally, regulators are issuing guidance, debating AI-specific legislation, and reacting to court decisions. But the law is often reactive, and the technology always seems to move faster.

Practical Notes for Creators and Businesses

Waiting for the law to catch up is risky. Contracts and terms and conditions should determine ownership and usage rights before the laws and the courts do. If, for instance, an independent artiste or a label releasing music expressly states in their terms and conditions that their songs cannot be used to train AI, it becomes not just a matter of copyright infringement, but also breach of contract. Contractual protections are very important.

Furthermore, when enforcing rights, e.g., where an artiste’s work is used to train AI, suspicion alone is not enough. To obtain relief for infringement, you must first show that your work was used and that the use was infringing. It is not a matter of sentiment: the evidence matters, and it must be produced. This appears to be the major challenge for many plaintiffs in AI-related litigation16.

To be on the right side of both ethics and the law, creators should document their creative process, obtain consent for voice and likeness usage, avoid imitating identifiable artistes, and address AI use explicitly in recording, publishing, and production contracts. Importantly, creators should continue developing authentic creative work. AI may assist, but originality remains a valuable asset.

Conclusion

Artificially generated and modified music presents both opportunity and risk. It challenges traditional ideas of authorship, ownership, fairness, and identity. For clients operating in the music space — whether as creators, investors, or distributors — the key is awareness and proactive planning. AI is not going away, and neither are the legal and ethical responsibilities that come with creation. Whether we like it or not, the future of music will involve both humans and machines. The task ahead is to ensure that innovation does not come at the cost of fairness, integrity, and respect for originality.

Footnotes

1. Madeleine Nolan, ‘“Art Should Be Human”: Nashville Pushes Back After AI Song Goes No. 1’ (CBS12 News, 14 November 2025) (https://cbs12.com/news/nation-world/art-should-be-human-nashville-pushes-back-after-ai-song-goes-no-1) accessed 16 February 2026

2. No. 16-15469 (9th Cir. 2018)

3. US Copyright Office, Copyright and Artificial Intelligence, Part 2: Copyrightability (US Copyright Office 2025) (https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf) accessed 18 February 2026

4. Copyright Act, 2022, s. 28(1)

5. US Copyright Office, Copyright and Artificial Intelligence, Part 2: Copyrightability (US Copyright Office 2025), p. 18 (https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf) accessed 18 February 2026

6. Ethan Beck, ‘The music industry is getting used to AI. One viral track went too far.’ (Yahoo Entertainment, 27 November 2025) (https://www.yahoo.com/entertainment/music/articles/music-industry-getting-used-ai-180420013.html) accessed 19 February 2026

7. Laura Snapes, ‘Jorja Smith’s Label Requests Share of Royalties from “AI-Cloned” TikTok Viral Song’ (The Guardian, 1 December 2025) (https://www.theguardian.com/p/x3zaxx) accessed 16 March 2026; Liberty Dunworth, ‘Jorja Smith’s Record Label Hits Out at AI Track That “Cloned” Her Voice’ (NME, 1 December 2025) (https://www.nme.com/news/music/jorja-smiths-record-label-hit-out-at-ai-track-that-cloned-her-voice-its-bigger-than-one-artist-or-one-song-3914613/) accessed 16 March 2026

8. Cate Cleo Alexander and Lauren Knight, ‘Could You Tell If Your Favourite Song Was Made with AI? The Viral “Papaoutai” Cover Controversy Suggests Not’ (The Conversation, 10 March 2026) (https://theconversation.com/could-you-tell-if-your-favourite-song-was-made-with-ai-the-viral-papaoutai-cover-controversy-suggests-not-274607) accessed 16 March 2026

9. Copyright Act, 2022, s. 9

10. Ibid, s.63

11. Ibid, s. 14(1)

12. See, for instance, the decision in Bartz v. Anthropic PBC, No. 24-cv-5417 (N.D. Cal. June 23, 2025), Dkt. No. 231

13. Copyright Act, 2022, proviso to s. 20(d)

14. 2025 WL 240847 (N.D. Cal. 2025) at page 3

15. TubeChef, ‘Is AI YouTube Content Allowed? (Copyright, Monetization & Policies 2025)’ (TubeChef) (https://tubechef.ai/blog/is-ai-youtube-content-allowed) accessed 16 March 2026; ‘YouTube Clarifies Monetisation Guidelines: No New Rules, Just Clearer Language’ (Adgully) (https://www.adgully.com/post/3899/youtube-clarifies-monetisation-guidelines-no-new-rules-just-clearer-language) accessed 16 March 2026

16. See, for instance, Kadrey v. Meta Platforms, Inc. (supra n. 14)

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

