In an era when the boundary between man and machine continues to blur, fans of entertainment media find themselves at the frontier of a fascinating phenomenon: the production of new media featuring their favorite artists, courtesy of artificial intelligence (AI).

The Beatles, perhaps the most beloved rock 'n' roll band of all time, released their final studio album, Let It Be, in 1970. Beatlemaniacs have listened to this and other Beatles albums for decades, know most of the lyrics by heart, and have spent countless hours pondering the magic that sparked when John, Paul, George, and Ringo came together. Yet these fans have long assumed there would be no new Beatles music.

Until now.

In a recent interview, Paul McCartney revealed that he used AI to help complete a new Beatles song set to be released later this year. The new tune, described by McCartney as "the final Beatles record," was made using a demo that John Lennon recorded shortly before his death in 1980 and that some speculate to be the composition "Now and Then."

Using AI to generate songs is not the first advancement that enables fans to experience their favorite musical acts in new ways, but it has the potential to supercharge the future of music production. Until recently, the primary way for devotees to hear new material by certain artists was the release of a special collection of songs that had been accumulating dust in a (digital) warehouse.

Recent innovations have given fans other ways to reminisce, such as holograms of deceased artists incorporated into live concerts. Additionally, as legacy acts of certain genres fade away, patchwork lineups and tribute bands have emerged to fill the void. Close your eyes at a Dead & Company or Joe Russo's Almost Dead show and you may imagine yourself in a theater decades ago, swaying with the crowd as Jerry Garcia's guitar fills the air.

Now, advancements in AI technology have unlocked a new realm of possibilities, one in which the essence of an artist's music can be distilled and used as a blueprint to generate brand-new tracks that are neither impersonations nor covers but an AI's interpretation of the band's unique style. By incorporating familiar chord progressions, lyrical styles, and rhythmic patterns, this new music captures the essence of the artist's catalog and opens the door to a fan experience in which fresh material from a favorite artist is always within reach.

But what if a computer, by itself, is not capable of churning out songs that even die-hard fans want to listen to, let alone hits? In his 2017 book "Hit Makers," Derek Thompson examines why certain media become popular and posits that a song becomes a hit as a result of careful planning, market research, and an understanding of public sentiment.

Enter AI, again, this time to streamline those processes. Earlier this year, researchers from Claremont Graduate University used neuroforecasting, which applies machine learning techniques to brain response data, to show that the neural activity of a small sample of study participants can predict, with 97% accuracy, whether millions of other listeners will like a given song.
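For readers curious what neuroforecasting looks like in practice, the following is a minimal, purely illustrative sketch in Python (using scikit-learn): it fits a classifier to a small sample's neural responses and estimates out-of-sample accuracy. The feature values and hit labels below are synthetic placeholders, not the study's data, and the researchers' actual methodology may differ considerably.

```python
# Illustrative sketch of the *shape* of neuroforecasting: fit a classifier
# on a small sample's neural responses per song, then estimate how well it
# predicts population-level "hit" status. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_songs, n_neural_features = 24, 8                 # hypothetical study size
X = rng.normal(size=(n_songs, n_neural_features))  # averaged neural responses per song
y = rng.integers(0, 2, size=n_songs)               # 1 = song became a market hit

model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=4)        # out-of-sample accuracy estimate
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```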

The downstream uses of this innovation appear to be far-reaching. For one, streaming services could use this technology to sharpen the algorithms that recommend material to their subscribers. Perhaps further down the road, AI could generate songs based on a listener's neurophysiological responses, creating hyper-personalized streams tailored to individual preferences and emotional states.

Before an AI can start projecting its Grammy total, though, it will have to contend with the Recording Academy, which recently announced that a work containing no human authorship is not eligible in any category. A song that features elements of AI remains eligible, however, so long as a human creator is responsible for a "meaningful" contribution to the music and/or lyrics, a standard that closely resembles the AI-contribution question the USPTO is currently wrestling with.

Music is not the only form of media in which AI is increasingly employed. During the first 25 minutes of Indiana Jones and the Dial of Destiny, I was surprised to see octogenarian Harrison Ford appear to be in his 40s, a feat accomplished at least in part by AI. Certain AI techniques have already been used to synthesize the voices of deceased actors, and we will likely see an increasing number of movies and shows that incorporate lifelike portrayals of deceased actors or younger versions of living ones.

Of course, these uses of AI in creative industries raise complex IP questions that courts and lawmakers will have to grapple with, most notably who rightfully owns a work created using AI. Authors and other creatives have formally urged AI companies to stop using their work to train AI models without permission or compensation. Further, established individuals and the estates of deceased artists will likely see an increased need to protect any applicable right of publicity. Moreover, one point of contention in the Screen Actors Guild strike is studios' ability to use digital, AI-generated replicas of actors in future productions.

In the meantime, it is fairly easy to envision a day in the near future when you are fed a segment of morning news perfectly tailored to your interests, listen to a new AI-generated Taylor Swift album at lunch, and head to the theater to see the Mission: Impossible prequel about a young Ethan Hunt earning his IMF badge, portrayed by a twentysomething Tom Cruise.

July 19, 2023
