OpenAI, revered for its cutting-edge technology and leadership in the field of AI, unveiled "Sora", a groundbreaking text-to-video generator, just a few weeks ago. This innovative tool can craft videos of up to 60 seconds in duration from written prompts, powered by generative AI. Sora's debut marks a monumental leap forward in AI-driven content creation, bringing us a step closer to the advent of Artificial General Intelligence (AGI). Merely a month and a half earlier, Google had unveiled Lumiere, a similar concept, setting the stage for OpenAI's foray into this exhilarating technological domain and igniting a fervent buzz within the industry.

However, alongside its remarkable array of user benefits, Sora also brings forth a set of intellectual property, privacy and data protection concerns. The resulting legal landscape is a rollercoaster of lawsuits, uncertainties and nuanced implications, poised to unfold as never before.

Data protection and privacy

Sora will enable the mass production of synthetic video content, including deepfakes. Even though Sora is not yet publicly available, with OpenAI reiterating that it wants to put safety guardrails in place before release, the tool raises difficult issues regarding rights in likeness and voice (a voice capability Sora will not have initially but will probably offer to users at some point in the future).

The technology also raises ethical questions, particularly around the creation of deepfake videos or misleading content. To address this, Sora's users will not be able to generate videos showing extreme violence, sexual content, hateful imagery or celebrity likenesses. There are also plans to combat misinformation by including metadata in Sora videos indicating that they were generated by AI; however, it is still unclear how this should be regulated. Here we expect the EU AI Act to come into play, imposing transparency obligations on providers of generative AI systems and watermarking or other labelling obligations on systems which, like Sora, can create synthetic content.
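To illustrate what such provenance labelling involves in practice, the sketch below builds a minimal "AI-generated" metadata record for a video file and verifies that the file has not been altered since the record was attached. The field names are hypothetical, loosely in the spirit of content-provenance standards such as C2PA; they are not Sora's actual metadata format.

```python
import hashlib

def make_provenance_manifest(video_bytes: bytes, generator: str) -> dict:
    """Build a minimal AI-provenance record for a generated video.

    Hypothetical schema for illustration only, loosely modelled on
    content-provenance standards such as C2PA; not any vendor's format.
    """
    return {
        "claim": "ai_generated",   # asserts the content's synthetic origin
        "generator": generator,    # the model said to have produced it
        # A hash binds the claim to these exact bytes, so edits are detectable.
        "content_sha256": hashlib.sha256(video_bytes).hexdigest(),
    }

def verify_manifest(video_bytes: bytes, manifest: dict) -> bool:
    """Return True if the manifest's hash still matches the video bytes,
    i.e. the file has not been modified since the claim was attached."""
    return manifest["content_sha256"] == hashlib.sha256(video_bytes).hexdigest()

# Usage: attach a claim to generated content, then detect tampering.
video = b"\x00fake-video-bytes"
manifest = make_provenance_manifest(video, generator="example-text-to-video-model")
print(verify_manifest(video, manifest))          # untouched file passes
print(verify_manifest(video + b"x", manifest))   # edited file fails
```

Note that a plain hash only proves the file is unchanged; real provenance schemes additionally sign the manifest so that the claim itself cannot be forged or stripped silently, which is precisely the enforcement gap regulators are grappling with.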

Mimicking a person's likeness or voice may also cause reputational harm or give rise to legal actions, such as claims for fraud or defamation. It is also questionable whether consent was obtained from the people whose name, image, likeness or other personal data were used for training, or whether any other legal basis under the GDPR exists for processing the personal data collected to train models like Sora. The use of text-to-video AI models like Sora and Lumiere will doubtless raise many privacy concerns and likely lead to a high number of privacy-related claims and court cases.

IP issues

The resolution of IP issues surrounding Sora remains ambiguous, primarily due to the absence of established legal precedents in this industry. However, it is imperative to recognise that Sora is not exempt from the typical IP-related challenges encountered by all AI technologies.

Akin to its AI counterparts, Sora is trained on expansive datasets, often scraped from the internet at large. This practice creates considerable legal uncertainty as to whether the content used in AI training, as well as the resulting outputs, infringes IP rights. It is conceivable that Sora has been trained on copyrighted materials owned by third parties. Given Sora's capability to generate lifelike video content and even simulate entire video games, there is a tangible risk of inadvertently producing materials that infringe earlier copyrights. OpenAI is already facing several proceedings over IP infringement, including lawsuits alleging copyright infringement and other intellectual property violations, such as those initiated by the New York Times, the Authors Guild, Raw Story Media, Intercept Media and others.

Of fundamental concern is the liability of generative AI developers, service providers, customers and end-users for IP infringement. Courts have yet to rule on how existing copyright rules apply to AI training processes. Is it appropriate to hold such entities accountable for IP violations and, if so, which entity in particular? Does the use of copyrighted materials for AI training fall under an exception? Is any compensation due for such training? Should infringing models or outputs be destroyed?

These questions are not much different from those we have been asking over the past year and a half. However, with Sora we might see more voluminous infringement of IP materials that were not seriously implicated in earlier AI models, such as photographs, trademarks and designs. Snippets of Sora-created material made publicly available suggest that Sora relies heavily on trademarks and designs (for products, scenery, shops, etc.). Furthermore, as sound and music elements are integrated into Sora's capabilities, fresh concerns around copyright protection and sound trademarks are poised to emerge. Finally, photographers may contend that segments of Sora-generated videos encroach upon their rights. While OpenAI has not divulged the precise origins of the data used to train Sora, it has disclosed that it used publicly available videos and content licensed from copyright holders, and it is expected to address these concerns through licensing agreements or other contractual arrangements with intellectual property rights holders.

Authorship raises equally significant questions. Prevailing legal norms stipulate that only natural persons may be deemed authors, and in the current legal landscape it is highly unlikely that AI-generated output can attract copyright protection. In practice, OpenAI will most likely (as it does with ChatGPT) assign users all rights to the output created through the tool. But such outputs may not qualify for copyright protection, which raises questions about whether any copyright can be enforced in this context.

Conclusion

Sora is not yet publicly available and is expected to undergo further safety testing before release to the general public. As with other AI tools, not only is Sora not safe from legal issues, it may be even more exposed to potential lawsuits, as it appears to rely even more heavily on IP-protected materials (not just copyright, but also trademarks and designs). As we await decisions in the pending litigation, AI is rapidly evolving, and legislators need to follow developments closely and act decisively to safeguard the interests of both users and IP owners. Establishing guidelines and safeguards to prevent misuse will be essential for maintaining trust in the technology and ensuring it benefits society. The era of AI is indisputably upon us. Whether you like it or not, it will evolve over time and has already become part of our reality.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.