We asked ChatGPT to write a rap song about the legal and regulatory challenges related to generative AI like ChatGPT. We were not disappointed:
Source: ChatGPT.
OpenAI's ChatGPT has taken the internet by storm since its launch in November 2022. In March 2023, OpenAI released GPT-4, which has enhanced processing capabilities and can handle up to 25,000 words of text – nearly eight times ChatGPT's capacity. GPT-4 can also respond to images – for example, by providing recipe suggestions from photos of ingredients. Microsoft launched its AI-integrated Bing search engine, and both Microsoft and Google announced AI tools for their workplace and business offerings. In February 2023, Google also launched Bard, its rival to ChatGPT.
Legal and Regulatory Challenges
The utility of these technologies is unquestionable. However, as highlighted rather melodically by ChatGPT itself, there are several legal and policy concerns that we need to be mindful of.
- Bias and discrimination: Generative AI models like ChatGPT are trained on large datasets. Any biases in those datasets can translate into biased responses from the AI model. A well-known example is the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, which was used by several US courts to predict the likelihood of a defendant reoffending or failing to appear for their court date. ProPublica found that the algorithm perpetuated systemic racial bias by falsely labelling African American defendants as high risk nearly twice as often as Caucasian defendants.
- Misinformation and disinformation: AI models can rapidly spread fake news. Chatbots are often designed to mimic human behaviour and tone, making it difficult for people to distinguish between content generated by chatbots and content written by humans. To prevent AI models from spreading fake news, it is important to ensure that (i) AI models are trained on high-quality datasets that are free from biases; and (ii) AI tool developers – through thoughtful UI/UX design – as well as governments promote media literacy, educating individuals on how to identify misinformation and disinformation.
- Intermediary liability: Most countries provide intermediaries like Google, Twitter, and Facebook a "safe harbour" from liability for content posted by their users. However, regulators globally have called for intermediaries to be held accountable for algorithmically promoting content that incites violence or encourages terrorism. Similar concerns may arise for content generated by AI models.
- Intellectual property rights: IP concerns arise on both the input and output sides. On the input side, the datasets that AI models are trained on may include copyrighted materials, such as books and articles. On the output side, if an AI tool generates a creative work, such as a story, a poem (or a rap song!), the question of who owns the intellectual property arises. This concern is magnified when such output can be commercialised. Currently, in most countries, AI tools are not considered legal entities capable of owning IP. In 2021, South Africa became the first country to grant a patent that names an AI system as the inventor. Globally, however, experts remain divided on the issue.
- Privacy and data protection: Generative AI models like ChatGPT may collect and process personal data during conversations with users. This raises concerns about data protection and privacy, as the model may have access to sensitive information that users may not want to share. Ideally, robust data protection and privacy laws – which include policies on data minimisation, data anonymisation, data deletion, and encryption – should be a precursor to the full-scale deployment of AI. On 31 March 2023, the Italian data regulator temporarily banned ChatGPT, stating that it illegally collects and processes personal data. French, German and Irish regulators reached out to their Italian counterparts to better understand the basis of the ban. The ban has since been reversed, but it has raised pointed questions. Separately, Samsung banned its employees from using generative AI tools like ChatGPT over growing concerns about the security risks such tools present.
Such concerns have been echoed by tech experts like Elon Musk, Geoffrey Hinton and Steve Wozniak, who believe that generative AI, if left unregulated, can pose profound risks to society and humanity.
But is pressing pause the right way forward?
Regulating AI: The Way Forward
Governments around the world are trying to understand the key issues pertaining to AI and have already kicked off policy and regulatory discussions on that front. For example, the EU's AI Act is on track to become law. Further, Canada recently tabled the Artificial Intelligence and Data Act as part of Bill C-27.
In the UAE too, the government is committed to setting a strong foundation for AI to prosper. AI falls within the purview of the UAE Minister of State for Artificial Intelligence, His Excellency Omar Sultan Al Olama, under the Ministry of Economy. The UAE Council for AI and Blockchain operates under the Ministry and is tasked with "proposing policies to create an AI friendly ecosystem". The Council has published the UAE National Strategy for Artificial Intelligence 2031, which highlights the need to ensure strong governance and effective regulation of AI. The Council has also put out several guides pertaining to AI, such as on AI ethics and deepfakes. The UAE's Artificial Intelligence, Digital Economy and Remote Work Applications Office has launched a comprehensive generative AI guide on the use of applications such as ChatGPT and Midjourney. The guide sets out 100 practical applications and use cases of AI – for example, managing cash flows or examining the techniques used to construct a website.
Financial free zones in the UAE are also taking a proactive role in understanding the role of AI in finance. In March 2023, the Dubai International Financial Centre (DIFC), in partnership with the UAE Artificial Intelligence Office, launched an Artificial Intelligence and Coding Licence, which offers additional benefits for companies aiming to enter the DIFC FinTech Hive. In 2019, Abu Dhabi Global Market (ADGM) issued its regulatory framework for Digital Investment Managers (also known as 'robo-advisers') operating in ADGM.
Currently, a patchwork of laws governs AI in the UAE. The UAE Federal Law Combating Discrimination and Hatred provides safeguards against AI-driven bias and discrimination. The Federal Data Protection Law addresses privacy and data governance concerns. The Federal Consumer Protection Law can potentially hold AI companies liable for harm to consumers. The Federal Cybercrime Law can penalise miscreants who seek to harm AI systems. The various intellectual property laws require that owners of intellectual property are treated fairly. How these laws will be interpreted in the context of AI models like ChatGPT, however, remains to be seen.
Many of these problems are not new; they are inherent to the existence of the internet itself. Similar issues surfaced, albeit in different forms, with the advent of social media and online search. In our experience, compliant companies that adopt an ethics-by-design approach to product development are able to navigate accompanying legal developments with relative ease. We hope the same holds true for OpenAI, the company behind ChatGPT. After all, as it said in its own rap song:
Legal and regulatory challenges, ain't no joke,
But I'm ChatGPT, I'll stay woke!
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.