Introduction
Artificial Intelligence ("AI") is taking the world by storm and is widely expected to change every aspect of life over the medium (or maybe even short) term. As AI continues to evolve at an unprecedented pace, promising to revolutionize industries and enhance our daily experiences, concerns about its potential risks and ethical implications are growing. This article delves into the complexities surrounding AI regulation, exploring the potential benefits and drawbacks of both regulated and unregulated approaches.
What is AI?
So, what exactly is artificial intelligence? Imagine a really smart computer that can learn and even think the way humans can. It works by learning from vast amounts of information. For example, GPT-4 is reported to have learnt from (or been trained on) roughly 10 trillion words. The big change in AI has been that instead of simply regurgitating information already available on the internet, AI can now "think" - evident from its ability to pass various tests, including the bar exam1! Until now, computers could only follow instructions; now they can mimic human intelligence to some extent by reasoning, answering questions and performing tasks.
Potential Risks of Unregulated AI
While AI offers immense potential benefits, its unregulated growth poses significant risks that could have far-reaching consequences. Some of these risks are discussed below.
Will AI teach someone to rob a bank?
All the AI systems that we tried (including ChatGPT, Copilot and Gemini) refused to answer this question. These AI systems have been trained by their respective developers not to answer questions that promote harmful or illegal activities. However, this restraint appears to stem from internal principles adopted by these tech companies, rather than from any laws regulating AI. In fact, one of the ethical concerns surrounding AI is that such models could be trained by bad or rogue actors to assist with illegal activities. This may be one of the reasons that leading voices from the AI industry are actually lobbying for AI to be regulated2.
Will AI steal my intellectual property?
There are serious concerns about the use of intellectual property by AI. In fact, in September 2023, the Authors Guild, along with several prominent authors, filed a class-action lawsuit against OpenAI, the creator of ChatGPT and the GPT large language models3. The lawsuit alleges that OpenAI engaged in copyright infringement by using the authors' copyrighted works to train its AI models without permission or compensation. While this lawsuit is still ongoing, the advent of AI will see courts and regulators grappling with more and more issues around intellectual property infringement by AI models.
Will AI steal my job?
Another key question for regulators to consider is the impact of AI on jobs. Continuing with the example of the Authors Guild, one of the concerns highlighted in the case is that OpenAI's use of copyrighted works to train AI models capable of generating similar content threatens to diminish the earning potential of authors by flooding the market with AI-generated alternatives. AI is widely expected to result in job displacement - it can already perform many tasks that previously required humans. For example, chatbots and virtual assistants powered by AI can handle many routine customer inquiries, reducing the need for human customer service representatives. AI-powered systems can make automated calls and generate leads, reducing the need for human telemarketers. On a positive note, though, AI is expected to lead to the creation of new types of jobs, such as the prompt engineer, a professional who specializes in crafting effective prompts for large language models. Regulations and policies need to focus on upskilling large numbers of people and ensuring that AI does not lead to massive economic inequality.
Does AI know what I am doing?
AI-powered surveillance systems can track individuals' movements, monitor their behavior, and even predict their future actions. This raises serious privacy concerns, as it can erode individual freedoms and create a society where citizens constantly feel watched. Moreover, unregulated AI could enable the development of invasive surveillance technologies that could be used for malicious purposes.
What should we do to manage risks posed by AI?
To mitigate these risks, it is essential to develop robust regulatory frameworks that govern the development and deployment of AI. These frameworks should address issues such as data privacy, algorithmic bias, and the ethical implications of AI technologies.
Who is responsible for developing frameworks?
The rules for AI are being shaped by a mix of actors, including:
Governments: Governments of some jurisdictions have enacted laws to regulate AI. The EU has taken a leading role in AI regulation with its Artificial Intelligence Act, which classifies AI systems by risk level (e.g., high-risk systems like facial recognition). The EU also implements data protection through the General Data Protection Regulation (GDPR), which affects AI systems because they are trained on data, including personal data.
International Organizations: The Organisation for Economic Co-operation and Development ("OECD") has developed AI Principles to guide responsible AI development and usage, which over 40 countries have adopted. Additionally, the United Nations Educational, Scientific and Cultural Organization ("UNESCO") has adopted a recommendation on the ethics of AI, providing a global framework for regulating AI ethically and ensuring transparency.
Self-Regulation: Technology companies at the forefront of developing AI have also adopted their own principles relating to AI. For example, Google published its AI Principles in 2018 and states that it follows them.
Conclusion
The future of AI is uncertain, but the need for thoughtful regulation is undeniable. Striking the right balance between innovation and control is crucial to harnessing the benefits of AI while mitigating its potential harms. By establishing clear guidelines and ethical frameworks, policymakers can ensure that AI is developed and deployed responsibly, benefiting society as a whole. International cooperation will be key to ensuring that AI is developed and used in a responsible and beneficial manner.
This post has been contributed by Ms. Vaneesa Agrawal, Founding Partner.
Originally published 21 September 2024.
Footnotes
1. https://law.stanford.edu/2023/04/19/gpt-4-passes-the-bar-exam-what-that-...
2. https://www.nytimes.com/2023/09/10/business/dealbook/an-ai-leader-urges-...
3. https://authorsguild.org/news/ag-and-authors-file-class-action-suit-agai...
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.