6 July 2023

Regulating Artificial Intelligence In India: Challenges And Considerations

Introduction

On the question of regulating artificial intelligence ("AI"), India's position is not yet clear. In April this year, the Ministry of Electronics and Information Technology ("MeitY") stated that the government did not plan to introduce a specific law to govern the growth of AI in the country. By early June, however, it was clarified that the government would regulate AI after all – at least to protect digital users from harm, likely through the proposed Digital India Act ("DIA"). MeitY's re-articulated position may have been influenced by developments in the European Union ("EU"), where parliamentary committees adopted a draft negotiating mandate (a compromise text) in May on establishing the world's first harmonized rules for AI systems, based on the level of risk they pose to safety, livelihoods, and rights. Further, the European Parliament adopted its negotiating position on the Artificial Intelligence Act (the "Proposed AI Act") on June 14, ahead of talks with EU member states on the final shape of the law, which aim to reach agreement by the end of 2023.

Although AI involves several sub-fields and various methodologies, AI policymakers mainly focus on automated decision-making or machine learning ("ML") systems, which are algorithmically controlled. Even narrowed down thus, significant regulatory challenges arise when advanced ML algorithms share important characteristics with human decision-making processes. For instance, there could be concerns about who bears liability when a system's data processing leads to harm. At the same time, especially from the perspective of those affected by automated decision-making, the increased opacity, newer capabilities, and uncertainty associated with the use of AI systems may create diverse new challenges – both legal and regulatory.

What is AI?

AI uses technology to automate tasks that normally require mature, human-like intelligence – in other words, tasks that, when performed by people, call upon various higher-order cognitive processes. While this definition is based on 'human' intelligence, the Organization for Economic Co-operation and Development ("OECD") defined AI in 2019 by reference to its underlying technical traits – i.e., as a machine-based system that is designed to operate with varying levels of autonomy, and which can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.

ML

Techniques that infer patterns from large datasets – otherwise referred to as ML – are often bracketed under the label of 'AI'. Such AI/ML systems can produce useful and 'intelligent' results – albeit without human cognition. In other words, these systems produce useful information through heuristics alone – i.e., by detecting patterns in data, and by using knowledge and rules which human beings have specifically encoded into forms that computers can process. Nevertheless, such computational mechanisms do not resemble or match human thinking (or so it seemed until recently).

Thus, the mechanisms and technological approaches that allow AI to automate tasks fall into two broad categories: (1) ML, and (2) logical rules and knowledge representation. Notably, ML algorithms can, in effect, program themselves: rather than having decisional rules explicitly laid out for them by human beings, they detect useful rules on their own – especially when they examine large volumes of data, in which statistical regularities become more discernible.
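
To illustrate this distinction, consider the following minimal sketch (assuming the scikit-learn library; the loan-approval features, thresholds, and training data are entirely hypothetical), in which a model infers its own decision rules from example data instead of having a human encode them:

```python
# A minimal, illustrative sketch (assuming the scikit-learn library) of an ML
# model inferring decisional rules from data rather than having them hand-coded.
# The loan-approval features and training data below are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicants: [annual income (lakh INR), number of past defaults]
X = [[12, 0], [3, 1], [8, 0], [2, 1], [15, 0], [4, 1]]
y = [1, 0, 1, 0, 1, 0]  # 1 = loan approved, 0 = loan rejected

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The if/then rules printed here were inferred from the data, not written by a human.
print(export_text(model, feature_names=["income", "defaults"]))
print(model.predict([[10, 0]]))  # predicted outcome for a new, unseen applicant
```

The printed tree consists of human-readable if/then rules that no programmer wrote – the essence of the 'self-programming' described above, and a source of the accountability questions discussed later in this article.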

Indeed, most major AI systems involve some degree of ML, including autonomous vehicles, predictive analytics, fraud detection, and automated medicine. However, the success of ML depends on data abundance: its rise in recent times has been fueled by a surge in data availability via the internet, as more socioeconomic processes and institutions operate through computers with stored, networked information.

While the OECD definition arguably excludes content-generation systems, the definition of AI within the EU's Proposed AI Act explicitly includes autonomous systems that generate content (in addition to predictions, recommendations, or decisions).

Current Developments

Even as recent MeitY statements suggest that AI may be regulated in India just like any other emerging technology (to protect digital users from harm), MeitY maintains that the purported threat of AI replacing jobs is not imminent, because present-day systems – being essentially task-oriented and devoid of human reasoning and logic – are not sophisticated enough.

Such claims are consistent with those made by AI commentators until about a year ago. For instance, researchers have argued that existing AI demonstrates a 'narrow' intelligence, comprising systems which are tailored for precise kinds of tasks only, with a particular set of characteristics. Further, most current AI technology tends not to be adaptable from one activity to other, unrelated activities. Thus, it was argued that current AI technology mainly works for activities where there are underlying patterns, rules, definitive right answers, and formal structures. By contrast, AI works poorly in areas that are conceptual, abstract, value-laden, and policy/judgment-oriented; require common sense and intuition; involve persuasion and open-ended conversation; or involve engagement with the meaning of real-world humanistic concepts, such as norms, constructs, and social institutions.

Nevertheless, recent generative AI applications – such as ChatGPT – are able to follow instructions, process human prompts, and write text. Representing one of the biggest leaps in AI history, ChatGPT is an example of large language model ("LLM") technology. Such generative AI applications are typically built using foundation models. These models, in turn, contain expansive artificial neural networks, loosely inspired by the networked neurons of the human brain. Foundation models are thus part of 'deep learning' – the approach enabling several of the recent advances in AI technology. However, the new models that generative AI applications rely on represent a revolutionary change even within the domain of deep learning: unlike previous models, they can process extremely large and varied sets of unstructured data, and can perform more than one task.
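
As a purely illustrative sketch of such prompt-driven text generation (assuming the open-source Hugging Face 'transformers' library; 'gpt2' is used here only as a small, freely available example model, not the model underlying ChatGPT):

```python
# A minimal sketch of an LLM continuing a natural-language prompt.
# Assumes the Hugging Face 'transformers' library; 'gpt2' is a small,
# freely available example model, not the model behind ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence could transform the economy by",
                   max_new_tokens=30)
print(result[0]["generated_text"])  # the prompt plus model-generated text
```

Even this toy example displays the core behavior that distinguishes generative AI from earlier predictive systems: the output is newly generated content, not a classification or score.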

Accordingly, a recent report suggests that generative AI, accompanied by an exponential rise in hitherto-limited computational power, is poised to transform society and improve performance across diverse areas of the economy – such as sales and marketing, customer operations, software engineering, and research and development. In the process, AI's unprecedented impact on productivity could add immense value across the global economy. Further, generative AI has the potential to automate activities that currently absorb a substantial share of working time – an accelerated potential that stems largely from generative AI's ability to understand natural language. However, while generative AI can increase labor productivity across all sectors, realizing those gains will require investments to support workers as they shift activities and/or change jobs.

Acknowledging such developments, India has launched a national program on AI ("NPAI") to harness the potential benefits of AI and similar transformative technologies. A national AI portal – 'INDIAai' – acts as a content repository in this regard. Based on a series of roundtable discussions on generative AI – including in respect of legal challenges and ways to mitigate harm – a report was published in May 2023 (the "Gen AI Report") that focuses on economic impacts and other important consequences.

The Gen AI Report

The legality of using copyrighted material to train AI models – and the ease with which such material may lawfully be used – remains unclear. Further, existing intellectual property laws may be ill-equipped to address AI-generated creations and works (even when they stem from human prompts). Nevertheless, consistent with MeitY's general stance, the Gen AI Report maintains that all generative AI regulations should, at a minimum, protect individuals against harm. Such harms may include violations of privacy and breaches of data protection rights; discrimination with respect to accessing services; and exposure to false and/or misleading news/information. Second-order harms may include violations of intellectual property rights ("IPR").

India's Stated Position

Pursuant to a March 2018 taskforce report and a discussion paper on India's national AI strategy released later that year, NITI Aayog deliberated upon the advisability of regulating AI. Accordingly, in a February 2021 approach document, it proposed certain overarching principles for 'responsible AI' development (the "Responsible AI Report").

At present, acknowledging the emergence and deployment of AI across sectors such as education, manufacturing, healthcare, and finance, the Indian government considers AI to be a 'kinetic enabler' for the growth of the digital economy, including in respect of investments and jobs. Thus, AI deployment may continue to be governed through laws related to privacy, data protection, intellectual property, and cybersecurity.

At present, it appears that MeitY is keen on harnessing the benefits of AI, as the government continues to champion its 'Digital India' drive. Beyond that, the government has undertaken various other developmental initiatives involving AI, including in relation to skilling and capacity-building; the health sector; defence and defence products; agriculture; and international cooperation.

Key Lessons

As pervasive and cross-sectoral digitalization trends demonstrate, a worldwide digital transformation is currently underway, assisted by the increasing deployment of AI. However, while data protection laws do focus on digital interactions, they prioritize personal data and privacy alone. Yet anonymized and non-personal information also comes into play where AI is concerned, including with respect to machine-generated data. Accordingly, in addition to data protection laws, several other legislative spheres may assume salience in the future, including those associated with telecommunications, competition, IPR, and product liability. Further, as AI is applied across the wider economy for operations and governance, regulatory regimes connected with medical practice, logistics, infrastructure, manufacturing, financial markets, banking, government procurement, and several others may need to be modified from time to time.

Lessons from Germany

In 2017, the Indian Supreme Court held that a right to privacy was integral to the freedoms guaranteed through fundamental rights under the Indian constitution. In Germany, back in 1983, the Federal Constitutional Court ("FCC") had elaborated, more specifically, a 'fundamental right to informational self-determination' in response to privacy risks associated with emerging digitalization. Twenty-five years later, the FCC articulated a fundamental right to the confidentiality and integrity of information technology ("IT") systems. Further, in 2016, the FCC held that the protection afforded to IT systems extends to networked computers (used by individuals) in connection with cloud storage (including data stored on external servers), online tracking, etc. Thus, the use of AI associated with such networks may fall within the scope of this newly articulated fundamental right. If India witnesses similar jurisprudence in the future, the trajectory and contours of AI deployment may change substantially.

Lessons from the EU

AI-specific rules, when framed in isolation, may prove inadequate as long as they remain detached from the sector-specific contexts in which AI gets deployed. While India's new digital data regime – including the proposed Digital Personal Data Protection Act ("DPDP") and the DIA – may apply to AI in the course of digital data processing generally, such laws might also provide an initial template for designing robust safeguards in other regulatory regimes.

For instance, in Europe, Article 35(1) of the EU's General Data Protection Regulation ("GDPR") requires prior impact assessments in cases of high-risk processing, especially when new technologies are used. Further, GDPR's Article 35(3)(a) specifically mandates such impact assessments when systematic and extensive evaluations of personal aspects related to natural persons are to be performed, based on automated processing, including through the use of profiling and decision-making technologies.

Somewhat similarly, Clause 11(2)(c) of the current draft of India's DPDP (dealing with additional obligations of 'significant data fiduciaries') also requires impact assessments and periodic audits to be performed by certain classes of notified entities that process high volumes of sensitive data – especially where there are risks of harm to individuals, or implications for the public interest or national security.

Further, GDPR relies on the possibility of 'regulated self-regulation'. For example, it encourages associations or other bodies to draw up codes of conduct to ensure that GDPR is implemented effectively (Recitals 77 and 98). This may include codes about the use of AI as well, at least to the extent that a personal data protection law (such as GDPR) applies to it. In addition, GDPR's Article 41 provides for the accreditation of appropriate bodies for the purpose of monitoring compliance with such codes of conduct. EU member states are also encouraged to establish certification mechanisms and data protection seals and marks (Article 42). Corresponding legal measures may be developed outside GDPR as well.

Certification by publicly-accredited or government bodies can likewise be provided for, especially with respect to high-risk development and/or deployment of AI. To the extent that certification is voluntary (as is customarily the case), it makes sense to create incentives for uptake – for instance, by exempting or limiting liability, e.g., for robotics under product liability law. In sensitive areas, certification can also be made mandatory.

The integration of AI introduces unique challenges that require careful governance. As AI systems become more sophisticated and autonomous, traditional concerns with respect to accountability, bias, and potential risks to both individual rights and societal well-being may persist. Complex AI algorithms, often referred to as 'black boxes', may produce decisions that even their developers struggle to fully understand. Consequently, calls for transparency in AI decision-making algorithms may grow louder still in the coming years.

Lessons from Japan

Countries that adopt a different approach to regulating AI (relative to the EU) may also provide a good reference point. For instance, rather than imposing binding regulations on AI systems, Japan focuses on voluntary efforts made by companies towards AI governance. Much like NITI Aayog's Responsible AI Report, the AI Governance Report published by the Japanese Ministry of Economy, Trade and Industry proposed agile governance and non-binding guidance for the purpose of supporting a responsible AI regime. The thinking behind such an approach is to avoid a rigid law that ends up stifling innovation.

Japan's two-pronged approach to AI governance strikes a balance between regulation and promotion. On the one hand, the country has enacted and/or amended sector-specific laws to govern the use of AI across industries. For instance, the amended Financial Instruments and Exchange Act, 2006 regulates algorithmic high-speed trading, while the Antimonopoly Act, 1947 now covers concerns associated with unfair trade practices when conducted by algorithms. Further, the Act on the Protection of Personal Information, 2003 ("APPI") recently introduced the concept of pseudonymized information to make obligations with respect to handling such data less onerous, thus encouraging businesses to ensure that input data used in AI systems is appropriately pseudonymized.
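
By way of illustration only, the following sketch shows one common pseudonymization pattern – replacing a direct identifier with a salted hash before data enters an AI pipeline. The field names and the salted SHA-256 approach are assumptions for this example, not a statement of what the APPI actually requires:

```python
# Illustrative sketch of pseudonymizing a record before it feeds an AI system.
# The field names and salted SHA-256 approach are assumptions for this example,
# not a description of the APPI's actual requirements.
import hashlib

SALT = "replace-with-a-secret-value"  # stored separately from the dataset

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, hard-to-reverse token."""
    return hashlib.sha256((SALT + identifier).encode("utf-8")).hexdigest()[:16]

record = {"customer_name": "A. Sharma", "purchase_amount": 4200}
record["customer_name"] = pseudonymize(record["customer_name"])
print(record)  # downstream AI tooling now sees a token, not the raw name
```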

On the other hand, Japan has also undertaken regulatory reforms across sectors to promote the use of AI. For instance, amendments to the Road Traffic Act, 1960 and the Road Transport Vehicle Act, 1951 allow manufacturers to produce self-driving vehicles. Similarly, amendments to the Installment Sales Act, 1961 allow credit card companies to use AI for the purpose of determining user credit ratings. Further, recent amendments to the Copyright Act, 1970 allow the use of data in ML, and similar amendments to the Unfair Competition Prevention Act, 1934 protect artists and creators by ensuring that underlying datasets are sold to developers for an appropriate fee.

Thus, while the EU is presently focused on formulating binding regulations to govern AI systems, Japan adopts a more flexible approach, encouraging voluntary efforts and providing non-binding guidance.

Conclusion

By implementing a multi-pronged framework, India can foster a responsible AI trajectory for itself. Such a trajectory may encourage innovation and mitigate risks without compromising individual rights and privacy. Indeed, since AI produces consequences involving national security and foreign policy too, the ability to harness AI effectively may determine the global balance of power over the next few decades.

This insight/article is intended only as a general discussion of issues and is not intended for any solicitation of work. It should not be regarded as legal advice and no legal or business decision should be based on its content.
