"Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." -- Stephen Hawking.
Billions of dollars are being raised and spent on developing new products and services based on AI. Although AI has already started to affect our lives, for example through data analytics that can predict our likes, dislikes and buying behaviour, the technology is still in its infancy. Exciting products, such as driverless cars and new ways of diagnosing and treating illnesses, are being developed.
In the midst of these revolutionary changes, some very senior industry leaders have raised the issue of regulating AI. It is seen as a technology with immense benefits and risks. Frequent comparisons with nuclear energy and nuclear weapons bring out these apprehensions. Fears have been expressed about AI-guided weapon systems, and about other ways of abusing the technology that could have disastrous consequences. However, there are no proposals in the public domain about a regulatory agenda from the developers of AI, especially the large tech companies.
Some players have articulated key principles to be kept in view while designing AI systems, in effect a form of self-regulation. These principles include fairness, transparency, inclusiveness, reliability, safety, privacy and security.
Can governments strike the right balance between their roles of promoting and policing Artificial Intelligence?
It is widely believed that the adoption of digital technologies has improved the standard and quality of life across the globe. Some countries have reaped greater benefits by becoming home to some of the world's most successful digital companies and by creating an economic ecosystem that positions them for further growth and dominance. This is reflected in the recent paper on Artificial Intelligence published by the EU, in which the status of development in the EU is compared with that in China and the United States.
Some commentators have said that the internet's growth has come from its being open and free. This has incentivised and encouraged entrepreneurs and innovators to invest, take risks, and develop products and services for consumers. One point of view is that a good way of ensuring that AI grows is by not regulating it.
While the applications of and challenges from AI development are global in nature, the regulatory approaches taken by countries vary. For example, the EU took the lead in formulating and enacting the world's most detailed regulation on data usage and privacy. In contrast, the US is seen as a country that follows the principle of light-touch regulation.
Some countries are in the process of designing guidance documents that set out general principles of governance. The alternative is to adopt a more ex post, reactive approach and develop regulation on the go in response to practical experience. The trade-off is between addressing potential harms and possibly stifling the development of these technologies.
The current debate over facial recognition software reflects this dilemma. Money and time have been spent in developing and deploying these technologies. However, there are concerns about the purpose and safety of their usage, and some cities have banned the use of facial recognition software.
The question is: what would make more sense from both a public trust and a business predictability point of view?
Attribution of Liability
The challenges posed by machines and robots are to a large extent the same as those posed by humans, for which regulatory frameworks are already in place. For example, under consumer protection laws, manufacturers, distributors, suppliers and retailers are held responsible for any harm that their products may cause. In some jurisdictions, manufacturers may be liable for damage (death, personal injury or damage to private property) if a product or any of its component parts is found to be defective. Possibly, some of the existing laws and regulations on liability could be used, with or without suitable modifications, to mitigate the apprehensions and risks arising from the implementation of AI.
Under consumer protection laws, one of the circumstances in which manufacturers are often not held liable is when the defect did not exist at the time the product was put into circulation. Related to this is the question of whether a defect lies in the way a product was designed or in the way it has been used.
This raises the issue of attributing liability when an AI system causes harm. Ideally, this should mirror what is expected of a human being: if a driver causes an accident, the driver is liable. In the same manner, a driverless car that causes a foreseeable accident should attract liability. The additional issues here would be:
- distribution of this liability among the developer of the technology, the owner of the car and any third-party user/operator;
- affording machines and robots a legal personality status akin to that of human beings in law.
Legal personality for AI applications
The unpredictability of AI applications is probably perceived as the biggest challenge in regulating AI. It stems from the unpredictability of automation: an AI application may take decisions, or lead to outcomes, that fall outside the purview of its design or the instructions of its operator.
A question that has therefore been raised is whether some types of AI application should have a legal personality distinct from that of their creator or operator. Such a principle has been used in creating companies, which own assets and take on liabilities distinct from those of their owners, managers and employees. This structure has facilitated greater economic risk-taking and thereby higher levels of economic activity.
It may, however, be kept in view that when a company indulges in criminal conduct, its employees and directors are often held accountable. Even if AI were to have an independent legal personality, some liabilities, especially those arising from criminal action, would need to travel back to the human beings involved in its design and deployment. A further distinction exists here: whereas a company's independence is a legal fiction, AI may be genuinely autonomous. Closely linked to this is the ability of AI to enter into and enforce contractual relationships; existing contract laws and IT laws would have to be brought up to speed to accommodate such contracts. There are also issues such as the nationality to be given to a robot if a distinct legal personality were defined; Sophia, the humanoid robot granted Saudi Arabian citizenship, comes to mind. At the time of writing, there are no serious proposals on the table for giving AI applications a distinct legal identity. In the alternative, it may be possible to navigate existing consumer protection, contract and IT laws to arrive at the best possible solutions for AI functions.
The recent paper on AI by the EU argues that data will be the raw material for the development and operation of AI applications. Asymmetry in the current access and title to data among potential developers of AI has been identified as a key issue. This is, in a way, a competition issue, and the question is whether significant barriers to entry exist for developing some of these applications and how they could be overcome. Comments to the same effect have been made in the draft e-commerce policy published by the Indian Government.
Similarly, there are issues around IP rights for the various algorithms being developed, and around how the stance of regulators when considering IP applications affects the ability of small players to develop applications and scale up. Again, such issues could potentially be addressed through IP laws, in how they apply to the patenting of algorithms, the grant of inventorship and other relevant matters.
To conclude this discussion, an AI regulatory agenda need not warrant reinventing the regulatory wheel. Instead, it should incorporate two elements:
- the same gamut of regulations that would apply to a human being performing the same or a similar function, with modifications made to current legislation, or to proposals at an advanced stage, so that AI can be addressed through them;
- additional regulations that address and manage the risks posed by AI that are inherently unique to such systems and distinguish them from conventional products and services.
Rajnish Gupta - Associate Partner, Tax and Economic Policy, Ernst & Young, India.
Natasha Nayak - Senior Manager, Tax and Economic Policy, Ernst & Young, India.
The views expressed are personal and not the firm's.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.