On 22 June 2020, the British Institute of International and Comparative Law (BIICL) hosted a webinar on the future regulation of artificial intelligence (AI). The event was led by Lord Clement-Jones CBE – former chair of the House of Lords Select Committee on Artificial Intelligence and Co-Chairman of the All-Party Parliamentary Group on AI – and included Prof Christopher Hodges of Oxford University, Paul Nemitz, a principal adviser at the European Commission, Jacob Turner of Fountain Court Chambers, and Claudia Pagliari of Edinburgh University.

The need for AI regulation is being recognised globally and ever more pressingly. Germany's Data Ethics Commission released a report with its recommendations for the EU in October 2019. This year, the European Commission published its White Paper on Artificial Intelligence and the UK's Centre for Data Ethics and Innovation published its AI Barometer, all of which hold common concerns at their core. Against this background, the webinar covered a number of issues around why AI should be regulated, how, and by whom.

In this blog, we pick out three of the key threads.

Trust-based regulation vs deterrence-based regulation?

Prof Hodges suggested that what he described as the 'traditional model of regulation' – whereby compliance is secured using a legalistic model based on deterrence – will not work for AI. This is because of what he sees as the inability of a state-based model to effectively police the global companies that will be at the forefront of AI.

As an alternative to a deterrence-based model, he posited the need for a regulatory system built on fairness – one where the players are compliant because they see it as the right thing to do, rather than acting solely out of fear of enforcement.

Prof Hodges pointed to changing ethical and cultural approaches in relation to the traditional debate over shareholder vs stakeholder value. He highlighted sectors such as civil aviation and the nuclear industry, which have developed an open culture built on trust. This involves sharing information, including owning up to mistakes, which allows the industry to learn.

Prof Hodges argued that whilst large fines capture the attention of a company's board, they end up being accepted as part of the cost of doing business, an expense like any other.

In addition, stringent enforcement measures can lead to regulators being seen as the enemy, reducing opportunities for collaboration and development. It can also exacerbate the problem of information asymmetry between the company and the regulator as the company seeks to hide or deny any breach of the rules.

By contrast, Mr Nemitz argued that democratically made rules with high fines are important because they provide the necessary deterrent effect. However, he agreed that a positive dialogue is needed, alongside deterrents, in order to enable good practices.

In order to develop a fair system built on trust, Prof Hodges set out a number of questions that would frame the creation of clear and universal rules:

  • Who will act as the equivalent of a global parliament to make coherent, consistent rules?
  • How will the objectives, principles and rules be determined?
  • What evidence will be required to show adherence?
  • Who will monitor adherence?
  • Who will investigate and resolve any non-adherence?

Jurisdictions and sectors

In response to Prof Hodges' trust-based approach, Lord Clement-Jones highlighted the degree to which trust depends on cultural factors that can vary from country to country, citing the greater trust in robots in Japan than in European countries as an example. Prof Hodges suggested that people will have a cross-cultural understanding of right and wrong that will underpin their decisions on who can be trusted, allowing for a global regulatory matrix into which companies will be able to integrate their different cultural approaches.

Mr Nemitz, on the other hand, argued that regulation based on global consensus is not a realistic solution given the length of time that it can take for such consensus to emerge. States cannot be told to wait for such consensus before taking their own decisions to regulate the actions of, and risks brought by, companies in their markets. For him, the example of the internet is a lesson in the dangers of letting technologies develop unregulated.

He suggested that AI has long been governed by ethical principles and neoliberal philosophies that allow global companies to go about their business relatively unimpeded by hard regulation. Perhaps unsurprisingly for someone from the European Commission, Mr Nemitz stated that regulations need to be introduced to ensure a level playing field within which all players are held to account. He alluded to the globally recognised principle that if a company sells goods and services into a market then it is subject to the rules of that market, even if the company is based outside the relevant jurisdiction.

Mr Nemitz also underlined the importance of sectoral regulation of AI. He pointed to the European Commission's White Paper, which proposes that there should be a handful of basic rules applying in all situations but that the specifics should be defined on a sectoral basis:

  1. Risk stratification - those who produce AI, or use it in the market, must undertake an impact assessment of the risks they create through the AI's intended and unintended uses.
  2. Robustness - the AI must do what it is intended to do and perform correctly.
  3. Human oversight - the point at which a human takes over from an algorithm needs to be secured (illustrated in the sketch after this list).
  4. Specificity - there will be specific requirements for certain AI applications, and the AI has to comply with all the rules of its sector. People must know they are dealing with AI and not a human being.
  5. Limitations - AI must not do what a human is forbidden from doing.
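
To make the human oversight rule concrete: in engineering terms it is often implemented as a 'human-in-the-loop' gate, in which an algorithm's output is acted on automatically only when the system is sufficiently confident, and is otherwise routed to a person. The sketch below is purely illustrative – it is not drawn from the White Paper, and the function names, threshold and decisions are all hypothetical.

```python
# Hypothetical 'human-in-the-loop' gate illustrating the human oversight rule:
# the algorithm acts alone only when it is confident; otherwise a person decides.

CONFIDENCE_THRESHOLD = 0.95  # illustrative cut-off, not a regulatory figure


def model_predict(case: str) -> tuple[str, float]:
    """Stand-in for a real model: returns a decision and a confidence score."""
    return ("approve", 0.82) if "routine" in case else ("reject", 0.99)


def ask_human(case: str, suggestion: str) -> str:
    """Stand-in for escalation to a human reviewer."""
    print(f"Escalated to human reviewer: {case!r} (model suggested {suggestion!r})")
    return "approve"


def decide(case: str) -> str:
    decision, confidence = model_predict(case)
    if confidence < CONFIDENCE_THRESHOLD:
        return ask_human(case, decision)  # the human takes over from the algorithm
    return decision


print(decide("routine renewal"))  # low confidence -> a human decides
print(decide("flagged claim"))    # high confidence -> the automated decision stands
```

What the rule secures is not the model itself but the escalation path: however the algorithm behaves, there is a defined moment at which a human can take over.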

Whilst broadly agreeing with these rules, barrister Jacob Turner challenged the last point. He argued that where certain legal limitations are imposed on humans based on their capacities, if AI is not subject to the same human frailties then there is no justification for imposing the same limitations. He gave the example of the 70mph speed limit imposed for safety reasons on the UK's roads and suggested that, if self-driving cars can be developed to drive safely at 150mph, they should be allowed to do so.

Hard regulations and moral obligations

Mr Turner grouped the existing framework for AI regulation into three categories:

  1. existing rules of general application which impact AI,
  2. rules which are designed specifically for AI, and
  3. self-imposed restraints by companies.

He cited the General Data Protection Regulation (GDPR) as the main piece of existing regulation affecting AI, in particular Articles 13–15, which relate to automated decision-making. He saw the 'right to explainability' as the most important of those rules, under which the process behind significant decisions made by algorithms must be capable of understandable explanation. However, AI can sometimes be hard to explain in a traditional manner.
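
As a rough illustration of why some systems are easier to explain than others, the sketch below (ours, not Mr Turner's; the factor names, weights and threshold are invented) shows a transparent scoring model whose decision can be narrated factor by factor – precisely the kind of account that GDPR-style explanation requires, and that is much harder to extract from an opaque neural network.

```python
# Hypothetical sketch of an 'explainable' automated decision. A linear scoring
# model makes each factor's contribution explicit, so the decision can be
# explained in plain language; opaque models offer no such ready-made account.

WEIGHTS = {                 # invented weights for an illustrative loan decision
    "income_ratio": 2.0,
    "years_employed": 0.5,
    "missed_payments": -1.5,
}
THRESHOLD = 3.0             # invented score needed for approval


def decide_and_explain(applicant: dict) -> tuple[bool, list[str]]:
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    approved = score >= THRESHOLD
    # Every factor's contribution is visible, so the decision can be narrated.
    explanation = [
        f"{factor}: contributed {value:+.2f} to the score"
        for factor, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    ]
    explanation.append(f"total score {score:.2f} against a threshold of {THRESHOLD}")
    return approved, explanation


approved, reasons = decide_and_explain(
    {"income_ratio": 1.8, "years_employed": 4, "missed_payments": 2}
)
print("approved" if approved else "declined")
for line in reasons:
    print(" -", line)
```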

Although the European Commission's White Paper is to be welcomed, elements of it are controversial and there is still a large gap in terms of AI-specific regulation. In order to fill this vacuum, companies are attempting to self-regulate. Mr Turner pointed to the recent moves by Microsoft, Amazon and IBM to stop their facial recognition software being used by police in the US. He welcomed such steps and recommended the introduction of internal ethics committees.

Claudia Pagliari agreed that hard regulation is required, but suggested that good behaviour should also be encouraged in other ways. For example, she pointed to the idea of integrity as a new form of capital, citing Apple as an example of a company that seeks to build trust and project integrity as part of its brand. As such, self-regulation should be encouraged and rewarded by both governments and customers.

Conclusion

All of the participants agreed that the regulation of AI is very much in its infancy.

However, it is important to note the point made by Mr Nemitz - companies often welcome clear regulation because of the certainty it brings, and the ability to direct innovation within clear tramlines.

Mr Turner agreed that it is a false dichotomy that regulation and innovation are opposed. Regulation can benefit business as it allows for planning and provides certainty.

Regulation is therefore important both from a commercial perspective and from a public policy perspective. It will also be key to developing the trust that many of the webinar's participants considered important as the public will only trust companies so far in favouring ethics over profit. And that trust will be essential, particularly if governments do begin asking people to travel in self-driving cars at speeds of 150mph or to place their health in the hands of automated diagnostic software. The public will want to know that the state stands behind such technology, that it oversees its standards, and that it has the power and the willingness to step in and enforce should anything go wrong.

Read the original article on GowlingWLG.com

Originally published 29 July 2020
