The EU's recently released 'Coordinated Plan on Artificial Intelligence' and the introduction of the 'Algorithmic Accountability Act' as a bill in the US highlight the importance that governments around the world are placing on AI and its expected impact on society and the global economy.

As similar legal and policy developments start to emerge in Australia – the recent release of Data61's Ethics Framework being one example – we consider whether the approaches being taken to regulate AI in key overseas jurisdictions like Europe and the US are influencing AI policy-making in Australia.

What is the EU's Coordinated Plan?

The Coordinated Plan aims to foster the development and use of AI and robotics in Europe, and has a number of objectives, including developing ethics guidelines and ensuring the EU remains competitive in the AI sector. The plan also proposes joint action by EU Member States in four key areas:

  1. Increasing investment: At least €20 billion of public and private investments in research and innovation in AI through to the end of 2020.
  2. Making more data available: Increasing data sharing across borders.
  3. Fostering talent: Supporting advanced degrees in AI.
  4. Ensuring trust: Developing Ethics Guidelines.

What are the EU Ethics Guidelines?

Of the four key areas, the Ethics Guidelines are of most interest from a regulatory perspective. The Ethics Guidelines proposed in the Coordinated Plan are designed to 'maximise the benefits of AI while minimising its risks'.

Following the publication of draft Ethics Guidelines in December 2018 (which received more than 500 comments), revised Ethics Guidelines were released by the EU's High Level Expert Group on Artificial Intelligence on 8 April 2019.

The revised Ethics Guidelines focus on creating a concept of 'Trustworthy AI', which comprises three core components that should be met throughout an AI system's life cycle:1

  • the AI should be lawful, meaning it must comply with all applicable laws and regulations;
  • the AI should be ethical, by adhering to ethical principles and values; and
  • the AI should be robust, from both a technical and a social perspective (as even with good intentions, AI can cause unintentional harm).

These three core components underpin the following seven requirements for 'Trustworthy AI', many of which closely align with existing privacy laws, particularly the EU General Data Protection Regulation (GDPR):

  1. Human agency and oversight.
  2. Technical robustness and safety.
  3. Privacy and data governance.
  4. Transparency.
  5. Diversity, non-discrimination and fairness.
  6. Societal and environmental well-being.
  7. Accountability.

The Ethics Guidelines are addressed to all 'stakeholders' (any person or organisation that develops, deploys, uses or is affected by AI), and are intended to go 'beyond a list of ethical principles, by providing guidance on how such principles can be operationalised in socio-technical systems'. The Ethics Guidelines also include practical checklists that stakeholders can use when implementing AI within their organisations.

In addition to the immediate potential uses for stakeholders, the Ethics Guidelines are designed to foster discussion on an ethical framework for AI at a global level, and are likely to be an influential reference document for policy and lawmakers around the world, including those in Australia.

For lawmakers and lawyers, the Ethics Guidelines also provide an insight into how laws may need to adapt to deal with the increasing use and prevalence of AI. While it is not proposed that the Ethics Guidelines be legally binding, the EU Commission has announced that stakeholders will be able to voluntarily endorse and sign up to a 'pilot phase', commencing in June 2019, to test whether the guidelines can be effectively applied to AI systems.

In addition to considering compliance with EU standards and laws, the ability to voluntarily endorse (and apply) the Ethics Guidelines may become an important step for AI businesses in Australia that are considering entry into the EU market.

The US 'Algorithmic Accountability Act'

In early April 2019, the 'Algorithmic Accountability Act' was introduced as a bill to the US Congress.

If passed, the bill would require certain organisations to conduct 'automated decision system impact assessments' and 'data protection impact assessments' for algorithmic decision-making systems (including AI systems). In short, affected organisations would be required to proactively evaluate their algorithms to prevent inaccurate, unfair, biased or discriminatory decisions.

The bill would place regulatory power in the hands of the US Federal Trade Commission, the same agency responsible for consumer protection and antitrust regulation. It would apply to organisations with annual revenue above US$50 million, as well as to data brokers and businesses that hold data on more than one million consumers.

While the introduction of the bill has been praised as an important step towards AI regulation, it is unclear whether or when it will become law, largely due to the current political environment in the US. It is, however, likely to remain an important topic leading into the 2020 US elections, with multiple large US tech companies increasingly under the spotlight for their use of automated decision making systems.

AI policy developments in Australia

As part of the 2018 Federal Budget, the Federal Government pledged to invest almost $30 million towards improving Australia's capability in AI and machine learning.

Of this investment, the Government has allocated approximately $3 million to Data61 (a division of the CSIRO) to develop an AI 'technology roadmap' and an AI 'ethics framework'. It is intended that these documents will help pave the way for AI innovation and policy-making in Australia. It is understood that the remainder of the $30 million investment is to be distributed among several organisations, including Standards Australia and Co-operative Research Centres.

On 5 April 2019, Data61 released its discussion paper titled Artificial Intelligence: Australia's Ethics Framework. The paper aims to encourage a conversation about how Australia develops and uses AI, and makes direct reference to the developments in Europe (and elsewhere), demonstrating that the Australian draft framework has, unsurprisingly, been influenced by the approaches being taken to regulate AI in key overseas jurisdictions.

The Data61 paper bases the proposed ethics framework on eight 'Core Principles for AI', which are designed to guide organisations in the use or development of AI systems:2

  1. Generates net-benefits. The AI system must generate benefits for people that are greater than the costs.
  2. Do no harm. Civilian AI systems must not be designed to harm or deceive people and should be implemented in ways that minimise any negative outcomes.
  3. Regulatory and legal compliance. The AI system must comply with all relevant international, Australian local, state/territory and federal government obligations, regulations and laws.
  4. Privacy protection. Any system, including AI systems, must ensure people's private data is protected and kept confidential plus prevent data breaches which could cause reputational, psychological, financial, professional or other types of harm.
  5. Fairness. The development or use of the AI system must not result in unfair discrimination against individuals, communities or groups. This requires particular attention to ensure the "training data" is free from bias or characteristics which may cause the algorithm to behave unfairly.
  6. Transparency & Explainability. People must be informed when an algorithm is being used that impacts them and they should be provided with information about what information the algorithm uses to make decisions.
  7. Contestability. When an algorithm impacts a person there must be an efficient process to allow that person to challenge the use or output of the algorithm.
  8. Accountability. People and organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of that algorithm, even if the impacts are unintended.

The principles have clear similarities to the seven requirements for 'Trustworthy AI' included in the EU Ethics Guidelines.

The Commonwealth Department of Industry, Innovation and Science has invited written submissions on the proposed Australian ethics framework from industry and other interested parties, addressing questions including:

  • Are the principles put forward in the discussion paper the right ones? Is anything missing?
  • Do the principles put forward in the discussion paper sufficiently reflect the values of the Australian public?
  • As an organisation, if you designed or implemented an AI system based on these principles, would this meet the needs of your customers and/or suppliers? What other principles might be required to meet the needs of your customers and/or suppliers?
  • Would the proposed tools enable you or your organisation to implement the core principles for ethical AI?
  • What other tools or support mechanisms would you need to be able to implement principles for ethical AI?
  • Are there already best-practice models that you know of in related fields that can serve as a template to follow in the practical application of ethical AI?
  • Are there additional ethical issues related to AI that have not been raised in the discussion paper? What are they and why are they important?

Submissions in response to the Data61 discussion paper are due by 31 May 2019 and can be lodged via the Department's Consultation Hub.

What's on the horizon?

The EU Coordinated Plan and the introduction of the 'Algorithmic Accountability Act' as a bill in the US underline the importance that governments are placing on AI and its expected impact on society and the global economy.

It is likely that, in due course, an independent certification process will be developed for AI systems, similar to the 'Conformité Européenne' or 'CE' marking of electronic devices. CE marking has become a respected and internationally recognised certification indicating that a product conforms with particular health, safety and environmental standards.

A certification process for AI systems would clearly need to be more sophisticated and address a wider range of matters, including those covered in the Ethics Guidelines. It would also give suppliers of AI systems a service mark that could assure consumers that the AI system has been independently verified to meet certain standards.

It is certainly an exciting time for regulatory and policy developments relating to AI. We will continue to monitor and report on AI regulatory developments overseas and in Australia.

Footnotes

1. Ethics Guidelines for Trustworthy AI, High-Level Expert Group on Artificial Intelligence (8 April 2019).

2. Data61, Artificial Intelligence: Australia's Ethics Framework (discussion paper), available at https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/supporting_documents/ArtificialIntelligenceethicsframeworkdiscussionpaper.pdf

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
