The Australian Human Rights Commission's Human Rights and Technology Final Report (Report) was recently tabled in the Australian Parliament. It is the culmination of a three-year national project by the Commission, and comes at a time of unprecedented technological growth and investment in artificial intelligence (AI). Corrs hosted a webinar to explore the Report's key findings and recommendations.

The event featured a presentation by Human Rights Commissioner Edward Santow who, while recognising the human rights risks that can arise in AI-informed decision-making, explained that the Commission ultimately recommended reforms to ensure that the law does not treat decision-making informed by AI differently from human decision-making.

The Report 

The timing of the Report is particularly important: on 18 June 2021, the Australian Government announced an A$124 million AI Action Plan to accelerate the development and adoption of new AI technologies by supporting the private sector, growing and attracting AI talent, and directing AI toward solving national challenges. In this push to develop and adopt new AI technologies, it will be critically important for businesses designing and deploying AI to understand and consider the human rights implications.

We outline six of the Commission's key recommendations in respect of the use of AI, and offer guidance for companies seeking to develop or use AI technologies in Australia.

1. Recommended scope of regulation on AI-informed decision-making

The Report recommends regulation of 'AI-informed decision-making', defined as 'a decision, or decision-making process, that is materially assisted by the use of an AI technology or technique, and where the decision has a legal or similarly significant effect for an individual'.

This definition can be broken down into three elements:

a) Decision or decision-making process

The phrase 'decision or decision-making process' recognises that the use of AI in decision-making may affect a person's human rights through:

  • the outcome of a decision (such as an AI system automatically rejecting a bank loan to an individual on the basis of their gender); or

  • the decision-making process, such as an AI system including gender or race as a consideration in a creditworthiness assessment.

The Report recommends regulation to ensure that human rights are appropriately safeguarded against the use of AI in both final decisions and decision-making processes.

b) AI as a material factor in the decision

The Report recommends a materiality threshold to ensure that regulation does not capture uses of AI that play only a trivial role in decision-making, which could chill innovation in important technology without a meaningful corresponding increase in human rights protection. For example, a human decision-maker recording their decision in a word processing application that uses AI would not engage in AI-informed decision-making, because the AI plays no material role in the decision itself.

c) Legal or similarly significant effect for an individual

Finally, the AI-informed decision must have a 'legal or similarly significant effect' for an individual. This expression is taken from the European Union's General Data Protection Regulation (GDPR), where a 'similarly significant effect' has been described as an effect with an impact on an individual's circumstances, behaviour or choices equivalent to that of a legal effect.

The GDPR gives the refusal of an online credit application and e-recruiting practices as examples of decisions with similarly significant effects, and where human rights more broadly are likely to be engaged.

2. Focus on technology-neutral regulation

The Report emphasises that its focus is to recommend technology-neutral regulation - that is, regulation that ensures organisations are suitably accountable for their AI-informed decision-making without imposing more onerous obligations than those that apply to conventional human decision-making.

The Report stresses that technology-neutral regulation is important to avoid a regulatory chilling of beneficial AI innovation in Australia.

Key AI-related recommendations

The Report's most significant AI-related recommendations for companies include the following.

3. Liability for AI-informed decision-making

The Report recommends clarifying the law with a rebuttable presumption that a decision-maker's legal liability for its decision is not affected by the fact that the decision was AI-informed.

The main field of law where this clarification is likely to be relevant is anti-discrimination law. Many AI systems make decisions based on analysis of large datasets of past human decisions. If that data reflects a pattern of bias (for example, due to historically prevalent prejudices), the bias may be replicated in the decisions made by the AI system. If a company makes an AI-informed decision which is discriminatory due to underlying bias in the data set, it may be liable for breach of anti-discrimination law.
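To make the replication mechanism concrete, the sketch below (our illustration, not drawn from the Report) shows how a toy 'model' trained on fabricated historical loan decisions reproduces the bias in that history, even though no protected attribute appears in the model itself; the suburb acts as a proxy:

```python
# A minimal sketch of bias replication (illustrative only; the data,
# suburb names and decision rule are all fabricated).
from collections import defaultdict

# Hypothetical historical loan decisions: (suburb, approved).
# "Suburb" here stands in as a proxy for a protected attribute.
history = [
    ("suburb_a", True), ("suburb_a", True), ("suburb_a", True), ("suburb_a", False),
    ("suburb_b", False), ("suburb_b", False), ("suburb_b", False), ("suburb_b", True),
]

# "Training": learn the historical approval rate for each suburb.
totals, approvals = defaultdict(int), defaultdict(int)
for suburb, approved in history:
    totals[suburb] += 1
    approvals[suburb] += approved  # True counts as 1

def ai_informed_decision(suburb: str) -> bool:
    """Approve if the historical approval rate for the suburb exceeds 50%.

    If past decisions were biased against applicants from suburb_b, that
    bias is reproduced here, even though no protected attribute appears
    explicitly in the model.
    """
    return approvals[suburb] / totals[suburb] > 0.5

print(ai_informed_decision("suburb_a"))  # True  - historical bias in favour
print(ai_informed_decision("suburb_b"))  # False - historical bias against
```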

Importantly, however, the clarification does not alter the ordinary liability position for corporate decision-making. For example, the manufacturer of defective AI technology may still be liable:

  • in negligence, for designing the AI technology in a way that results in discriminatory decisions; or

  • under the statutory guarantee of acceptable quality that the Australian Consumer Law imposes on manufacturers of goods, including AI technology.

The recommended liability scheme follows the fault-based liability norm in Australian law. By contrast, proposed AI regulation in the European Union would impose strict joint and several liability across the entire supply chain: upon companies that use 'high-risk' AI systems, as well as upon manufacturers, distributors and importers of defective products, including AI systems.

4. Transparency of AI-informed decision-making

The Report also aims to ensure that companies cannot rely upon their use of AI-informed decision-making systems to avoid existing legal obligations of transparency. It makes three important recommendations relating to transparency:

  • entitlements to reasons for AI-informed decisions should include an entitlement to both a technical explanation and a plain English explanation of the decision (a simple illustration follows the examples below);

  • users of AI-informed decision-making systems should be required to notify individuals affected by a decision that AI was materially used in making it; and

  • if a party fails to comply with an order from a court or regulator to produce information or documents because of its use of AI, the court or regulator may draw an adverse inference about the decision-making process.

Particular examples of a company being unable to comply with such an order (and hence being subject to an adverse inference) include where:

  • the information relates to a decision made using a 'black box' AI-informed decision-making system which cannot provide reasons for its decisions; or

  • the AI-informed decision-making system is the proprietary technology of a third party (e.g. an outsourced service provider), and the company either does not have access to the relevant information or would infringe the third party's intellectual property rights by disclosing it.
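By way of illustration only, the sketch below shows the two forms of explanation for a simple, interpretable scoring model; the feature names, weights and threshold are hypothetical and are not drawn from the Report:

```python
# A minimal sketch of a technical versus plain English explanation for an
# interpretable scoring model. All feature names, weights and the
# threshold are hypothetical.

WEIGHTS = {"income_to_repayment_ratio": 2.0, "missed_payments": -1.5}
THRESHOLD = 1.0

def decide(applicant: dict) -> tuple[bool, float]:
    """Approve if the weighted score clears the threshold."""
    score = sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return score >= THRESHOLD, score

def technical_explanation(applicant: dict) -> str:
    """Reproduce the full calculation behind the decision."""
    _, score = decide(applicant)
    terms = [f"{WEIGHTS[f]} x {f}({applicant[f]})" for f in WEIGHTS]
    return f"score = {' + '.join(terms)} = {score:.2f} (threshold {THRESHOLD})"

def plain_english_explanation(applicant: dict) -> str:
    """State the outcome and the single most influential factor."""
    approved, _ = decide(applicant)
    top = max(WEIGHTS, key=lambda f: abs(WEIGHTS[f] * applicant[f]))
    outcome = "approved" if approved else "declined"
    return (f"Your application was {outcome}. The factor that most "
            f"influenced the decision was your {top.replace('_', ' ')}.")

applicant = {"income_to_repayment_ratio": 0.4, "missed_payments": 2}
print(technical_explanation(applicant))
print(plain_english_explanation(applicant))
```

A 'black box' system, by contrast, can generate neither form of reasons, which is precisely the situation in which the recommended adverse inference could arise.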

5. Moratorium on biometric technology

Biometric technology uses an individual's physical or biological characteristics to identify or characterise that person. The Report highlights specific human rights risks of biometric technology, particularly facial recognition technology, which:

  • affects individual privacy;

  • can fuel harmful surveillance; and

  • can be prone to high error rates, which may lead to discriminatory outcomes.

Where facial recognition technology is used in what the Report calls 'high-stakes decision making', such as policing, errors in identification can lead to significant risks of injustice and other human rights infringement.

As a result, and in a significant exception to the Commission's technology-neutral approach, the Report recommends specific legislative regulation of the use of biometric technologies, including facial recognition, to provide stronger human rights protections. Until this regulation is in place, the Report recommends a temporary moratorium on the use of any biometric technology, including facial recognition, in high-risk areas.

6. Human rights impact assessments

The Report also recommends that the Australian Government develop a tool to assist companies to undertake human rights impact assessments (HRIAs), in order to:

  • assess the human rights risks raised by their activities;

  • ensure that appropriate measures are put in place to address those risks; and

  • ensure that remedies are available for any human rights infringements.

While HRIAs would be optional for the private sector, they would help companies minimise the legal and human rights risks arising from their use of AI. Companies that use HRIAs to build human rights considerations into their use of AI are also better placed to develop relationships of trust with consumers and other affected individuals. That trust is important in minimising community resistance to the use of AI in business.

Recommended actions for companies

In light of the Commission's recommendations, companies should conduct an internal audit of any AI systems that are already in use or proposed for use. An audit is advisable because AI systems deployed by a company may not be clearly labelled as AI; they may be described only by their ultimate function.

Once the company's AI systems are identified, the audit should determine which of them are involved in AI-informed decision-making. It is particularly important to flag any biometric technology used in high-risk areas, as the use of such technology may be affected by a moratorium if the Commission's recommendations are adopted.
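As a hypothetical illustration of what such an audit might produce, each system could be catalogued in an inventory record capturing the attributes discussed above; the fields and the triage rule below are our own assumptions, not prescribed by the Report:

```python
# A hypothetical AI inventory record and triage rule (illustrative only).
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                    # internal name of the tool
    vendor: str                  # supplier, or "in-house"
    uses_ai: bool                # confirmed during the audit
    ai_informed_decisions: bool  # materially assists decisions with legal
                                 # or similarly significant effects
    uses_biometrics: bool        # e.g. facial recognition
    high_risk_area: bool         # e.g. policing or security screening
    hria_completed: bool         # human rights impact assessment done

def needs_priority_review(r: AISystemRecord) -> bool:
    """Flag the systems the Report's recommendations would most affect."""
    return r.uses_ai and (
        (r.uses_biometrics and r.high_risk_area)
        or (r.ai_informed_decisions and not r.hria_completed)
    )

# A hypothetical entry surfaced by the audit.
screening_tool = AISystemRecord(
    name="candidate-screening", vendor="third-party", uses_ai=True,
    ai_informed_decisions=True, uses_biometrics=False,
    high_risk_area=False, hria_completed=False,
)
print(needs_priority_review(screening_tool))  # True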

For each AI-informed decision-making system, companies may implement legal and human rights safeguards, such as conducting an HRIA for the system and introducing human oversight of its operation to minimise the risk of unexpected bias in its decisions.
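One simple form of human oversight, sketched below purely for illustration (the wrapper, the confidence threshold and the stand-in functions are hypothetical assumptions, not a design taken from the Report), is to route adverse or low-confidence AI-informed decisions to a human reviewer rather than acting on them automatically:

```python
# A minimal human-in-the-loop sketch (illustrative only).
from typing import Callable

def with_human_oversight(
    ai_decide: Callable[[dict], tuple[str, float]],
    human_review: Callable[[dict], str],
    confidence_floor: float = 0.9,
) -> Callable[[dict], str]:
    """Escalate adverse or low-confidence AI outcomes to a human reviewer."""
    def decide(case: dict) -> str:
        outcome, confidence = ai_decide(case)
        if outcome == "decline" or confidence < confidence_floor:
            return human_review(case)  # a human makes the final call
        return outcome
    return decide

# Hypothetical stand-ins for the AI system and the human reviewer.
ai = lambda case: ("decline", 0.95)
reviewer = lambda case: "approve"

decide = with_human_oversight(ai, reviewer)
print(decide({"applicant": "example"}))  # "approve" - escalated to a human
```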

***

As companies look to a future in which AI will play an increasingly important role, it is important that they take proactive steps to mitigate the risks, including human rights risks, that may arise from the burgeoning use of AI in business.

