Is AI a threat or opportunity for Australian Financial Services and Australian Credit Licensees, today?

For time-poor readers, here's the TL;DR (too long; didn't read) version: Artificial Intelligence (AI) presents unique regulatory and other risks that need to be managed. The law in Australia today applies to AI, but regulatory changes will come. The opportunity is greater than the risks. Learn to use AI now, or risk losing your job in years to come.

Now, let's get into the detail. I'll start with some stats and a true story. Consider the following:

  1. According to Deloitte, more than a quarter of the Australian economy will be disrupted by generative AI, which means nearly $600 billion of economic activity faces disruption.1 Also, more than two-thirds of Australian businesses report using or actively planning to use AI systems in their business operations.2 The point? Generative AI produces opportunities that you, as a licensee, can seize today.
  2. UCLA Professor Eugene Volokh asked ChatGPT: "Whether sexual harassment by professors has been a problem at American law schools; please include at least five examples, together with quotes from relevant newspaper articles." The generative AI program replied with an answer explaining that a law professor, Mr Turley of Georgetown University Law Center, was accused of sexual harassment by a former student during a class trip to Alaska. The citation for the claim was a Washington Post article dated 21 March 2018. But wait, there's more. Importantly, Mr Turley has never taught at Georgetown University. The Washington Post article doesn't exist. Mr Turley has never been to Alaska with any student, and he has never been accused of sexual harassment.3 The point? Generative AI sometimes produces unreliable data. This is an example of poor system performance, where errors in an AI output have caused distress and reputational harm. It is one of six harm categories identified by Professor Nicholas Davis and Lauren Solomon in a recent report titled "The State of AI Governance in Australia".4 Those harm categories contribute to three organisational risks that are amplified by AI systems: commercial, reputational and regulatory.

This wouldn't be an AI article if I didn't ask ChatGPT for help. So, I asked the machine what financial service providers want to know about AI. It said (this is the short version):

One of the main overarching questions they often seek to answer is: "How can AI be effectively integrated into our financial services to improve efficiency, accuracy, and customer experience while complying with regulatory requirements?"

It then broke the question into 10 sub-questions, including "Which Specific AI Applications Should We Implement? How Can We Ensure Data Privacy and Security in AI Solutions? What Is the Cost-Benefit Analysis of AI Implementation? How Do We Manage Regulatory Compliance?...", and so on.

This article touches on the regulatory risk component in the context of Australian Financial Services Licensees (AFSLs) and Australian Credit Licensees (ACLs).

As Australia has yet to legislate AI-specific laws, AI is currently regulated by laws that attempt to be technology-neutral. We have extracted the following examples from The State of AI Governance in Australia (used with permission):5

When an AI system (or director) ... These laws may apply
Misuses data or personal information
  • Privacy laws
  • Data-security obligations
  • Security of Critical Infrastructure Act
  • Risk management obligations
  • Confidentiality obligations
  • IP laws
Produces an incorrect output
  • Australian Consumer Law – product liability (if the organisation is a manufacturer) and consumer guarantees
  • Privacy laws if the output is personal information
Provides misleading advice or information
  • Australian Consumer Law – misleading and deceptive conduct, unconscionable conduct, false and misleading representation, consumer guarantees
Provides unfair or unreasonably harsh treatment
  • Australian Consumer Law – unconscionable conduct
  • Australian Consumer Law – consumer guarantees
Discriminates based on a protected attribute
  • Anti-discrimination laws
Excludes an individual from access to a service
  • Anti-discrimination laws if the exclusion relates to a protected attribute
  • Essential service obligations (e.g. electricity hardship and disconnection obligations)
  • Australian Consumer Law – unconscionable conduct
Restricts freedoms such as expression, association or movement
  • Human rights acts or charters in Victoria, Queensland, and ACT
Causes physical, economic, or psychological harm
  • Negligence, if there is a breach of a duty of care that causes harm
  • Work, health, and safety laws
  • Australian Consumer Law – product liability (if the organisation is a manufacturer) and consumer guarantees
Directors fail to ensure that effective risk management and compliance systems are in place to assess, measure and manage any risks and impacts associated with a company's use of AI
  • Corporations Act s180
Directors fail to be informed about the subject matter and to rationally believe their decisions are in the best interests of the company, having properly considered the potential impact of those decisions
  • Corporations Act s181

Here's my extra section, for AFSLs and ACLs, which is in addition to those laws described above:

When an AI system ... These laws may apply
1. Provides general financial product advice to a retail client
The obligations under the Corporations Act 2001 regarding:

a. False or misleading representations (there are also obligations under the ASIC Act 2001 that would apply, such as misleading or deceptive conduct and unconscionable conduct)

b. A licensee's general obligations, including to provide services efficiently, honestly and fairly, and to comply with the conditions on its licence, comply with the financial services laws, maintain competence, and have adequate resources to supervise6 the provision of financial services

c. The Design and Distribution regime to the extent any financial products are captured by that regime

d. Having an AFSL that covers the provision of financial product advice with respect to any financial products that the AI system provides advice on

e. Provision of a Financial Services Guide

f. Provision of a general advice warning

g. The AI-bot's trainer and at least one Responsible Manager meeting the training requirements of RG 146

2. Provides personal financial product advice to a retail client
The obligations under the Corporations Act 2001 regarding:

a. The matters covered in item 1(a)-(e) above.

b. Provision of a Statement of Advice, and possibly a Product Disclosure Statement

c. Compliance with the Best Interests obligations (best interests duty, appropriateness requirement, conflicts priority rule, and more)

d. The AI-bot's trainer and at least one Responsible Manager meeting the professional standards imposed by a bundle of laws. For example, the human is likely to need to meet the requirements of a "relevant provider",7 which includes complying with the Code of Ethics, holding certain bachelor level qualifications, and being included on the financial adviser register

3. Suggests a credit contract to a consumer, or assists a consumer to apply for a credit contract (these are forms of "credit assistance")
The obligations under the National Consumer Credit Protection Act 2009 regarding:

a. General conduct obligations

b. Provision of a Credit Guide, Credit Proposal and Credit Quote (if necessary)

c. Where the credit assistance relates to credit contracts secured by mortgages over residential property – meeting the best interests obligations

d. Meeting responsible lending obligations, including preparing a written assessment of suitability

e. Having an ACL that covers the provision of the credit activities with respect to any activities that the AI system performs

f. The AI-bot's trainer and at least one Responsible Manager meeting minimum competency requirements8

g. The prohibitions on misleading or deceptive conduct and unconscionable conduct, and other obligations under the ASIC Act 2001, as well as the Design and Distribution regime under the Corporations Act 2001

This table is far from exhaustive and, depending on the interest it generates, we may release more guidance on how other activities are captured, for example, by AML/CTF obligations.

Is the Australian Government legislating for, or regulating, AI specifically?

Australia has been slow to get off the mark in regulating AI. The Department of Industry, Science and Resources released a discussion paper titled "Supporting responsible AI: discussion paper" with a closing date of 4 August 2023. We're not aware of any recommendations flowing from that specific consultation at the time of writing, but more consultations like this will come, as will regulation. This is clear from the Federal Government's $41.2 million commitment to support the responsible deployment of AI in the national economy in the 2023-24 Budget.9 ASIC has said that as part of its priorities for the supervision of market intermediaries in 2022-23, "We are undertaking a thematic review of artificial intelligence/machine learning (AI/ML) practices and associated risks and controls among market intermediaries and buy-side firms, including the implementation of AI/ML guidance issued by the International Organization of Securities Commissions (IOSCO)".10

So, how do AFSLs and ACLs manage these regulatory risks?

As a licensee, you already have a risk management framework to help you comply with your general obligation to have adequate risk management systems in place. We think it's time to dust it off and identify two new risks:

  1. The risk of missing the opportunities that AI presents; and
  2. The regulatory risks associated with using AI.

Remember, most of your staff are already using AI. So, you probably need to get onto this now.

Ways to control both risks include:

  1. Train. The training arm of Holley Nethercote is running numerous half-day sessions on emerging regulatory risks and opportunities, including AI. In November 2023 alone, we will run six sessions for over 100 licensees across five states. We also have a regulatory update service, delivered via our HN Hub, which includes legal commentary on the changes. I personally recommend subscribing to podcasts and other useful information sources as well.
  2. Policy. For starters, you should develop an AI policy for representatives. It should tell them not to do things like putting personally identifiable information or sensitive information into a search engine or AI system. Take a look at the Government's interim guidance for agencies on government use of generative Artificial Intelligence platforms for some more ideas.11
  3. Supervise. If you decide to use an AI system, think of monitoring and supervising AI systems like you're a parent:
    • When they're young (0-4), you're the caregiver. You feed them and change their nappies lots of times – close monitoring required!
    • When they're pre-teen, you're the cop. You set the rules. As they approach teens, they'll push back a bit, but you'll still need to agree on minimum standards.
    • When they're teenagers, you're their coach. You stay involved, check in, review, and give feedback.
    • When they're adults, you're their consultant. You never really stop being a parent. You need to check in regularly to see how they're going.

Every analogy falls down eventually, and this one's no exception. In terms of supervising a healthy, grown-up AI system, you need to have ongoing monthly reporting, measurement of error rates, evidence that staff are checking underlying assumptions, and a bunch of other things that exceed the scope of this article. Initially, you need to engage lawyers. We've been asked to review the outputs of AI bots, and it's not a quick job.

AI thought leader and former Chief Business Officer of Google X, Mo Gawdat, says that people won't lose their jobs to AI; people will lose their jobs to people who use AI.12 So, what are you waiting for?

How can we help?

We can:

  1. Help licensees develop their risk management program from a regulatory risk perspective, with respect to AI opportunities and risks.
  2. Review licensees' AI systems to see whether their outputs trigger certain regulatory obligations.
  3. Run in-house training on regulatory risks associated with AI, and how to manage them.

P.S. I'm an evangelistic AI nerd. I think AI is the biggest technological advance that we will experience in our lifetimes. If you're an AFSL or ACL holder and would like to discuss AI and its role in financial services, send me an email (pauld@hnlaw.com.au) and let's get a coffee. I'm based in our Melbourne office but am regularly at our Sydney office and elsewhere.

Footnotes

1 Generative AI: A quarter of Australia's economy faces significant and imminent disruption | Deloitte Australia

2 HTI The State of AI Governance in Australia – 31 May 2023.pdf | University of Technology Sydney (uts.edu.au)

3 ChatGPT falsely accused me of sexual harassment. Can we trust AI? (usatoday.com)

4 HTI The State of AI Governance in Australia – 31 May 2023.pdf | University of Technology Sydney (uts.edu.au)

5 HTI The State of AI Governance in Australia – 31 May 2023.pdf | University of Technology Sydney (uts.edu.au) page 36.

6 ASIC's Regulatory Guide 255: Providing digital financial product advice to retail clients, provides a thorough summary of what ASIC expects in terms of complying with Corporations Act obligations.

7 Corporations Amendment (Professional Standards of Financial Advisers) Act 2017 (legislation.gov.au)

8 For example, a responsible manager needs at least two years of relevant problem-free experience, and either a credit industry qualification to at least Certificate IV level, or other higher-level qualifications. See RG 206 Credit licensing: Competence and training | ASIC.

9 Investments to grow Australia's critical technologies industries | Department of Industry, Science and Resources

10 ASIC's priorities for the supervision of market intermediaries in 2022–23 | ASIC

11 https://architecture.digital.gov.au/guidance-generative-ai

12 Mo Gawdat podcast: EMERGENCY EPISODE: Ex-Google Officer Finally Speaks Out On The Dangers Of AI! – Mo Gawdat | E252 – YouTube.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.