For time-poor readers, here's the TL;DR. Artificial Intelligence (AI) presents unique regulatory and other risks that need to be managed. The law in Australia today already applies to AI, but regulatory changes have been proposed. The opportunity is greater than the risks: learn to control the risks associated with using AI now, or risk losing your job in years to come. The risks posed to Australian Financial Services licensees (AFSLs) and Australian Credit licensees (ACLs) are nuanced, and we explore how to manage some of those risks in this article.
Now, let's get into the detail. I'll start with some stats and a true story. Consider the following:
- According to Deloitte, more than a quarter of the Australian economy will be disrupted by generative AI, meaning nearly $600 billion of economic activity faces disruption.1 More than two-thirds of Australian businesses report using, or actively planning to use, AI systems in their business operations.2 McKinsey & Company estimates that AI and automation could contribute an additional $170 billion to $600 billion to Australia's GDP by 2030,3 alongside an associated increase in labour productivity of 0.1 to 1.1 percentage points every year4 until 2030. And an International Monetary Fund report estimated that AI might impact 60% of jobs in developed nations such as Australia.5 The point? Generative AI produces opportunities that you, as a licensee, can seize today.
- UCLA Professor Eugene Volokh asked ChatGPT: "Whether sexual harassment by professors has been a problem at American law schools; please include at least five examples, together with quotes from relevant newspaper articles." The generative AI program replied with an answer explaining that a law professor, Mr Turley, of Georgetown University Law Center, was accused of sexual harassment by a former student during a class trip to Alaska, citing a Washington Post article dated 21 March 2018. But wait, there's more. Mr Turley has never taught at Georgetown University, the Washington Post article doesn't exist, Mr Turley has never been to Alaska with any student, and he has never been accused of sexual harassment.6 The point? Generative AI sometimes produces unreliable data.7 This is an example of poor system performance – where errors in an AI output have caused distress and reputational harm – one of six harm categories identified by Professor Nicholas Davis and Lauren Solomon in a recent report titled "The State of AI Governance in Australia".8 Those harm categories contribute to three organisational risks that are amplified by AI systems: commercial, reputational and regulatory.
This wouldn't be an AI article if I didn't ask ChatGPT for help. So, I asked the machine what financial service providers want to know about AI. It said (this is the short version):
One of the main overarching questions they often seek to answer is: "How can AI be effectively integrated into our financial services to improve efficiency, accuracy, and customer experience while complying with regulatory requirements?"
It then broke the question into 10 sub-questions, including "Which Specific AI Applications Should We Implement? How Can We Ensure Data Privacy and Security in AI Solutions? What Is the Cost-Benefit Analysis of AI Implementation? How Do We Manage Regulatory Compliance?...", and so on.
This article touches on the regulatory risk component in the context of AFSLs and ACLs.
As Australia has yet to legislate AI-specific laws, AI is currently regulated by laws that attempt to be technology-neutral. We have extracted the following examples from The State of AI Governance in Australia, below (used with permission):9
| When an AI system (or director) ... | These laws may apply |
| --- | --- |
| Misuses data or personal information | |
| Produces an incorrect output | |
| Provides misleading advice or information | |
| Provides unfair or unreasonably harsh treatment | |
| Discriminates based on a protected attribute | |
| Excludes an individual from access to a service | |
| Restricts freedoms such as expression, association or movement | |
| Causes physical, economic, or psychological harm | |
| Directors fail to ensure that effective risk management and compliance systems are in place to assess, measure and manage any risks and impacts associated with a company's use of AI | Corporations Act 2001 s180 |
| Directors failing to be informed about the subject matter and rationally believe their decisions are in the best interests of the company, having properly considered the potential impact of those decisions | Corporations Act 2001 s181 |
Here's my extra section for AFSLs and ACLs, covering laws in addition to those described above:
| When an AI system ... | These laws may apply |
| --- | --- |
| 1. Provides general financial product advice to a retail client | The obligations under the Corporations Act 2001 regarding: (a) false or misleading representations (there are also obligations under the ASIC Act 2001 that would apply, such as misleading or deceptive conduct and unconscionable conduct) |
| 2. Provides personal financial product advice to a retail client | The obligations under the Corporations Act 2001 regarding: (a) the matters covered in items 1(a)-(e) above |
| 3. Suggests a credit contract to a consumer, or assists a consumer to apply for a credit contract (these are forms of "credit assistance") | The obligations under the National Consumer Credit Protection Act 2009 regarding: (a) general conduct obligations |
This table is far from exhaustive and, depending on the interest it generates, we may release more guidance on how other activities are captured – for example, by AML/CTF obligations.
Is the Australian Government legislating for, or regulating, AI specifically?
The Australian Federal Government's commitment of $41.2 million in the 2023-24 Budget to support the responsible deployment of AI in the national economy indicates that it has turned its mind to this issue.13, 14
Similarly, ASIC has said that as part of its priorities for the supervision of market intermediaries in 2022-23, "We are undertaking a thematic review of artificial intelligence/machine learning (AI/ML) practices and associated risks and controls among market intermediaries and buy-side firms, including the implementation of AI/ML guidance issued by the International Organization of Securities Commissions (IOSCO)".15 In a recent address, ASIC chair Joe Longo reiterated ASIC's aims in the face of "rapidly and constantly evolving AI". They are:
- The safety and integrity of the financial system
- Positive outcomes for consumers and investors.16
Meanwhile, the Department of Industry, Science and Resources (DISR) released its interim response to consultations on its "Supporting responsible AI: discussion paper" on 17 January 2024. It concluded that the current laws and regulatory framework do not satisfactorily address AI risks, particularly the prevention of those risks before they occur.
It outlined the government's intention to consider introducing mandatory obligations on the development or use of AI systems that present a high risk. Importantly, the report draws a distinction between high-risk and low-risk applications of AI, with heavier obligations imposed on higher-risk applications, although it does not define what constitutes a high-risk or low-risk generative AI system.
The interim report sets out five principles to guide the Government's interim response. These are:
- calibrating obligations on AI development and use to the level of risk posed by the AI
- balancing the need for innovation and competition with community interest considerations
- collaborating openly with experts and the public
- supporting global action on AI risks in line with the Bletchley Declaration17
- placing the needs of people and communities at the forefront of considerations.
The government has indicated its intention to ask the National AI Centre to create an AI Safety Standard to give practical guidance for industry to ensure AI systems being developed are safe and secure. It aims to work alongside industry to evaluate voluntary labelling and watermarking of AI-generated materials. Lastly, the DISR will establish an interim expert advisory group to further support the proposed AI guardrails.
All of the above is happening alongside existing regulatory reviews. The report indicates that submissions will be considered as part of related reforms, including the privacy law reforms, new laws on misinformation and disinformation, the review of the Online Safety Act 2021, prospective automated vehicle regulations, ongoing intellectual property reviews, competition and consumer law work relating to digital platforms, a framework for generative AI in schools, and the Government's cybersecurity strategy.
So, how do AFSLs and ACLs manage these regulatory risks?
As a licensee, you already have a risk management framework to help you comply with your general obligation to have adequate risk management systems in place. We think it's time to dust it off and identify two new risks:
- The risk of missing the opportunities that AI presents; and
- The regulatory risks associated with using AI.
Remember, most of your staff are already using AI. So, you probably need to get onto this now.
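If your risk register lives in software (or you want it to), here's one way those two new entries might look. This is a minimal Python sketch; the rating scale, owner titles and controls are our illustrative assumptions, not prescribed values.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    risk: str
    likelihood: str  # illustrative scale: "low" / "medium" / "high"
    impact: str
    owner: str       # hypothetical role titles
    controls: list[str] = field(default_factory=list)

# The two new AI risks discussed above, recorded as register entries.
register = [
    RiskEntry(
        risk="Missing the opportunities that AI presents",
        likelihood="medium", impact="high", owner="Head of Strategy",
        controls=["Quarterly AI opportunity scan reported to the board"],
    ),
    RiskEntry(
        risk="Regulatory breach arising from using AI (e.g. misleading outputs)",
        likelihood="medium", impact="high", owner="Compliance Manager",
        controls=["AI use policy", "Representative training", "AI output supervision"],
    ),
]

for entry in register:
    print(f"{entry.risk} -> controls: {', '.join(entry.controls)}")
```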
Ways to control both risks include:
- Create a policy. For starters, you should develop an AI policy for representatives. It should tell them not to do things like entering personally identifiable information or sensitive information into a search engine or AI system (a simple screening sketch appears after this list). Take a look at the Government's interim guidance for agencies on government use of generative Artificial Intelligence platforms for some more ideas.18
- Train representatives – on the policy, and on the law more broadly. The training arm of Holley Nethercote runs many half-day sessions on emerging regulatory risks and opportunities, including AI. In late 2023, we trained over 100 licensees across multiple sessions, discussing reasonable controls to mitigate AI risks, and at the time of writing (mid-2024) we have run, and are running, many similar in-house sessions for licensees. We also have a regulatory update service, via our HN Hub, which includes legal commentary on the changes (it's not just a news service). I also personally recommend listening to podcasts and paid subscription services like Exponential View.
- Supervise. If you decide to use an AI system, think of monitoring and supervising AI systems like you're a parent:
- When they're young (0-4), you're the caregiver. You feed them and change their nappies lots of times – close monitoring required!
- When they're pre-teen, you're the cop. You set the rules. As they approach teens, they'll push back a bit, but you'll still need to agree on minimum standards.
- When they're teenagers, you're their coach. You stay involved, check in, review, and give feedback.
- When they're adults, you're their consultant. You never really stop being a parent. You need to check in regularly to see how they're going.
Every analogy falls down eventually, and being a "parent" is no exception. In terms of supervising a healthy, grown-up AI system, you need ongoing monthly reporting, measurement of error rates, evidence that staff are checking underlying assumptions, and a range of other things that exceed the scope of this article (the sketches below give a flavour). Initially, you'll need to engage lawyers: we've been asked to review the outputs of AI bots, and it's not a quick job.
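On the policy point above: one practical control is to screen draft prompts for obvious personal information before they go anywhere near an AI system. Here's a minimal Python sketch; the patterns are our illustrative assumptions and nowhere near a complete PII detector, so treat it as a backstop to the policy, not a substitute for it.

```python
import re

# Illustrative patterns only -- a production control would use a vetted
# PII-detection library and cover many more identifier types.
PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "AU phone number": re.compile(r"(?:\+61|0)[23478]\d{8}\b"),
    "tax file number": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the categories of likely PII found in a draft AI prompt."""
    return [label for label, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

draft = "Summarise the file for John Smith, TFN 123 456 789, john@example.com"
findings = screen_prompt(draft)
if findings:
    print("Blocked: prompt appears to contain " + ", ".join(findings))
else:
    print("No obvious PII detected -- human judgement still required")
```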
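And on supervision: measuring error rates implies sampling AI outputs, having staff record a verdict on each one, and escalating when the rate drifts. A minimal sketch, assuming a 2% escalation threshold and the record fields below (both our inventions):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class OutputReview:
    reviewed_on: date
    output_id: str       # hypothetical identifier for the sampled output
    human_verdict: str   # "correct" or "incorrect" after staff checking

ERROR_RATE_THRESHOLD = 0.02  # assumed escalation trigger: >2% of sample wrong

def monthly_error_rate(reviews: list[OutputReview], year: int, month: int) -> float:
    """Error rate across the outputs sampled and reviewed in a given month."""
    sample = [r for r in reviews
              if r.reviewed_on.year == year and r.reviewed_on.month == month]
    if not sample:
        raise ValueError("no outputs were reviewed this month")
    return sum(r.human_verdict == "incorrect" for r in sample) / len(sample)

reviews = [
    OutputReview(date(2024, 6, 3), "advice-123", "correct"),
    OutputReview(date(2024, 6, 9), "advice-124", "incorrect"),
    OutputReview(date(2024, 6, 21), "advice-125", "correct"),
]
rate = monthly_error_rate(reviews, 2024, 6)
print(f"June error rate: {rate:.1%}")
if rate > ERROR_RATE_THRESHOLD:
    print("Escalate to compliance and widen next month's sample")
```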
AI thought-leader, and former Chief Business Officer of Google X, Mo Gawdat says that people won't lose their jobs to AI; people will lose their jobs to people who use AI.19 So, what are you waiting for?
How can we help?
We can:
- Help licensees develop their risk management program from a regulatory risk perspective, with respect to AI opportunities and risks.
- Review licensees' AI systems in light of regulatory obligations.
- Run in-house training on regulatory risks associated with AI, and how to manage them.
- Keep licensees up-to-date regarding regulatory changes via our HN Hub.
Footnotes:
1 Generative AI: A quarter of Australia's economy faces significant and imminent disruption | Deloitte Australia
2 HTI The State of AI Governance in Australia – 31 May 2023.pdf | University of Technology Sydney (uts.edu.au)
3 Supporting responsible AI: discussion paper – Consult hub (industry.gov.au)
4 Generative AI and the future of work in Australia | McKinsey
5 AI Will Transform the Global Economy. Let's Make Sure It Benefits Humanity. (imf.org)
6 ChatGPT falsely accused me of sexual harassment. Can we trust AI? (usatoday.com)
7 A similar event happened closer to home, more recently: Victorian mayor Brian Hood was wrongly named by ChatGPT as a guilty party who served prison time over a bribery scandal. The small issue: Brian was the whistleblower in that case and was never charged. This instance of a "hallucination" (where AI generates incorrect or misleading results) posed a considerable reputational risk to an individual whose profession depends on reputation.
8 HTI The State of AI Governance in Australia – 31 May 2023.pdf | University of Technology Sydney (uts.edu.au)
9 HTI The State of AI Governance in Australia – 31 May 2023.pdf | University of Technology Sydney (uts.edu.au), page 36.
10 ASIC's Regulatory Guide 255: Providing digital financial product advice to retail clients provides a thorough summary of what ASIC expects in terms of complying with Corporations Act obligations.
11 Corporations Amendment (Professional Standards of Financial Advisers) Act 2017 (legislation.gov.au)
12 For example, a responsible manager needs at least two years of relevant problem-free experience, and either a credit industry qualification to at least Certificate IV level, or other higher-level qualifications. See RG 206 Credit licensing: Competence and training | ASIC.
13 The allocation of funding is hardly impressive. China's spending on AI, for example, is expected to surpass $38 billion by 2027: To really grasp AI expectations, look to the trillions being invested | World Economic Forum (weforum.org)
14 Investments to grow Australia's critical technologies industries | Department of Industry, Science and Resources
15 ASIC's priorities for the supervision of market intermediaries in 2022–23 | ASIC
16 Address by ASIC Chair Joe Longo (asic.gov.au)
17 The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023 | Department of Industry, Science and Resources
18 Interim guidance for agencies on government use of generative Artificial Intelligence platforms: https://architecture.digital.gov.au/guidance-generative-ai
19 Mo Gawdat podcast: EMERGENCY EPISODE: Ex-Google Officer Finally Speaks Out On The Dangers Of AI! – Mo Gawdat | E252 – YouTube.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.