In this special episode of Essential ESG, Phoebe Wynn-Pope and James North discuss the fast-evolving landscape of responsible artificial intelligence (AI) governance.
As AI technologies continue to transform industries, the legal and regulatory frameworks surrounding them are shifting just as rapidly. With AI regulation in flux globally, Phoebe and James explore why proactive AI governance is critical - not only for managing legal risks and navigating emerging regulations, but also for unlocking AI's productivity potential and building stakeholder trust.
Essential ESG is a podcast series presented by Corrs that breaks down topical issues affecting the rapidly evolving environmental, social and governance landscape in Australia and beyond.
Transcript
Speakers:
- Phoebe Wynn-Pope, Head of Responsible Business and ESG, Corrs Chambers Westgarth
- James North, Head of Technology, Media and Telecommunications, Corrs Chambers Westgarth
Phoebe: Welcome to another edition of Essential ESG coming to you from the lands of the Gadigal people of the Eora Nation. My name is Phoebe Wynn-Pope and I am the Head of Responsible Business and ESG here at Corrs Chambers Westgarth and I am joined today by James North, Head of Technology, Media and Telecommunications, our TMT team. James practices in the converging fields of telecommunications, media and technology and is a key adviser to leading digital infrastructure and some of the most innovative technology companies in the world. James, welcome to the podcast.
James: Thank you, Phoebe.
Phoebe: There's a lot to discuss on AI today. It's got extraordinary potential to transform many areas of our life, but there's also an extraordinary amount of uncertainty about it and the issues that it raises. Before we dive into some of those, which we're going to do, let's get everybody onto the same page. Can you give us some insight into what we mean when we say 'AI' and 'responsible AI'? What do we mean by that?
James: AI obviously is artificial intelligence. It's technology that effectively mimics the reasoning and decision-making power of human beings. It's become very prominent in the last few years. It's been around for a long time, since the 1950s and 1960s, but in a sense it has become democratised recently with the release of large language models like ChatGPT. By democratised I mean accessible to people who are not technically trained, who are not software engineers. Now we have AI on our phones, it's very prominent in our daily lives, and it's become a real talking point amongst the community and in business. And in terms of what responsible AI is, it's not much different to any other form of responsible business, in the sense that you're asking yourself two questions when you're deploying a business initiative or, in this case, deploying technology. Firstly, is it legal? And then, going a step further: we can do this, it's legal, but should we? And that involves a consideration of how the use of this technology is going to impact your stakeholders, such as your employees, your customers and the broader public, considering whether that's going to do harm or good and, if it's going to do harm, considering whether those harms can be mitigated in a way that is reasonable in the circumstances.
Phoebe: So can you give us an example of the sorts of things that have gone wrong or why responsible AI has become such a talking point. Is it in part because there's been some bad experiences, right?
James: One very well-known example is in the area of recruitment. In the early days of the development of AI, some large companies trained algorithms to screen employment applications and, with the best intentions, they wanted to find employees who would be successful in their organisation. So they trained the model using the CVs and the career information of their senior executives and, not surprisingly, those tended to be white men. And so the algorithm learned from that information and started screening out ethnic minorities and women from the job application process. It wasn't designed to do that; just because of the power of the model, it started to take factors such as gender and ethnicity into account when choosing successful employment applicants. So that's a good example of what we mean when we talk about biased data in AI: historical biases from the real world get perpetuated when they're applied in the context of AI technology.
Phoebe: Because that is trained on historical data, not data from today.
James: That's right.
Phoebe: And so a lot of those historical norms are reflected in how it thinks.
James: Yes that's right. Companies nowadays, and even back then, will go to extensive lengths to mitigate or prevent those biases, for example by screening out gender. But the power of the models means that they can sometimes find, inadvertently and without the intention of the engineers developing the model, proxies for that data. For example, the type of sport that someone plays might be a proxy for gender.
Phoebe: Or the school they went to.
James: Might be a proxy for their ethnicity in certain parts of the world.
Phoebe: And sometimes postcodes and things like that.
James: Exactly.
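To make the proxy effect described above concrete, here is a minimal, purely illustrative sketch in Python. The dataset, feature names and outcomes are invented for the example and are not drawn from any real recruitment system: even with gender excluded from the inputs, a correlated feature such as the sport a candidate plays can carry the same signal, so a model trained on biased historical decisions quietly reproduces that bias.

```python
# Purely illustrative sketch of the "proxy" effect: hypothetical toy data,
# not drawn from any real recruitment system or model.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical hiring decisions in which one group was favoured.
# Gender itself is deliberately excluded; "plays_netball" acts as a proxy.
history = pd.DataFrame({
    "plays_netball":    [1, 1, 1, 1, 0, 0, 0, 0],
    "years_experience": [5, 7, 6, 8, 5, 7, 6, 8],
    "hired":            [0, 0, 0, 1, 1, 1, 1, 1],
})

X = history[["plays_netball", "years_experience"]]
y = history["hired"]

model = LogisticRegression().fit(X, y)

# The proxy feature ends up carrying most of the weight, so the model
# reproduces the historical bias even though gender was never an input.
print(dict(zip(X.columns, model.coef_[0])))
```

The specific numbers do not matter; the point is that removing a protected attribute from the inputs does not, by itself, remove the bias from the data.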
Phoebe: So you're working with a lot of clients on this. When they're thinking about these things, why is it important for businesses to really be going through this and thinking about what AI they're deploying and how that's working for them?
James: Well I think it's incredibly important, because the benefits of AI for Australia, and Australian business in particular, are potentially enormous. We have a productivity problem in this country, that's well-known. AI is a potential solution for that, or part of the solution. We have skill shortages in technology in Australia, which have been a constant problem for the Australian tech industry. AI potentially solves part of that problem. But on the flipside, Australians trust AI less than perhaps any other population in the world.
Phoebe: Is that right?
James: By a significant majority, Australians believe that the risks of AI outweigh the benefits. So there is a massive trust gap. And so for organisations to successfully deploy AI in their businesses to get this productivity gain, they're going to have to build trust with their employees, with their customers, with regulators and with the broader public otherwise it's just not going to be successful.
Phoebe: That's really interesting. Is there a particular reason for that trust gap in Australia compared to other countries or are we just ...
James: I don't know the causes of that.
Phoebe: Yes it's fascinating to see ...
James: Yes it is fascinating. You can compare it to a country say like Korea where the trust in AI is much higher. Whether they're a more technology-literate country, I don't know. Australians tend to be very fast adopters of new technology.
Phoebe: It's unusual.
James: It's an interesting statistic.
Phoebe: Very. It sounds like it might be a PhD in there for somebody but probably not you or I! So we've decided to use AI in our business and we're thinking about how to do it responsibly, what are some of the challenges we're going to face and how do we start thinking about deploying responsible AI, what does that look like?
James: Well I think the first step, in the first instance, is identifying the use of AI within the organisation. Because in all organisations there's going to be some form of authorised or unauthorised use of AI. If the company hasn't authorised the use of AI within the organisation, it's almost inevitable that employees are using AI to make their jobs easier and for other reasons. We call that 'shadow IT', where people are using technology outside the formal authorised systems of the organisation, and it is particularly prevalent in AI at the moment. So that's the first thing: understand where AI is being used in your organisation and then put in place mechanisms to govern it, so the organisation has visibility and oversight and control of the use of AI. I think one of the most important things to consider in the first instance is the use case. There is a wide range of use cases for AI and they're always developing. Some are very high risk but many are very low risk. And obviously your governance efforts need to be focused on the higher risk use cases. One of the particular challenges for our clients in the use of AI is that the most powerful AI models have been developed outside Australia. So Australian organisations will tend to be purchasers or deployers of AI models, and they might build an application for a particular use case on top of ChatGPT, for example. But from a legal and probably from an ethical perspective, when you deploy AI into your organisation, you're going to have to be accountable for the outcome to your customers, to your employees, to regulators and to the broader public. When you're using an application or a model that you don't fully understand and potentially can't fully explain to your customers, I think that's a real challenge that companies are grappling with at the moment. To put this in context, if I am an insurance company and I am using AI to triage and make decisions regarding claims by my customers, can I explain to a disaffected customer why the model made a particular decision to, for example, reject the insurance claim? Now that's a fairly extreme example because most insurance companies aren't using AI in that way at the moment; they're using it to triage simple claims and they're generally using it as a tool to assist a claims assessor rather than on its own.
Phoebe: Making a decision by itself.
James: Yes. But that's where we're heading with agentic AI: AI will start to make decisions on behalf of organisations. From a legal perspective and from an ethical perspective, you need to be able to explain to the customer why that particular decision was made, and that's challenging when you're relying on a third party's model to do that.
Phoebe: Yes, because often the algorithms and other things sit in a black box, right, so it's very hard to see inside the black box, is that right?
James: You've been doing your research. That's called 'the black box problem' in AI in the sense that a little bit like the human brain, some of the AI models are incredibly complex and a little bit like a psychologist or a psychiatrist observing a human's decision-making, you can't fully explain it. You can potentially observe it and identify when harmful things are occurring and step in to correct that. It is a little bit the same with the most advanced AI models because the complexity is such that you can't fully explain some of the decision-making.
Phoebe: Right. So you sort of need to have a human in the mix at some point don't you in terms of the transparency and actually being able to explain to somebody why a decision has been made and presumably also, as you say, to do that monitoring. So that if the AI is learning in a direction you don't want it to, that you pick that up early so that aberrations or unintended consequences are picked up?
James: Yes, what you would do in deploying AI for a particular use case is sit down before it was deployed, consider the potential harms to your stakeholders. If the model gets the answer wrong, what's the impact on the individual? Is this something that is very significant for that individual, like an employment decision or a decision as to whether they're insured or not? That's what we would call a relatively high risk use case. So you would want to do a significant amount of testing before that was used in a non-test environment and then you would want to be monitoring that to identify unexpected outcomes and be in a position to step in and correct that outcome if required.
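As a rough illustration of the kind of pre-agreed monitoring James describes, the sketch below uses Python with hypothetical group labels, thresholds and function names invented for the example (it is not any particular vendor's tooling). It flags a deployed model for human review when its rejection rates drift beyond an agreed tolerance, so someone can step in and correct the outcome.

```python
# Purely illustrative monitoring check: group labels, thresholds and outcome
# data are hypothetical and invented for the example.
from collections import defaultdict

def rejection_rates(decisions):
    """decisions: list of (group, was_rejected) tuples from the deployed model."""
    totals, rejections = defaultdict(int), defaultdict(int)
    for group, rejected in decisions:
        totals[group] += 1
        rejections[group] += int(rejected)
    return {g: rejections[g] / totals[g] for g in totals}

def flag_for_review(decisions, baseline, tolerance=0.05):
    """Flag any group whose rejection rate drifts beyond the pre-agreed
    tolerance, so a human assessor can review before harm compounds."""
    rates = rejection_rates(decisions)
    return {g: rate for g, rate in rates.items()
            if abs(rate - baseline.get(g, rate)) > tolerance}

# Example: rates agreed at deployment versus what the live model is doing now.
baseline = {"group_a": 0.10, "group_b": 0.11}
live = ([("group_a", False)] * 90 + [("group_a", True)] * 10
        + [("group_b", False)] * 80 + [("group_b", True)] * 20)
print(flag_for_review(live, baseline))  # -> {'group_b': 0.2}
```

The check itself is trivial; the governance point is that the acceptable baseline and tolerance are agreed before deployment, and a flagged result triggers human intervention rather than an automated fix.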
Phoebe: In terms of regulation and frameworks, I think we could probably talk about those difficulties interminably, because the different mix is so interesting. But in terms of the regulations and frameworks that exist to regulate AI, there's been a lot of discussion about that and about not squashing innovation. I think in the US the President has lifted all regulation of AI at a Federal level and some of the States are trying to regulate it at a State level. So what's happening in Australia, and what do you think is going to happen?
James: So we don't have an AI-specific law like the AI Act in the EU. The government has been considering that, but I think they've been wanting to see how the international regulatory environment evolves. So on the one hand you've got the EU, which some might say adopted a fairly heavy-handed regulatory approach to AI, and then the US, which took a light-touch approach even under President Biden but now, through President Trump's AI action plan, has really focused on removing any regulatory constraints to the growth of AI in the US. Largely for economic reasons, President Trump wants US tech to lead in the AI economy.
Phoebe: Yes.
James: In Australia, we have voluntary guardrails for the use of AI, which organisations can use to guide the responsible use of AI. We also have a set of AI ethics principles, which are based on UNESCO's Ethical Principles for AI, so we are largely aligned with the rest of the world in that area. There's been some discussion about making those voluntary guardrails mandatory, becoming part of a legislative framework, but that is still up for debate at the moment. The Productivity Commission recently released a paper on AI and is recommending the Federal Government pause any specific regulation of AI.
Phoebe: Right. And so for businesses going forward, what are you seeing happen? Are they putting in place policies to try and help them frame the way they're thinking about these things?
James: Yes. I think the challenge is that we've got, not quite a regulatory vacuum, but quite a lot of uncertainty about whether AI will be regulated or not, or whether it will just sit within existing legislative frameworks like the Privacy Act and the Copyright Act. And that uncertainty I think means it's even more important that organisations put in place their own frameworks to govern their use of AI, so they can explain to their stakeholders what they're doing, what they're doing to make sure that the use of AI doesn't harm their stakeholders, and that they are accountable for the use of that AI and are open and transparent about when they're using AI and when they're not. Just going back to that point about Australians' distrust of AI, I think having those responsible AI frameworks in place is really important, partly because of that distrust, but also because of this legislative uncertainty that we have.
Phoebe: Yes, and also presumably to keep enterprise-wide consistency in terms of these sorts of things as well.
James: Absolutely.
Phoebe: Because we know with these very big businesses that they have different business units and if they're all using AI in a different way, that would create a lot of risk.
James: Yes, and going back to that shadow AI problem, does the organisation really know how AI is being used in its business? I would say a lot presently don't fully understand how AI is being used in the business. And the other point I think is really important, and this is not unique to AI: I think boards need to have oversight of technology and data usage in their companies for a number of reasons. AI is one, but there is also increasingly prescriptive and onerous privacy regulation, and expanding cyber risks. We think it is very important for boards to have technology and data governance skills at the board level, potentially through a technology committee, which is quite common in the US but not very common in Australia. What we do see is companies sometimes governing the use of AI and cyber through the audit and risk committee. I think that has a little bit of a negative connotation, or a negative effect, in the sense that you look at some of these technologies through a compliance and risk perspective only and you don't view it in a positive sense, for example...
Phoebe: Through the opportunities lens...
James: Yes, the potential for productivity gains for example.
Phoebe: So James this has been absolutely terrific. I am going to ask you one last question and that is can you tell us an example of some work that you've been doing where AI has been used for the public good?
James: We've recently been involved in a number of projects where AI has been used to defeat scammers, scammers who are looking to deprive Australians of their hard-earned money. Scams are a huge problem for Australia; there are a lot of victims of scamming. And in some senses AI is a threat there, because the scammers are using AI to make their scamming activities more convincing.
Phoebe: Right.
James: You know, impersonating friends, impersonating relatives just to make that ...
Phoebe: Using people's voice to make calls.
James: ...phishing attack or scam really highly convincing. So from that perspective AI is frightening, but a number of our clients in the banking industry and the telecommunications industry have been using AI to defend their customers and the broader Australian public against those attacks, using bots or algorithms that are able to effectively carry on a conversation with the scammer, as if they were the individual being targeted, without the knowledge of the scammer, to delay and disrupt the scammer but also to identify the bank accounts they're using.
Phoebe: Right.
James: So they try to get the scammer to disclose the bank account they want the funds transferred to, in which case any fund transfer by any bank in Australia to that account is blocked. And to see this technology demonstrated and in use is really very impressive. I would find it very difficult to identify that it is not a human being on the end of the phone. They have personalities, the quality of their language is incredible, and they respond instantly to the scammer's questions. It's really a very impressive application of AI for the benefit of Australia generally.
Phoebe: Well that's great to hear. Thank you very much James for joining us today and I think we are going to have to have another session because there are all sorts of other things that we haven't covered like data centres, environmental challenges and a whole range of things. We'll have you back on soon.
James: That sounds fun, thank you.
Phoebe: Thanks.
This podcast is for reference purposes only. It does not constitute legal or other advice and should not be relied upon as such. You should always obtain legal advice about your specific circumstances.