Australian Government Contemplates Asimov's Omnibus

K&L Gates

Contributor


Amid the rapid acceleration of tools like ChatGPT and global calls for tailored regulation of artificial intelligence tools, the Australian Federal Government has released a discussion paper on the safe and responsible use of AI. The Government is consulting on what safeguards are needed to ensure Australia has an appropriate regulatory and governance framework to manage the potential risks, while continuing to encourage the uptake of innovative technologies.

A key focus of the discussion paper is transparency. Large language models (LLMs) like ChatGPT and other machine learning algorithms are often opaque, relying on the datasets on which they have been trained to deliver outcomes in ways that are difficult to analyse from the outside. The Government is exploring the extent to which businesses using AI tools should be required to publicly disclose their training datasets and how decisions are made, as well as allowing consumers affected by AI-powered decisions to request detailed information about the rationale. This builds on proposals from the Attorney-General's review of the Privacy Act to require businesses to disclose when they use personal information in automated decision-making systems and, when requested, to inform consumers how automated decisions are made, following similar requirements under the GDPR in Europe.

The discussion paper also highlights the need for supervision and oversight of AI systems as they are rolled out. For AI use cases that have an enduring impact on individuals, such as the use of AI in hiring and employee evaluation, the discussion paper contemplates frequent internal monitoring of the tool, as well as training for relevant personnel.

Consultation closes on 26 July 2023, and submissions can be made to DigitalEconomy@industry.gov.au.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
