ARTICLE
11 November 2024

Privacy and AI – new guidance from OAIC

Holding Redlich

Contributor

New guidance to help organisations and developers comply with their privacy obligations when using AI.

The Office of the Australian Information Commissioner (OAIC) has released new guidance to help organisations and developers comply with their privacy obligations when using AI.

The guidance is divided into two parts – the first targets developers in the initial stages of developing and fine-tuning their AI models, while the second is directed at organisations that deploy AI in their business operations.

AI practices that could breach the Australian Privacy Principles

The guidance sets out ways in which AI development, deployment and use can amount to breaches of the Australian Privacy Principles (APPs). For example:

  • failing to understand the risks associated with using AI could put your organisation at risk of breaching its obligations under APP 1 (the obligation to manage personal information in an open and transparent way)
  • AI products that generate or infer personal information are collecting that information, but not directly from the individuals concerned. APP 3 requires that personal information be collected directly from the individual unless it is unreasonable or impracticable to do so. To comply with APP 3, organisations must be able to demonstrate both that direct collection is unreasonable or impracticable and that the collection is reasonably necessary for their functions or activities
  • data sets, some of which include personal information, are used to train AI. When an individual provides an organisation with their personal information, they may not expect it to be used to train an AI model, which could breach APP 6
  • it might be difficult for an individual to understand how AI products work and where the data used by the products will be transferred to, which risks breaching APPs 5 and 6
  • personal information can become outdated. APP 10 requires an organisation to ensure that the personal information it collects, uses and discloses is accurate, relevant and up to date. This can be difficult where the information has been used to train an AI model, as extracting or correcting the relevant record may be impossible. Organisations should make clear to end users that output generated by AI products may be inaccurate.

While there is an obvious need for organisations to ensure compliance with their legislative obligations, including compliance with the APPs, it is also prudent to assume that contractors may access and use AI products when supplying goods or services to your entity. Organisations should consider whether their standard contract terms appropriately address privacy risks that arise from AI use by contractors.

Access the OAIC's guidance for developers here and guidance for businesses using AI here.

If you have any questions about the guidance or require assistance with privacy complaints and breach allegations, advice on AI usage in your business, reviewing commercial contract terms, drafting Privacy Impact Assessments or general privacy advice, please get in touch with our team below.

This publication does not deal with every important topic or change in law and is not intended to be relied upon as a substitute for legal or other advice that may be relevant to the reader's specific circumstances. If you have found this publication of interest and would like to know more or wish to obtain legal advice relevant to your circumstances please contact one of the named individuals listed.

