ARTICLE
1 June 2025

Having AI On Your Privacy Policy

Spruson & Ferguson

Contributor

Established in 1887, Spruson & Ferguson is a leading intellectual property (IP) service provider in the Asia-Pacific region, with offices in Australia, China, Indonesia, Malaysia, Philippines, Singapore, and Thailand. They offer high-quality services to clients and are part of the IPH Limited group, which includes various professional service firms operating under different brands in multiple jurisdictions. Spruson & Ferguson is an incorporated entity owned by IPH Limited, with a strong presence in the industry.

Key takeaways

  1. The Privacy Policy for an organisation should explicitly inform individuals about the use of their personal information in connection with AI.
  2. Any collection notice should specify:
     • any AI-related purposes for which personal information is being collected,
     • the organisation's use of AI tools to generate personal information (where applicable), and
     • any disclosures of personal information in connection with AI tools.
  3. If the AI developer has access to personal information processed through the AI tool, this should be disclosed.
  4. Any public-facing AI tools (such as chatbots) should be clearly identified as such to external users, such as customers.

AI and Personal Information: Key Transparency Obligations

As organisations increasingly integrate AI into their operations, transparency in privacy policies and collection statements is more critical than ever to ensure compliance with Australian privacy laws. Further to our recent article regarding using patient data to train AI, the Office of the Australian Information Commissioner (OAIC) has issued guidelines on privacy and the use of commercially available AI products (OAIC AI Guidelines). These guidelines underscore the importance of clearly informing individuals about how their personal information is used when AI tools are involved, to ensure alignment with the Australian Privacy Principles (APPs) under the Privacy Act 1988 (Cth) (Privacy Act).

Recent amendments to the Privacy Act further reinforce these obligations by introducing a mandatory obligation for organisations to include information in their privacy policies about how AI technologies use personal information to make automated decisions that could reasonably be expected to significantly affect individuals' rights or interests.

The OAIC AI Guidelines, published on 21 October 2024, assist organisations to comply with their privacy obligations when using commercially available AI products. The guidelines cover AI adoption across various contexts, from purchasing AI tools to using publicly available tools. There are a number of APPs that may be relevant with respect to the use and adoption of AI by an organisation:

  • APP 1 outlines the requirements for organisations to manage personal information in an open and transparent way, including to take reasonable steps to implement practices, procedures and systems to ensure they comply with the APPs, and to have a clearly expressed and up-to-date privacy policy.
  • APP 5 requires organisations that collect personal information about an individual to take reasonable steps either to notify the individual of certain matters or to ensure the individual is aware of those matters, for example through the use of collection notices.
  • APP 6 requires an organisation that holds personal information to only use or disclose the information for a particular purpose for which it was collected (known as the 'primary purpose' of collection), unless an exception applies.

Privacy Policies and Collection Notices

The OAIC AI Guidelines emphasise the need to establish policies and procedures to facilitate transparency, enhance accountability, and ensure good privacy governance. Organisations should ensure that:

  1. Privacy policies explicitly inform individuals about the use of their personal information in connection with AI.
  2. APP 5 collection notices specify any AI-related purposes for which personal information is being collected, the organisation's use of AI tools to generate personal information (where applicable), as well as any disclosures of personal information in connection with AI tools.
  3. If the AI developer has access to personal information processed through the AI tool, this should be disclosed.
  4. Any public-facing AI tools (such as chatbots) are clearly identified as such to external users, such as customers.

In addition, the OAIC AI Guidelines specify that organisations should:

  • establish procedures for explaining AI-related decisions and outputs to affected individuals,
  • train staff to understand how AI tools generate, collect, use or disclose personal information, and
  • ensure staff know how to provide meaningful explanations of AI outputs to affected individuals.

Use of Personal Information

If personal information is being input into an AI tool, APP 6 requires entities to only use or disclose the information for the primary purpose for which it was collected, unless they have consent or can establish the secondary use would be reasonably expected by the individual, and is related (or directly related, for sensitive information) to the primary purpose.

A secondary use may be within an individual's reasonable expectations if it was expressly outlined in a notice at the time of collection and in the organisation's privacy policy. Common exceptions include where:

  • the individual has consented to a secondary use or disclosure, or
  • the individual would reasonably expect the entity to use or disclose their information for the secondary purpose, and that purpose is related to the primary purpose of collection (or in the case of sensitive information, directly related to the primary purpose).

Organisations should assess whether AI applications align with the originally stated purpose of collection. If personal information is to be used in AI tools for other, secondary purposes, organisations should consider whether such use is authorised by one of the exceptions under APP 6; ideally, any AI-related purpose will have been specified in a notice provided to the individual at the time of collection in accordance with APP 5.

Given the importance of ensuring that personal information is only used for the primary purpose it was collected for, and as otherwise provided by APP 6, it is essential that privacy policies and collection notices clearly specify how personal information will be used in connection with any AI tools.

On the horizon – automated decision-making and privacy policies

Recent amendments to the Privacy Act also introduce new requirements for businesses regarding automated decision-making. These amendments will take effect on 10 December 2026, two years after the Privacy and Other Legislation Amendment Act 2024 (the legislation implementing the changes) received royal assent.

Under new APP 1.7, an organisation will be required to disclose in its privacy policy if personal information will be used in the operation of a computer program to make a decision, or to do a thing that is substantially and directly related to making a decision, where the decision could reasonably be expected to significantly affect the rights or interests of an individual.

The information which must be included in the privacy policy is set out in new APP 1.8 and will include:

  • the kinds of personal information used in the operation of such computer programs
  • the kinds of such decisions made solely by the operation of such computer programs, and
  • the kinds of such decisions for which a thing, that is substantially and directly related to making the decision, is done by the operation of such computer programs.

Next steps

If your organisation adopts or plans to adopt AI in its operations, whether customer-facing or otherwise, it is imperative to take action to ensure transparency in your privacy practices, not only for compliance purposes but also to build the trust of stakeholders.

Organisations that are required to comply with the Privacy Act should proactively implement the following measures to stay ahead of regulatory expectations:

  • Review privacy policies and collection notices to ensure AI use is appropriately disclosed.
  • Assess AI decision-making processes to determine whether automated decisions significantly affect individuals and, if so, ensure appropriate disclosures are included in privacy policies.
  • Train staff on AI-related privacy obligations and the potential impact of AI outputs on individuals.
  • Establish procedures for explaining AI-related decisions and outputs to affected individuals.

While this article focuses on transparency obligations, organisations should also remain mindful of other privacy law requirements. Staying informed of the OAIC's latest guidance on best practices, as well as broader AI safety standards, such as the Voluntary AI Safety Standard and proposed mandatory guardrails, will help ensure compliance and improve AI maturity.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
