Generative artificial intelligence (GenAI) has become increasingly popular in Australia, with many workplaces beginning to trial its use for day-to-day tasks. Despite its potential benefits, there are real, and potentially serious, privacy risks associated with the misuse of GenAI.
These risks are a key focus for the relevant regulators, with the Office of the Victorian Information Commissioner (OVIC) being the first privacy commissioner in Australia to make findings in relation to an employee's use of GenAI in the workplace.
OVIC's investigation into the use of GenAI in the workplace
OVIC commenced an investigation into the Victorian Government Department of Families, Fairness and Housing (DFFH) after DFFH notified OVIC (in December 2023) about a potential privacy incident involving one of its employees using ChatGPT to draft a 'Protection Application Report' (PA Report). The investigation report for this incident was published on 24 September 2024.
A PA Report is completed when Child Protection (which is part of DFFH) has concerns about the safety or welfare of a child. It is then submitted to the Children's Court to help the Court make a decision about the child's needs, including whether the risks to the child's safety or welfare are sufficiently serious that the child should be removed from their parents' care. PA Reports often contain highly sensitive information, including an overview of the concerns about a child's safety and a risk assessment.
OVIC's investigation concerned a PA Report about a child whose parents had been charged with sexual offences (although the offences were not in relation to the child). During Child Protection's management of this case, one of its employees was tasked with preparing a PA Report outlining the reasons for and against permanently removing the child from their parents' care. This PA Report was signed off by the employee's supervisor and submitted to the Children's Court.
A DFFH legal representative reviewed the PA Report about a week later when preparing for a court hearing. On review, the legal representative suspected that the report had been drafted using GenAI, due to the presence of unusual and inappropriate language and sentences, and the inclusion of highly inaccurate information - 'hallucinations' (plausible but fabricated or inaccurate output) are a known risk of certain GenAI tools.
During DFFH's initial internal investigations, the employee admitted that they had used ChatGPT to prepare this PA Report and other PA Reports. While the employee denied inputting personal information into ChatGPT, other employees indicated that they had witnessed the employee inputting names into ChatGPT on several occasions.
The potential misuse of personal information in connection with the use of ChatGPT resulted in OVIC treating this incident as a potential breach of the Information Privacy Principles (IPPs) under the Victorian Privacy and Data Protection Act 2014 (Vic) (PDPA).
OVIC's findings
OVIC's investigation focused on two potential breaches of the IPPs:
- IPP 3.1 - An obligation for an entity to take reasonable steps to ensure that the personal information it collects, uses or discloses is accurate, complete and up-to-date.
- IPP 4.1 - An obligation for an entity to take reasonable steps to protect the personal information it holds from misuse and loss, and from unauthorised access, modification or disclosure.
While not a focus of its investigation, OVIC noted that the employee's use of ChatGPT could have also amounted to a contravention of other IPPs.
OVIC made the following findings on these issues:
- Breach of IPP 3.1 (Accuracy of personal information). IPP 3.1 applies whenever a relevant State entity collects, uses or discloses personal information. Importantly, information can be personal information even if it is untrue (e.g. the comments in the PA Report which were the result of ChatGPT's hallucinations).
In line with OVIC's previous guidance, OVIC expressed the view that the employee's use of ChatGPT to generate the PA Report constituted a 'collection' of personal information and was therefore subject to the IPPs. The internal use of the PA Report and its disclosure to the Children's Court were also subject to the requirements of IPP 3.1.
To comply with IPP 3.1, an entity must take reasonable steps to ensure that the personal information it collects, uses and discloses is accurate, complete and up-to-date. While DFFH had policies in place to help it comply with this obligation (e.g. an acceptable use of technology policy), as well as staff privacy and security training, OVIC considered these measures insufficient.
In OVIC's view, additional steps were required given the nature and sensitivity of the information handled by Child Protection, and the potential for severe consequences (e.g. to vulnerable children) to arise from the handling of inaccurate information. OVIC indicated that for DFFH to have complied with this obligation, it should have implemented other measures, including:
- clear guidance to all relevant employees regarding the use of GenAI, how GenAI tools work, and the privacy risks associated with them. This guidance should have been sufficient for staff to understand when and how to appropriately use GenAI tools;
- staff training to equip staff with the skills and knowledge needed to operate GenAI tools in line with privacy requirements;
- specific departmental rules about when and how GenAI tools should or should not be used; and
- technical controls to restrict access to tools like ChatGPT.
- Breach of IPP 4.1 (Unauthorised disclosure of personal information). OVIC also expressed the view that the Child Protection employee's input of information into ChatGPT amounted to a failure to take reasonable steps to protect personal information from unauthorised disclosure.
As with the alleged contravention of IPP 3.1, OVIC found that DFFH's policies and training were insufficient to protect personal information from unauthorised disclosure. This was because the policies lacked clear and direct guidance on when it was appropriate to use GenAI tools and what information could be input into them, as well as any explanation of the associated privacy risks.
Further, OVIC expressed concern that DFFH had no technical means of verifying whether employees were inputting personal information into ChatGPT, making it impossible to identify any actual instances of unauthorised disclosure of personal information to OpenAI (the maker of ChatGPT). The lack of such measures resulted in the likely unauthorised disclosure to OpenAI of personal information about children, and their parents, who were the subject of child protection investigations.
- Other possible breaches of the IPPs. OVIC noted that the employee's use of ChatGPT could have also amounted to a contravention of other IPPs, but did not provide a definitive view on whether such a contravention was established. For example:
- IPPs 1.1 and 1.2: These IPPs prohibit the unnecessary and unfair collection of personal information. Arguably, the collection of content from ChatGPT may have been 'unnecessary' and 'unfair' given the inaccuracies in the generated content.
- IPP 2.1: This IPP prohibits the use and disclosure of personal information for unauthorised secondary purposes. Arguably, uploading personal information to ChatGPT could be considered an unauthorised use or disclosure (depending on the circumstances).
- IPP 9.1: This IPP imposes requirements regarding the transfer of personal information outside of Victoria. Given that OpenAI is based overseas, the disclosure of personal information when using ChatGPT may have also breached this obligation.
- OVIC's orders. OVIC considered the breaches to be 'serious, repeated or flagrant', which enabled it to exercise its power to issue a compliance notice under section 78 of the PDPA. Given the potential for ongoing privacy risks until effective controls regarding the use of GenAI tools were implemented, OVIC ordered DFFH to prevent Child Protection from using any GenAI tools until 5 November 2026, including by implementing IP blocking and/or Domain Name Server (DNS) blocking (a minimal sketch of the latter appears below).
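By way of illustration only - the compliance notice does not prescribe any particular implementation - domain-level blocking of this kind is commonly achieved by having the organisation's DNS resolver or web proxy refuse to resolve blocklisted domains. The minimal Python sketch below shows the core check such a control performs; the domain list and the `is_blocked` function name are assumptions for illustration and are not taken from OVIC's notice.

```python
# Hypothetical sketch of a domain blocklist check, as might sit in front of
# an organisation's outbound web proxy or DNS resolver. The domains listed
# here are illustrative examples only.

BLOCKED_DOMAINS = {
    "chatgpt.com",
    "openai.com",  # blocking a parent domain also catches e.g. api.openai.com
}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname is a blocked domain or a subdomain of one."""
    hostname = hostname.lower().rstrip(".")
    parts = hostname.split(".")
    # Check the hostname itself and every parent domain against the blocklist.
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS for i in range(len(parts)))

if __name__ == "__main__":
    for host in ("api.openai.com", "chatgpt.com", "example.gov.au"):
        print(host, "->", "BLOCKED" if is_blocked(host) else "allowed")
```

In practice a control like this would be enforced at the network layer (e.g. a DNS sinkhole or firewall rule set) rather than in application code, but the matching logic is the same.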
Key takeaways
While OVIC's investigation relates to breaches of the IPPs under the Victorian PDPA, there are equivalent Australian Privacy Principles (APPs) under the Federal Privacy Act 1988 (Cth) (Privacy Act) which apply to most businesses and Commonwealth agencies. The OVIC investigation may shape how these APPs are interpreted where APP entities use GenAI tools such as ChatGPT in the workplace.
Some of the key steps organisations should consider before using GenAI in the workplace include:
- Conduct a risk assessment of any proposed use cases. There are various scenarios in which the use of GenAI could have serious consequences for individuals. For example, if GenAI were used in a recruitment context, any bias or hallucination about a candidate may result in an employer improperly disregarding a job application. To help mitigate these risks, entities should consider conducting a human rights assessment for certain use cases - in particular, 'high risk' use cases. This assessment should identify the potential human rights impact of relying on the output of GenAI tools and how to mitigate those risks. Entities should also consider conducting a privacy impact assessment whenever personal information may be input into, or generated by, a GenAI tool.
- Implement a policy for when GenAI can be used. One of the key concerns with Child Protection's use of GenAI was the potential for severe consequences if inaccurate information was collected, used or disclosed (e.g. if a 'GenAI hallucination' resulted in a vulnerable child remaining in a dangerous situation). Organisations should consider implementing a policy governing when GenAI tools can and cannot be used, and when human oversight is required.
- Staff training, guidance and awareness. One of OVIC's key concerns in the DFFH investigation was that there were insufficient controls to ensure that information generated by ChatGPT was accurate, complete and up-to-date, and not subject to unauthorised disclosure. Once an organisation confirms when and how GenAI tools can be used, it is important to clearly explain the permitted (and prohibited) use cases to all staff. Staff should clearly understand how GenAI tools work, the privacy risks of using them, and what should be done to protect the privacy of individuals when using these tools. It would also be prudent to regularly audit how employees are using GenAI tools to ensure they are complying with the relevant policies (i.e. not using GenAI tools for prohibited use cases and not inputting personal information where not permitted).
- Directions on what information can be input into GenAI. One privacy mitigation step an entity can take when allowing its employees to use GenAI tools is to clearly explain what information can be input into the tool (e.g. directing employees to remove details which could cause the input to contain 'personal information'; see the redaction sketch after this list). However, this step alone will not mitigate all privacy risks - content generated by the GenAI tool may still be personal information in the entity's hands, given the other personal information the entity holds. As a result, the entity may still need to comply with the other requirements under the Privacy Act in respect of that generated content.
- Consider use of "closed" GenAI tools. The free version of ChatGPT sends any prompts to OpenAI (which may amount to an unauthorised disclosure of personal information, or an unauthorised cross-border disclosure of personal information). Closed GenAI tools, which store information solely within the organisation's systems and do not disclose data to any third party, may help prevent an unauthorised disclosure of personal information. However, additional steps may still be needed to ensure compliance with other obligations under the Privacy Act (e.g. to ensure the accuracy of any personal information collected from the GenAI tool).
- Contractual clauses and due diligence on GenAI providers. Before appointing a GenAI provider, entities should conduct due diligence on the provider and its tool, including seeking to understand how the GenAI tool was trained (including whether the provider had the necessary rights to use the relevant training data) and the steps taken to reduce the risk of biased or inaccurate output. Following this due diligence, it may be prudent to include clauses in the contract with the GenAI provider outlining each party's rights and responsibilities, and who will accept responsibility for certain outputs (e.g. inaccurate data). This will depend on the GenAI model being implemented.
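As a purely illustrative sketch of the kind of input screening referred to above, the hypothetical Python snippet below strips common identifier patterns (email addresses and Australian-style phone numbers) from a prompt before it is sent to any GenAI tool. The patterns and the `redact_prompt` name are assumptions for illustration; genuine de-identification of free text (e.g. names in a child protection report) requires far more than pattern matching, so a filter like this is at best a partial control alongside policy, training and oversight.

```python
import re

# Hypothetical illustration only: naive pattern-based redaction of a prompt
# before it is sent to an external GenAI tool. Pattern matching cannot
# reliably catch names or contextual identifiers (note "Jane" survives in
# the example below), so this is a supplement to, not a substitute for,
# clear staff directions and auditing.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b")  # AU-style numbers

def redact_prompt(text: str) -> str:
    """Replace email addresses and AU phone numbers with placeholders."""
    text = EMAIL_RE.sub("[REDACTED EMAIL]", text)
    text = PHONE_RE.sub("[REDACTED PHONE]", text)
    return text

if __name__ == "__main__":
    prompt = "Contact Jane at jane.citizen@example.com or 0412 345 678."
    print(redact_prompt(prompt))
    # -> Contact Jane at [REDACTED EMAIL] or [REDACTED PHONE].
```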
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.