Recognising that Australia's existing laws and governance measures do not adequately address the risks presented by AI, on 5 September 2024 the Australian Government advanced two key actions arising from its interim response to last year's consultation on Safe and Responsible AI in Australia.
The first development is the release of a Voluntary AI Safety Standard (Voluntary Standard), consisting of 10 voluntary guardrails that give Australian organisations practical guidance on how to safely and reliably develop and deploy AI in Australia.
The second is the publication of a proposals paper introducing mandatory guardrails for AI in high-risk settings (High-Risk AI Paper). Public feedback is sought on how to define high-risk AI, which mandatory guardrails are appropriate and how best to implement them. The paper is open for public consultation until 4 October 2024.
Further information on both developments is available below.
Voluntary AI Safety Standard
The Voluntary Standard consists of 10 voluntary guardrails that apply to all organisations throughout the AI life cycle, including in relation to testing, transparency and accountability. The Voluntary Standard does not create new legal obligations. However, compliance with the Voluntary Standard may help organisations improve their AI maturity, allowing them to use AI systems more effectively within the context of existing laws and regulatory changes, and to meet stakeholder expectations.
The 10 guardrails can be summarised as follows:
- Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance. This includes allocating an owner for AI use, putting in place an AI strategy and developing training.
- Establish and implement a risk management process to identify and mitigate risks. This is to include stakeholder impact assessments and ongoing risk assessments.
- Protect AI systems and implement data governance measures to manage data quality and provenance. The aim here is to appropriately protect AI systems, taking into account data quality, data provenance and cyber vulnerabilities.
- Test AI models and systems to evaluate model performance and monitor the system once deployed. The acceptance criteria for testing need to take account of the organisation's risk and impact assessment for the AI.
- Enable human control or intervention in an AI system to achieve meaningful human oversight across the life cycle. Meaningful human oversight must be maintained to reduce the potential for unintended consequences and harms.
- Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content. It is important for organisations to disclose when AI is used, what its role is and when content is being generated using AI.
- Establish processes for people impacted by AI systems to challenge use or outcomes. This allows individuals to contest decisions, outcomes or interactions that involve AI.
- Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.
- Keep and maintain records to allow third parties to assess compliance with guardrails. It is suggested that organisations should maintain an AI inventory and consistent AI system documentation.
- Engage stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness. This is intended to be an ongoing process to be undertaken throughout the life of an AI system. A stakeholder is defined as an entity impacted by the decisions or behaviours of an AI system, such as an organisation, individual, community or other system.
The first 9 guardrails are described by the Australian Government as being aligned with the proposed mandatory guardrails that would apply in high-risk contexts, as set out in the High-Risk AI Paper.
The Voluntary Standard is intended to apply to those involved in AI development and deployment. An AI developer is an organisation or entity that designs, develops, tests and provides AI technologies such as AI models and components. A deployer is defined as an individual or organisation that supplies or uses an AI system to provide a product or service.
Deployment can be internal to an organisation, or external and impacting others, such as customers or other people who are not deployers of the system. The Australian Government has indicated that the current version of the Voluntary Standard is more focused on deployment but that the next version will expand on technical guidance for developers.
The Voluntary Standard adopts a human-centred approach to AI development and deployment, in line with Australia's AI Ethics Principles and the Bletchley Declaration, which Australia signed in November 2023, agreeing that AI should be "designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy and responsible".
The Voluntary Standard will continue to evolve alongside the Australian Government's broader work on safe and responsible AI.
High-Risk AI Paper
In its interim response, the Australian Government also announced its intention to introduce 10 mandatory guardrails for the deployment of AI systems in high-risk contexts. The proposed mandatory guardrails are the same as those in the Voluntary Standard, except for guardrail 10, which is to "Undertake conformity assessments to demonstrate and certify compliance with guardrails". The High-Risk AI Paper is an important step in aligning Australia with a global trend away from voluntary compliance.
The High Risk AI Paper seeks feedback on the following points:
- the proposed mandatory guardrails
- how to define high-risk AI and general purpose AI (GPAI) models, and
- how to implement the mandatory guardrails.
It is proposed that the mandatory guardrails would apply to the use of AI in high-risk settings and to GPAI models.
Defining high-risk AI
The Australian Government proposes a two-pronged approach to assessing whether an AI system may be viewed as high risk. The first category applies where the proposed uses of the AI system are known or foreseeable; the second relates to GPAI models, where all possible applications and risks cannot be foreseen.
Where the proposed uses of the AI system are known or foreseeable, the following principles will be used to guide the assessment of whether the use of the AI system is high-risk:
- (a) The risk of adverse impacts to an individual's rights recognised in Australian human rights law without justification, in addition to Australia's international human rights law obligations
- (b) The risk of adverse impacts to an individual's physical or mental health or safety
- (c) The risk of adverse legal effects, defamation or similarly significant effects on an individual
- (d) The risk of adverse impacts to groups of individuals or collective rights of cultural groups
- (e) The risk of adverse impacts to the broader Australian economy, society, environment and rule of law
- (f) The severity and extent of the adverse impacts outlined in principles (a) to (e) above.
Defining GPAI models
The Australian Government has also sought views on how best to define GPAI models and whether the mandatory guardrails should apply to all such models. Drawing on regulations proposed in Canada, the paper proposes the following definition for GPAI: an AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems.
Regulatory options to mandate guardrails
The final question considered is which legislative option would be best to adopt. The Australian Government has put forward 3 possible approaches:
1. A domain-specific approach – This would involve adapting existing regulatory frameworks to include the guardrails. Each relevant piece of legislation would be reviewed to address gaps and to embed the relevant guardrails in existing regulatory frameworks, addressing the risks of AI outlined in the paper. Non-legislative mechanisms are also proposed as an alternative to legislative reform, including using the Office of Parliamentary Counsel's drafting directions or instruments like the Attorney-General's Department's 'Guide to Framing Commonwealth Offences, Infringement Notices and Enforcement Powers'.
2. A framework approach – This would involve introducing new framework legislation to adapt existing regulatory frameworks. The framework legislation would define the mandatory guardrails and when they would apply. It would also provide a uniform set of definitions and measures that would then be implemented through amendments to existing regulatory frameworks.
3. A whole-of-economy approach – This would involve the introduction of a new cross-economy, AI-specific Act (for example, an Australian AI Act). The new Act would define the high-risk applications of AI and set out the new mandatory guardrails. It would establish a monitoring and enforcement regime overseen by an independent AI regulator. The new regime is intended to work alongside existing regulators, overseeing the guardrails where there are gaps in existing approaches while minimising duplication. Both the European Union and Canada are in the process of regulating AI through AI-specific legislation.
How we can assist
Momentum towards AI governance reform in Australia is building. Organisations looking to develop and deploy AI in Australia should familiarise themselves with the new Voluntary Standard. While the standard is not compulsory, it is likely to be widely adopted and expected by stakeholders, and compliance will help an organisation meet potential future regulatory requirements in Australia and align with emerging international practices.
Organisations using GPAI models or AI systems that are likely to be considered high-risk must also prepare for increased AI regulatory activity over the coming year. Participating in the current consultation period for the High-Risk AI Paper offers an opportunity to shape future policy and regulatory change.
For those operating internationally, navigating regulatory differences across jurisdictions will also remain a critical consideration.
If you require guidance on the Voluntary Standard or the proposed regulatory reform in high-risk contexts, or broader compliance advice regarding the use of AI in your organisation, please contact us for assistance.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.