This week, the Australian Government announced (link) plans to establish a national AI Safety Institute (AISI) in early 2026, marking a significant step toward ensuring the safe development and deployment of advanced artificial intelligence (AI) systems in Australia.
AISI will work alongside existing regulators and is expected to provide transparency and technical rigour in its assessments of advanced AI systems, so that Australians can feel safe when using AI.
This initiative mirrors similar efforts overseas, particularly those of signatories to the 2024 Seoul AI Summit declaration, who have committed to strengthening international cooperation on AI governance through engagement with international initiatives. AISI will join other similar bodies in the International Network of AI Safety Institutes.
The United Kingdom's equivalent institute, the AI Security Institute (UK AISI), was established in 2023 and operates as a government-backed research body. Its mission is to minimise risks from rapid technological advances, and it has three core functions: technical evaluations, foundational AI research, and facilitating information exchange.
AISI is expected to adopt a similar emphasis on technical evaluation and information sharing, likely focusing on independent testing and evaluation of advanced AI systems, developing safety frameworks, and international cooperation on AI risk management.
AISI will form part of a wider national AI strategy, complementing the work of the National AI Centre by providing practical and trusted guidance based on in-depth research. Further detail on that strategy is expected to be published in the Australian Government's National AI Capability Plan in the coming weeks.
What this means for businesses:
- Australian regulators, supported by AISI, will have increased in-house expertise and capability to actively test and validate the safety of individual AI systems, meaning organisations' internal safe-AI practices will need to be technically robust;
- Organisations can expect more detailed technical guidance on the risks of advanced AI systems and the steps to mitigate them, helping them to deploy those systems more safely;
- Organisations should monitor AISI's output, which should provide actionable guidance on best-practice approaches to establishing safe AI systems, such as enhanced testing methodologies, error detection, record keeping and cyber-security practices;
- Best practice in AI in Australia will increasingly mirror that in other AI-advanced jurisdictions.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.