A Deloitte report prepared for the Australian Department of Employment and Workplace Relations, compiled with the assistance of AI, was found to contain reference errors and a fabricated quote from a Federal Court judgment. The episode underscores the need for human oversight and stronger safeguards for a technology that is still in its infancy.
While AI offers potential cost savings, these must be weighed against new and sometimes significant expenses. As we have written previously, observed in practice, and noted in our interview with the Sydney Morning Herald ("Deloitte's dodgy report a sign of times as AI use ramps up"): "savings made by adoption of AI have to be balanced against the added cost of double-checking the quality of the work." Deloitte will be required to provide a partial refund of the AUD440,000 fee for the report over the issue.
This incident is not isolated and is unlikely to be the last. It reflects broader challenges organisations face in adopting generative AI amid growing regulatory expectations. We summarise some key principles which may assist in building a mature and responsible AI use ecosystem, especially for professional service providers.
1. Proper and responsible use of AI
Generative AI can efficiently draft text, summarise research, and generate insights. However, effective deployment requires a clear methodology. The Australian federal government's Voluntary AI Safety Standard, which sets out 10 voluntary AI guardrails, emphasises transparency, accountability, and human oversight. Organisations should disclose when AI is used, clarify its role, and ensure that workflows allow for meaningful human control and oversight at all stages.
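To make "meaningful human control" concrete, the sketch below is a minimal illustration of a human-in-the-loop release gate. It is our own example, not drawn from the Voluntary AI Safety Standard; the names and review flow are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Draft:
    """An AI-generated draft awaiting human sign-off (illustrative)."""
    text: str
    ai_model: str                       # disclose which tool produced the draft
    approved: bool = False
    reviewer: str | None = None
    reviewed_at: datetime | None = None

def approve(draft: Draft, reviewer: str) -> Draft:
    """Record a named, accountable human reviewer."""
    draft.approved = True
    draft.reviewer = reviewer
    draft.reviewed_at = datetime.now(timezone.utc)
    return draft

def release(draft: Draft) -> str:
    """Refuse to publish anything that has not passed human review."""
    if not draft.approved:
        raise PermissionError("AI draft has not been reviewed by a human")
    # Transparency: disclose AI involvement alongside the reviewer's name.
    return f"{draft.text}\n\n[AI-assisted ({draft.ai_model}); reviewed by {draft.reviewer}]"
```

The design point is simply that publication fails closed: output cannot leave the workflow without a named reviewer attached, which also creates the disclosure record the guardrails contemplate.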
Further, where personal information is used in connection with AI, the Office of the Australian Information Commissioner has issued guidelines on privacy and the use of commercially available AI products, underscoring the importance of clearly informing individuals about how their personal information is used when AI tools are involved, to ensure alignment with the Australian Privacy Principles under the Privacy Act 1988 (Cth) (Privacy Act).
Amendments to the Privacy Act will further reinforce these obligations by introducing mandatory requirements for organisations to include information in their privacy policies about how AI technologies use personal information for automated decisions that could reasonably be expected to significantly affect individuals' rights or interests.
The Deloitte incident demonstrates the risks when these standards are not fully integrated into practice.
2. AI is still in its infancy - so budget for growing pains
Despite enormous potential, generative AI remains an emerging technology. It can produce high-quality outputs efficiently but is also prone to errors, inconsistencies, and unpredictable behaviour. The technology is evolving rapidly, but current models can still generate misleading or factually incorrect information, especially when applied to complex or nuanced tasks.
Organisations, including professional service providers, should treat AI-generated content as a first draft, subject to thorough human review and verification. It is important to recognise that the capabilities and limitations of AI are shifting as new models are released and as regulatory and industry standards develop.
This technical immaturity also carries a very real price tag that many AI transformation projects overlook:
- Enterprise licence fees for vertical or domain-specific AI models, which are significantly more expensive than subscriptions to public models but provide secure and auditable environments.
- Training staff to effectively prompt, interpret, verify, and securely store AI outputs.
- Engaging vendors or consultants to fine-tune private models on proprietary data, as public models are often insufficient for professional use.
- Updating contractual terms, ensuring privacy compliance, and implementing security controls, including informing clients and customers about how AI is used and how their data interacts with it.
- Ensuring that AI deployment is established using industry best-practice cyber-security standards and requirements (e.g. the ASD Essential Eight, ISO/IEC 27001 and 27002, ISO/IEC 42001, SOC 2 (Type II) and ISO/IEC 27018, among others).
Gartner's 2025 Hype Cycle, which charts the adoption of key emerging technologies, warns of a period of "disillusionment". As organisations move beyond the "peak of inflated expectations" phase, they will discover that automation is not always simple or cost-free; practical challenges and limitations mean it is far from a simple plug-and-play exercise.
As AI models continue to improve, a cautious approach is the remedy: start with low-risk pilot tasks, implement appropriate safeguards, monitor error rates, and regularly update internal policies and training to reflect the latest developments. Broader implementation and scaling up can occur once confidence, safeguards (and budget) are established. Continuous monitoring and adaptation will be essential to ensure that AI tools are used responsibly and effectively as the technology matures.
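As one way of putting "monitor error rates" into practice, the sketch below is a hypothetical example of ours (the 5% tolerance and 20-sample minimum are assumptions, not drawn from any standard): it tallies how often AI outputs fail human verification during a pilot and signals when scaling up should pause.

```python
class PilotMonitor:
    """Track how often AI outputs fail human verification during a pilot."""

    def __init__(self, max_error_rate: float = 0.05):
        self.max_error_rate = max_error_rate  # tolerance agreed with stakeholders
        self.checked = 0
        self.failed = 0

    def record(self, passed_review: bool) -> None:
        """Call once per AI output after human review."""
        self.checked += 1
        if not passed_review:
            self.failed += 1

    @property
    def error_rate(self) -> float:
        return self.failed / self.checked if self.checked else 0.0

    def should_pause_rollout(self) -> bool:
        """Signal that broader implementation should wait until quality improves."""
        return self.checked >= 20 and self.error_rate > self.max_error_rate
```

A team might call record() after each human review and check should_pause_rollout() periodically; the specific thresholds belong in the organisation's own risk appetite, not in code.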
3. The necessity of fact checking
Deloitte's report included citations to non-existent journal articles, a known risk of generative AI often referred to as "hallucinations". These errors can be identified through standard fact-checking processes. Previously, junior professionals performed this function; with AI now handling initial drafts, organisations must maintain or strengthen their verification procedures and develop an AI-appropriate quality assurance process. The cost of proper verification is minor compared to the potential reputational, legal and contractual consequences of publishing inaccurate information.
In the context of legal practitioners, the Law Society of NSW's Solicitor's Guide to Responsible Use of Artificial Intelligence and Statement on the Use of AI in Australian Legal Practice further highlight the need to maintain accuracy, uphold client confidentiality, and ensure that AI-generated content is subject to human review, verification, and professional judgement. Ultimately, AI tools should never be a total replacement for professional judgement and advice.
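Part of that verification can be automated. The sketch below is our own illustration, assuming the citations in question carry DOIs (both DOIs shown are placeholders): it asks the public doi.org resolver whether each cited identifier actually exists, so reviewers can concentrate on whether the source says what the draft claims.

```python
import requests

def doi_exists(doi: str, timeout: float = 10.0) -> bool:
    """Return True if the DOI resolves at the public doi.org handle service.

    A failure is a red flag for a hallucinated citation; success only proves
    the source exists, not that it supports the proposition cited.
    """
    resp = requests.head(f"https://doi.org/{doi}",
                         allow_redirects=True, timeout=timeout)
    return resp.status_code < 400

# Illustrative placeholders; in practice these would be extracted from the draft.
for doi in ["10.1000/example.real", "10.9999/example.fabricated"]:
    flag = "resolves" if doi_exists(doi) else "NOT FOUND - verify manually"
    print(f"{doi}: {flag}")
```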
4. Preserving client and citizen confidence
Trust is fundamental in professional services. Incidents such as the Robodebt scheme controversy and the Deloitte report demonstrate the importance of maintaining public confidence in automated systems. Australians expect transparency and human oversight in decision-making processes.
Consistent with the Voluntary AI Safety Standard and privacy guidance, organisations should implement clear communication strategies, such as publishing detailed statements that explain how and why AI is used in their services.
Regular updates about changes in AI systems, data handling practices, and the safeguards in place can further strengthen confidence. Additionally, providing accessible channels for feedback and redress, and demonstrating a willingness to act on concerns, shows a commitment to accountability. Organisations should also consider audits of their AI systems to reinforce their dedication to responsible AI use.
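As a minimal illustration of what an auditable record of AI use might look like, the sketch below is our own example (the file location and field names are assumptions): each AI interaction appends one structured entry so a later audit can reconstruct what was asked, which model answered, and who reviewed the output.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # illustrative location; append-only JSON Lines

def log_ai_use(purpose: str, model: str, prompt_summary: str,
               reviewer: str, approved: bool) -> None:
    """Append one audit record per AI interaction."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "purpose": purpose,                # why AI was used for this task
        "model": model,                    # which system produced the output
        "prompt_summary": prompt_summary,  # summarise rather than store raw text:
                                           # raw prompts may contain personal or
                                           # confidential client information
        "reviewer": reviewer,              # the accountable human
        "approved": approved,              # outcome of human review
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")
```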
5. Legal responsibility and professional liability
Existing Australian consumer, privacy, and negligence laws address most foreseeable risks associated with AI. To strengthen accountability and safety further, the Australian Government is also considering proposals to introduce mandatory guardrails for AI in high-risk settings.
For regulated professions such as law, finance, and healthcare, ethical standards and professional obligations make human oversight and independent judgement essential when integrating AI into practice. If an AI system provides incorrect advice, the responsibility lies with the professional service provider, not the software vendor, so organisations must take proactive steps to ensure compliance with all relevant legal and ethical requirements. Training staff on their responsibilities, and documenting all decisions and processes related to AI use, will help demonstrate compliance if regulatory or legal scrutiny arises.
In our experience, it is prudent to include specific AI-related clauses in contracts addressing issues such as model training, data sourcing, ownership of outputs, and audit rights to clarify responsibilities and manage risk.
Toward a mature AI culture
The Deloitte case provides an opportunity to foster a more mature and responsible approach to AI adoption. Taken together, these principles point the way to a mature AI culture: one that balances innovation with responsibility, safeguards trust, and ensures compliance. Organisations that invest in governance today will be best positioned to unlock AI's benefits sustainably and confidently in the coming years.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.