The NSW Government recently announced its intention to become the first Australian state to use AI to fast-track large scale housing development approvals. Use of AI to automate and speed up EPBC decision making has also been raised in the context of ongoing reform proposals.
As the public and private sector continue to grapple with effective, secure and appropriate use of generative AI, this article considers some key opportunities and important guardrails for such AI use in environmental impact assessment and development approvals.
Snapshot
- Use of AI is being considered in NSW with the aim to fast-track approvals of state-significant housing developments
- Use of AI in EPBC Act environmental impact assessment processes is being explored, but raises different complexities
- There are particular implications for government decision makers where decisions rely on AI, including transparency, explainability, accountability, algorithmic bias and risk of challenge to unlawfully made decisions
- We have also identified some key considerations for proponents using generative AI tools to support approval applications
- Use of AI itself has environmental impacts, having regard to energy and water demand
NSW proposal: planning assessment for large-scale housing
The NSW State government recently confirmed its ambition to use AI technologies to support speedier assessment of state-significant development applications, with a particular focus on delivery of housing.1 Following the Early Adopter Grants Program which assisted 16 Councils to trial AI solutions to improve local planning processes, the NSW government has now launched a tender for development of a new AI system that would:
- Conduct an intelligent review of documentation before lodgement;
- Accurately assess applications against key criteria;
- Reduce overall assessment timeframes (with SSD assessments reported to spend around 3 months of an overall 8.5 month assessment timeframe in Government hands); and
- Complete post-submission checks to accelerate finalisation.
EPBC proposal: environmental law reforms to include AI use in assessment and approvals
During the August 2025 Economic Reform Roundtable, the federal Ministers for the Environment and Water and Housing, Homelessness and Cities jointly announced an intention to accelerate approval of significant housing proposals under the EPBC Act (the federal environmental impact assessment regime). In addition to resourcing and process reform, a key objective was 'piloting AI to simplify and speed-up assessments and approvals'.2
Following the Roundtable, the federal Minister for the Environment announced3 that EPBC Act reforms will be introduced to Parliament in late 2025. It is indicated, but yet to be confirmed, that the AI pilot would relate to a broader suite of EPBC proposals rather than just the housing sector.
Artificial intelligence (AI) and generative artificial intelligence (Gen AI)
The Australian Government Policy for the responsible use of AI in government (September 2024) (AI Policy)4 defines an AI system using the OECD definition5 as follows:
An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.
The AI Policy's stated objective is 'to ensure that government plays a leadership role in embracing AI for the benefit of Australians while ensuring its safe, ethical and responsible use, in line with community expectations', and it emphasises the need for the policy to evolve and develop over time.
Different considerations arise depending on the type of AI technology used, its inputs, the context, and the way the AI output is assessed and implemented by humans. The Planning Institute Australia Guidance Note AI in Development Assessment (Feb 2025)6 recognises that rule-based AI (effectively automated decision-tree assessment) provides clear parameters and traceability, whereas more creative tasks and machine learning can involve complexity that makes it difficult to track the factors that have influenced an outcome in any particular instance.
We agree that AI has great potential to support efficient decision making, including through initial assessment against clearly defined criteria, and synthesis of complex or large data sets. However, in planning and environmental decisions where judgment is required, human decision making, often by officials who are accountable to the electorate, remains a foundational principle. AI systems must support, rather than replace, that process. In addition to the standard rationale for review and recalibration of AI system outputs, such as potential to entrench bias and produce incorrect or incomplete outputs (often called hallucinations), it is important to recognise in planning decisions that the social and environmental context for decision making is continually evolving. Any AI system will need to be continuously updated to reflect the appropriate balance of factors in sustainable development over time, including in very localised communities.
Government response to ethics and governance in the use of AI
There are a range of existing government policies addressing ethical and responsible use of AI in the public sector. These include:
- Australia's AI Ethics Framework of 2019 with 8 (voluntary) AI Ethics Principles;7
- the joint approach of the Federal, State and Territory governments set out in the National framework for the assurance of AI in government (June 2024)8;
- the Voluntary AI Safety Standard, with 10 guardrails around development and deployment of AI in high-risk settings. Our Technology team has previously reported on this and the proposal to introduce mandatory guardrails for AI in high-risk settings, which may include areas relevant to planning and environmental approvals processes in some circumstances;9 and
- the Federal AI Policy referred to above, and the response of each relevant federal government entity as mandated under the AI Policy. For example, the Department of Climate Change, Energy, the Environment and Water is currently responsible for implementation of the EPBC Act and published its AI Transparency Statement in February 2025.10
National governments around the world are grappling with whether and how to impose new AI regulations. Following extensive review, we are awaiting the Australian government's position on AI regulation. Indications are that Australia will adopt a lighter touch, pro-innovation and pro-productivity approach, as compared to, for example, more restrictive and comprehensive European regulations.
In NSW specifically:
- The NSW AI Assessment Framework (NSW AIAF)11 is mandatory for all NSW government agencies, reflecting the AI Ethics Policy;12
- The NSW AIAF has been embedded into the NSW Digital Assurance Framework; and
- The NSW Government has provided additional basic guidance on use of generative AI within government.13
Key considerations for use of AI in planning and environmental approval processes
A threshold question is whether explicit legislative authorisation would be necessary to allow use of AI or other computer-generated material as part of a planning or environmental decision-making process, and what conditions should be imposed on that use in light of the potential risks associated with AI. For a range of Commonwealth administrative decisions, there are already explicit provisions that allow decision makers to rely on material generated by computer programs, or for a computer program to itself make decisions.14 The safeguards built into those provisions may include for example 'reasonable steps' to ensure consistency with the objects of the relevant legislation and permissible grounds for the decision, and satisfaction on 'reasonable grounds' as to the accuracy of the information entered into the computer system, and that it has been entered into the system correctly. While these may also be relevant to use of AI, the safeguards currently in place do not, in our view, fully implement all the potential guardrails foreshadowed in the AI Policy and ethics frameworks in the way that would be necessary for use of AI in further decision making.
We can see significant potential for rule-based AI systems to streamline aspects of the process for planning decision making along the lines foreshadowed by NSW and where the use-case is limited to a specific and replicable type of development, for example:
- Assessment of state-significant housing development proposals against technical application criteria, which may identify information gaps early, facilitate faster confirmation of a valid application and 'start the clock' on statutory timeframes to approval; and
- Assessment against prescribed decision-making criteria such as setback or height limits which, although not the whole basis for an approval decision, may allow a human decision maker to focus on other criteria that require weighing of competing considerations or the exercise of discretion.
However, it would be important that:
- In all cases, a human must be responsible, and accountable, for the ultimate approval decision. Although media releases currently refer to use of AI in 'approvals', we interpret this as referring to the assessment process that informs approvals and, in our view, substantive law reform (and arguably social reform) would be required before a formal decision on a major development project could be made by an AI system rather than by a human;
- Data input into the system must be relevant to its specific context (eg Australia, or the relevant region), and as up to date and consistent as possible. In all cases, the outcome of each AI intervention in an assessment process must clearly articulate the data on which it has relied to reach that outcome, so that the process is transparent and can be explained. This is critical to ensure that decisions are made on data sets relevant to Australia and the planning context, to avoid hallucinations (or perceived hallucinations), and for the ultimate accountability of the human decision maker;
- Where AI has identified a problem that prevents an application from proceeding at any stage, it must be reviewed by a human to confirm that the issue has been correctly identified;
- Where AI has identified that the information is sufficient, a human decision maker who later wishes to seek further information in a way that would delay the application should be required to provide robust justification relevant to the decision – transparency is key. This would give applicants confidence that AI is streamlining, not adding to, the procedural hurdles;
- Mechanisms should be put in place to ensure that biases that emerge in the system over time are identified and removed;
- The statute should embed a governance process providing for independent review at regular intervals, and ongoing auditing, to identify issues in the use of AI and recommend improvements, so that processes remain lawful and up to date given rapidly evolving technology and practices regarding the use of AI in decision making;
- Statements of reasons must be able to clearly explain the use of AI and the information and reasoning relied on, as relevant to the decision made; and
- There should be an efficient and low-cost dispute mechanism if an applicant disagrees with an AI-influenced decision at any step in the process. This should be separate from, and additional to, the existing processes for appeal of the final approval decision, and should recognise that functionality, transparency and accountability for AI-supported processes are nascent and evolving, such that pragmatic and timely resolution of any glitches is critical to successful implementation of AI systems.
More sophisticated AI analysis may also assist at a systems level, for example analysis of past applications and processes to identify common issues or bottlenecks. It will be interesting to consider over time whether AI assists to identify reforms that would facilitate greater involvement of AI in the assessment process, facilitating further efficiency.
Some of these opportunities also exist for environmental approval applications. However, we have some concerns that the increased complexity of interactions between environmental considerations (including social impacts) makes them less amenable to reliance on AI assessment as we currently understand and use AI systems. For example, use of AI to undertake risk analysis or predict potential impacts of a project to inform a regulator's EPBC Act decision on a specific project may be problematic, and automated generation of a likely outcome based on precedent in past decision making would be highly problematic.
A key risk is potential challenge to approval decisions by applicants or objectors if there is a lack of transparency and trust in the underpinning AI models, training data sets and tools used, and the inputs and prompts used to produce the approval decision. This could arise where the AI-generated output is itself flawed, undermining the lawfulness of the ultimate decision, or where the decision maker has relied on the AI-generated output in a way that undermines the lawfulness of the decision. For example:
- if the data on which the underlying AI model is trained is outdated, flawed, incomplete or simply not adequate or appropriate for the relevant Australian context, this may undermine the validity of the AI-produced analysis;
- if any data or prompts input into the AI system are outdated, flawed or incomplete, undermining the validity of the AI-produced analysis;
- if materials relied upon in the decision making process have been generated or summarised by AI and not checked against the details of the specific application;
- if there are concerns that the underlying AI model has an inherent bias or training data that is incomplete for the relevant Australian planning domain, or if the AI system in use has developed a bias, then any AI-produced analysis may be flawed;
- if past decisions relying on AI assessment have been demonstrated to be flawed and the public does not have confidence that the outcome has been rectified; or
- if the decision maker has unduly relied on AI analysis rather than reaching their own conclusion based on the statutory decision making criteria.
In all cases, it will be necessary to resolve concerns around government reliance on AI systems developed by the private sector, including in relation to probity, privacy, and ownership of intellectual property rights in materials used to train the underlying AI model or system, and in materials reviewed by the AI system in producing the relevant output. This includes building public confidence that government departments relying on AI have the skills to understand, manage and audit their AI systems effectively, and to ensure that the nuances of Australian regulations are embedded in the AI models and training data.
It may be that, ultimately, law reform or specific policy development is required to guide use of AI in planning and environmental decision making, linked to grounds on which applicants or third parties may or may not challenge the subsequent approval decisions.
Use of AI by developers and applicants
Although the focus of this article is primarily on use of AI in government decision-making, it is important to recognise that developers and applicants also have increasing opportunity to use AI to support impact assessments, generation of application materials, and pre-application compliance checks.
When using AI to support the making of an application, we recommend that applicants:
- Consider privacy obligations, confidentiality and data sovereignty concerns, including whether information about the surrounding community, as well as your commercial information, is identifiable and might be re-used in ways you cannot control. In particular, we recommend avoiding entering any personal, sensitive or commercial-in-confidence information or data into any public AI platform.
- Check accuracy by a human – where AI is used to assist in preparing an approval application, we consider it important that the output is checked by a person, given obligations under State and Federal legislation. For example, it is an offence to provide false or misleading information in relation to an approval (eg section 10.6 of the Environmental Planning and Assessment Act 1979 (NSW) and section 489 of the Environment Protection and Biodiversity Conservation Act 1999 (Cth)).
- Ensure that consultant engagements require consultants to identify where they have relied on AI, including generative AI, to prepare their impact assessment or report.
- If using AI to generate content, ensure that this is based on the specific reports and circumstances of the project, as content generated on the basis of past projects/precedents or predicted content is likely to be problematic (see above regarding the making of false representations).
- Pay particular attention to summaries or applications of the law generated by AI, which can be mis-applied or incorrectly summarised, and make sure the AI system has access to all the definitions in the relevant legislation and case law so that defined words and phrases are correctly applied. As stated above, we suggest this is always reviewed by a person.
- For expert witness statements or appearances in a public hearing, different rules apply depending on the jurisdiction, so it will be necessary to check the rules applicable to that court or forum regarding proactive disclosure of the use of AI or Gen AI, or transparency if the question is asked.
- Where lawyers become involved, they will also be aware of their own obligations and the risks of using AI to generate or review caselaw summaries or submissions.
Environmental impacts of use of AI
Finally, it is relevant to note that the use of AI also has potentially significant environmental impacts as elaborated in chapter 6 of the Report of the Select Committee on Adopting Artificial Intelligence (November 2024)15, in particular the significant amount of energy use required for AI functions, as well as water use and broader land use impacts including mining critical minerals for hardware components and waste management.
Although these matters already fall within a range of regulatory systems, they are nevertheless worth considering carefully in the context of planning and environmental decision making, where it is important that the regulatory systems themselves support, and do not undermine, the overall objective of sustainable development.
Footnotes
4. Policy for the responsible use of AI in government Version 1.1 (September 2024), Australian Government Digital Transformation Agency, available at https://www.digital.gov.au/policy/ai/policy
12. https://www.digital.nsw.gov.au/policy/artificial-intelligence/artificial-intelligence-ethics-policy
13. https://www.digital.nsw.gov.au/policy/artificial-intelligence/generative-ai-basic-guidance
14. For example, section 541A of the Biosecurity Act 2015 (Cth) and legislative instrument Biosecurity (Electronic Decisions) Determination 2023 (13 December 2023)
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.