Key Takeaways
- Federal and state policymakers are beginning to integrate AI technology into National Environmental Policy Act (NEPA) reviews and permitting processes. A recent Department of the Interior (DOI) order envisions expanding AI use in agency decision-making, subject to safeguards like maintaining a "human in the loop."
- This initiative promises to expedite NEPA and permitting processes. But reliance on AI carries risks: materials underlying agency decision-making may contain errors, and project opponents may seek to exploit concerns about agencies' AI use as part of a legal challenge.
- Businesses involved in AI-facilitated agency decisions should seek to ensure that agency AI use is well documented, properly supervised, and consistent with recognized best practices.
The Secretary of the Interior, Doug Burgum, recently issued a Secretarial Order on Artificial Intelligence that addresses the use of AI across a number of domains, including "energy and resource development" and "permitting efficiency." The Order asserts that DOI is "already seeing results," including "streamlined environmental reviews." It directs DOI staff to ensure that DOI retains "oversight and accountability" and requires a "human-in-the-loop," a safeguard often applied in AI systems.
The Administration's efforts to expedite agency reviews and expand resource development and infrastructure projects have placed increasing strain on agency resources, especially as agencies cut staff. DOI's AI initiative aligns with other Administration efforts to bridge this gap by streamlining agency processes and making them more efficient. It is also part of a broader Administration effort to support and enhance American AI dominance, as set forth in Executive Order 14179 and accompanying OMB guidance.
In April, a Presidential Memorandum, "Updating Permitting Technology for the 21st Century," directed agencies to "make maximum use of technology in environmental review and permitting processes." The Council on Environmental Quality (CEQ) then released a "Permitting Technology Action Plan." That document built upon CEQ's earlier "E-NEPA Report to Congress," which recommended technological options for streamlining NEPA processes. Several agencies, including the Department of Energy, the Federal Permitting Council, and the Air Force, have invested in related technologies. States are also experimenting with AI tools, including a Minnesota project to streamline environmental permitting and a California project focused on permits for reconstruction after the Los Angeles fires.
AI models promise to simplify document drafting, data analysis, and review of public comments, potentially shortening federal review timelines. But their adoption raises concerns about error rates, bias, and explainability. For example, commenters suggested1 that the Trump administration's high-profile "Make America Healthy Again" report contained errors likely attributable to the use of AI tools. Even absent such errors, project opponents may seek to exploit concerns about the use of AI tools in litigation. It remains to be seen whether, and how, courts will defer to agency decision-making that relies on AI. Early adopters should ensure that contractors and agencies put well-documented safeguards in place so that the administrative record in any litigation provides sufficient data and explanation of (human) decision-making to survive judicial review.
Next Steps
As public and private actors alike integrate AI into the NEPA process, businesses and project proponents should engage with agencies and support the use of recognized best practices2 to mitigate legal and technical risks. These best practices include:
- Establishing and adhering to guidelines that ensure transparency and internal accountability.
- Ensuring that any use of AI tools is well documented and explainable to third parties.
- Properly supervising AI use to prevent errors such as hallucinations, bias, and data gaps.
- Safeguarding confidential information when using AI tools to avoid inadvertent disclosure of sensitive data.
Footnotes
1 White House Acknowledges Problems in RFK Jr.'s "Make America Healthy Again" Report. (2025, May 29). NPR. Retrieved from https://www.npr.org/2025/05/29/nx-s1-5417346/white-house-acknowledges-problems-in-rfk-jr-s-make-america-healthy-again-report
2 For example, the ABA has released a formal ethics opinion on generative AI and professional responsibility. See https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.