As 2025 draws to a close, one of the most consequential, but least publicly discussed, shifts in federal environmental governance has been the quiet expansion of artificial intelligence (AI) behind the scenes across multiple federal agencies. AI tools are not new in federal science programs, but 2025 marked a turning point: agencies began integrating machine-learning models into routine workflows in exposure modeling, surveillance, enforcement targeting, and environmental monitoring. The White House's January 23, 2025, Executive Order (EO), "Removing Barriers to American Leadership in Artificial Intelligence," was among the very first 2025 EOs signed by President Trump. The year was bookended by another. On Thursday, December 11, 2025, President Trump issued the EO "Ensuring a National Policy Framework for Artificial Intelligence," which seeks to curb state-level regulation of AI by asserting federal preemption, directing agencies to challenge or deter state AI laws, and pressing Congress to establish a uniform national framework. The move underscores this Administration's continued commitment to advancing AI.
This decentralized expansion has been pragmatic rather than ideological, reflecting operational pressures, staffing constraints, and the growing volume of data that agencies must accumulate, evaluate, and summarize. But the pace of adoption has now outstripped the development of clear policy guardrails to ensure that AI-infused products are unbiased and accurate. As we move into 2026, the gap between AI use and AI oversight is becoming increasingly visible to regulated entities.
The U.S. Environmental Protection Agency (EPA) continued to expand its use of computational tools within the Office of Research and Development (ORD) and the Office of Chemical Safety and Pollution Prevention (OCSPP), and outlined its own AI compliance plan in response to OMB Memorandum M-25-21. Machine-learning-enabled tools, such as the Open (Quantitative) Structure-activity/property Relationship App (OPERA) and updated read-across algorithms, played a more prominent role in screening-level assessments and data gap identification. EPA also continued modernizing its ToxCast/Tox21 computational toxicology systems, which increasingly incorporate statistical and machine-learning components to support hazard prediction. EPA's pesticide program has likewise indicated that AI tools should help it meet decision-timeline targets for pesticide registration submissions.
In the enforcement context, EPA regional offices experimented with data-driven approaches to prioritize inspections and identify anomalies in emissions or waste-handling reports. These systems are not determinative; EPA emphasizes that inspectors and scientists make final enforcement decisions. But the early use of machine-learning-enhanced triage tools suggests that data-driven targeting will continue to expand.
From a policy perspective, these developments highlight two recurring questions for stakeholders.
- When AI or Machine Learning (ML) outputs influence an assessment or inspection priority, how will EPA document those effects in administrative records?
- What opportunities will regulated entities have to understand, reproduce, or rebut the underlying models?
EPA has not yet issued comprehensive guidance on these issues. This widening void invites uncertainty about how these tools intersect with statutory transparency requirements under the Toxic Substances Control Act (TSCA), the Clean Air Act, and other programs, and about how their application can be legally supported when disputes over their legitimacy arise, as they inevitably will.
The U.S. Food and Drug Administration's (FDA) Office of Digital Transformation continued advancing AI-enabled tools for pattern detection in large datasets, including those used to evaluate contaminants in food contact materials and to identify emerging trends in food safety risks. FDA has been piloting an AI tool to optimize performance and accelerate drug review protocols, shortening the time needed to summarize adverse events, perform label comparisons, and generate code for database development. FDA is currently using this AI program, Elsa, to streamline clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets. These tools allow FDA to apply risk-based principles more consistently while managing high-volume workloads. Despite these advances, like EPA, FDA has yet to define how AI-generated insights factor into regulatory decision-making. Stakeholders continue to seek clarity about the role these models play in guiding inspections, enforcement actions, or premarket evaluations.
Beyond EPA and FDA, other agencies have expanded AI use in ways that influence health and safety oversight:
- The U.S. Department of Agriculture (USDA) relies on AI and satellite-based ML models to assess crop health, detect land-use changes, and support wildfire risk forecasting — functions that overlap with climate, conservation, and compliance programs;
- The U.S. Department of Energy (DOE) uses AI in grid optimization, materials research, and energy efficiency modeling. AI-enabled prediction tools are informing grant allocations and infrastructure planning decisions; and
- The U.S. Department of the Interior (DOI) and U.S. Department of Transportation (DOT) have tested AI tools for habitat mapping, transportation risk modeling, and pipeline integrity assessments.
Across agencies, most deployments use federally developed or open-source ML frameworks rather than proprietary commercial platforms, reflecting both procurement constraints and the need for transparency. A consistent theme emerged in 2025: AI adoption is accelerating faster than the policy infrastructure needed to support it. Three gaps stand out, each of which has legal implications.
- Transparency and Reproducibility: Regulators increasingly rely on models that, however technically sophisticated they may appear, are not accompanied by clear documentation about data inputs, assumptions, or uncertainty factors. For regulated entities, this raises questions about how AI-generated insights can be reproduced, authenticated, evaluated, or contested.
- Administrative Record Integration: If AI tools influence screening decisions, prioritization, or assessment outcomes, agencies will need consistent protocols for describing those influences in administrative records. Without this, legal challenges could arise in both administrative and judicial review contexts.
- Cross-Agency Consistency: Different agencies are adopting AI at different speeds and under different standards. Without coordination, regulated industries may face a fragmented compliance landscape, with varying expectations about data transparency, model validation, and evidentiary weight.
Looking Ahead to 2026
The story of 2025 is not that AI entered the environmental governance group chat; it has been a growing and influential presence in research contexts for years. The big news is that AI has now migrated into operational decision-making without a corresponding evolution in the development of coherent and transparent governance policies. As agencies expand AI deployments in 2026 and beyond, regulated entities will increasingly seek clarity on how these tools affect compliance obligations, enforcement priorities, and risk evaluations.
Clear guidance on documentation, transparency, model validation, and reproducibility would help ensure confidence that AI can improve regulatory efficiency without undermining predictability or procedural fairness. Until then, AI deployment within the health and safety and risk assessment spheres will remain a powerful but unevenly governed presence in the regulatory landscape.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.