The Road Ahead: Anticipated AI Export Controls in the AI Action Plan
Jason Wilcox
As previously reported, on July 23, 2025, the White House released "Winning the AI Race: America's AI Action Plan," a strategy designed to secure U.S. leadership in artificial intelligence. The plan, which includes more than 90 federal policy actions, aims to advance economic competitiveness and national security through AI dominance.
A key part of the plan, under its "Leading in International Diplomacy and Security" pillar, focuses on controlling AI and semiconductor technologies to prevent foreign adversaries from accessing advanced AI compute resources. This involves:
- Strengthening AI Compute Export Control Enforcement: Enhancing location verification to stop chips from reaching "countries of concern" and expanding monitoring, especially in high-risk areas.
- Plugging Loopholes in Existing Semiconductor Manufacturing Export Controls: Developing new export controls for component sub-systems not currently covered, likely including parts that could be combined to form controlled advanced systems.
- Aligning Protection Measures Globally: Implementing stringent export controls and encouraging allies to adopt similar measures. If allies do not align, the U.S. may use tools such as the Foreign Direct Product Rule and secondary tariffs. The plan also advocates for a global AI alliance and for expanding plurilateral controls to address AI proliferation more quickly than traditional treaty bodies can.
- Promoting the Export of the American AI Technology Stack to Allies and Partners: Exporting the full range of American AI technology (hardware, models, software, applications, and standards) to allied countries to meet global demand and prevent reliance on rival suppliers.

An Executive Order signed in conjunction with the Plan further solidifies this proactive approach by establishing the American AI Exports Program within the Department of Commerce (DOC). The program will oversee a public call for proposals from industry-led consortia for full-stack AI technology packages, including hardware, data pipelines, AI models, security measures, and applications for specific use cases. This comprehensive effort aims to bolster U.S. economic growth, national security, and global competitiveness by strengthening alliances, promoting U.S. standards, and maintaining technological leadership in AI.
Read more in White House Unveils "America's AI Action Plan."
California Enacts Generative AI Framework Within Courts
Ben Bafumi*
California just became the largest state to formally establish a comprehensive framework for the use of generative AI in judicial operations. It joins several states, including Illinois and Delaware, that have adopted policies on the use of AI, and its move may give other states considering similar policies, such as New York, a friendly push to implement their own. The framework, set forth by the California Judicial Council, enacts Rule 10.430 and Standard 10.80, which cover court-related work that is generated, at least in part, by AI. Both take effect in September 2025. By mid-December 2025, under Rule 10.430, courts that do not ban generative AI outright must adopt a policy governing the use of such technology for tasks performed by court staff, as well as by judicial officers acting outside their adjudicative role. The rule provides a model policy for courts to adopt verbatim (or to draft a "substantially similar" version). For judicial officers acting within their adjudicative role, Standard 10.80 sets similar guidelines. Notably, a judicial officer's "adjudicative role" is undefined and, for now, left to self-governance and interpretation.
Generally, these policies focus on confidentiality and privacy, bias and non-discrimination, accuracy and correction, transparency and disclosure, compliance and oversight, and accountability. The guidelines are more concerned with mitigating risks than with prohibiting or policing particular uses, and they are motivated in part by fabricated case citations ("hallucinations") and other inaccurate AI-generated outputs, which have notoriously undermined public trust in the judicial system. The new California framework suggests a broader trend toward more comprehensive (and perhaps stricter) AI regulation and governance throughout the country.
[*Ben Bafumi is a law clerk at Baker Botts]
For more thought leadership on state AI regulations, see the articles below:
- Senate Strikes AI Moratorium: What It Means for State Regulation
- Texas Enacts Responsible AI Governance Act: What Companies Need to Know
Governance of Agentic Artificial Intelligence
Leslie Roussev
Agentic artificial intelligence is rapidly transforming the landscape of decision-making and automation, but not without its challenges. AI agents are systems or programs that are capable of autonomously performing tasks on behalf of a user or another system with minimal human oversight. As these systems grow more prevalent and sophisticated, governance must keep pace to effectively manage and mitigate the corresponding risks.
In such a nascent field, three main guardrails are key for agentic AI governance. First, organizations should implement the same foundational standards essential to the deployment of all AI systems, such as privacy, transparency, explainability, and safety. Second, organizations should calibrate the degree of governance over AI agents to the level of risk they present: lower-risk agents warrant a lighter touch, while systems that operate in sensitive contexts or are subject to industry-specific regulatory requirements may call for more robust governance mechanisms and oversight. Finally, societal guardrails help mitigate the risk that agentic AI will harm not just organizations, but entire industries, communities, and the environment. Such guardrails include ethical design processes, adequate training, incident response systems, and public policy engagement.
For further reading, see AI Governance in the Agent Era.
New Code of Practice for General-Purpose AI Offers Compliance Roadmap for EU AI Act
Joe Cahill
The European Commission has received the final version of a voluntary Code of Practice for General-Purpose AI (GPAI) models. The Code of Practice provides detailed, actionable principles for providers of general-purpose AI models, establishing best practices for safety, security, transparency, and copyright protection. Drafted with input from a wide range of stakeholders, the Code is designed to operationalize the core tenets of the EU AI Act and to guide providers on responsible AI development and deployment in the interim before the Act's obligations for general-purpose AI take effect.
The Code of Practice is structured around three core pillars. The "Safety and Security" chapter requires providers to implement a robust risk management framework for the entire model lifecycle, with a particular focus on models that could pose systemic risks. The "Transparency" chapter mandates detailed documentation on model training, capabilities, and limitations, which must be made available to the EU's AI Office, national competent authorities, and downstream providers. Finally, the "Copyright" chapter obligates providers to implement policies ensuring that training data is lawfully sourced and to use technical safeguards to prevent the generation of infringing content.
For businesses developing or deploying AI solutions within the European Union, this Code of Practice is a critical tool. Adherence is currently voluntary because the corresponding obligations of the EU AI Act have not yet taken effect. However, with the Act's rules for general-purpose AI going into effect on August 2, 2025, this code provides the clearest blueprint for what will constitute compliance. Adopting these standards now is a way to prepare for and signal future conformity with the law. Businesses should view this framework as an opportunity to proactively align their internal governance, risk management, and documentation processes with the principles that will soon define the legally binding landscape for AI in the EU.
Check out our EU AI Act Compliance Quick Guide for a high-level overview.
USPTO Introduces New Internal AI Search Tool
Coleman Strine
The USPTO recently announced that it has developed a new AI tool to assist examiners in conducting design patent searches. The tool, called "DesignVision," complements an array of existing USPTO AI tools, such as "Similarity Search," which allows examiners to conduct utility patent searches based on specifications and other patent information. Similarity Search was announced in 2022, and the USPTO is expected to require all examiners to use the tool by September. Similarly, the USPTO announced that DesignVision will become mandatory for examiners starting in October.
Prospective patent applicants can expect examiners to raise more, and higher-quality, prior art references (including foreign references) once these tools are fully integrated into the examination process. Accordingly, AI-assisted examination may result in narrower but stronger patent claims that are more robust to post-grant challenges. On this point, a spokesperson for the USPTO stated, "modernizing examination tools will lead to stronger and more reliable patent rights, which America will need to maintain its dominance in critical emerging technologies, including AI and quantum computing."
For more on the complex challenges AI is presenting in patent law, read Patent Obviousness in the AI Era.
Baker Botts Named #1 Law Firm for Business and Innovation by Bloomberg Law
In its inaugural Leading Law Firms survey, Bloomberg Law recognized Baker Botts for our innovative approach, including the use of generative AI tools and comprehensive data warehousing that are revolutionizing firmwide operations. Read more about this prestigious recognition here.
Quick Links
For additional insights on AI, check out Baker Botts' thought leadership in this area:
- White House Unveils "America's AI Action Plan": An overview of the 28-page Plan's structure, its possible implications, and key takeaways.
- U.S. AI Chip Policy: A Post-Rescission Forecast: A look at the evolving U.S. export control policies and the corresponding responses from international partners, along with implications for companies.
- Texas Senate Bill 6: Understanding the Impacts to Large Loads and Co-located Generation: What "SB 6" means for consumers with high electricity demand (currently ≥ 75 MW)—referred to as "Large Loads"—in the state.
- Texas Enacts Responsible AI Governance Act: What Companies Need to Know: A thorough review of what companies need to know under the newly enacted Texas Responsible Artificial Intelligence Governance Act (TRAIGA).
- Senate Strikes AI Moratorium: What It Means for State Regulation: The moratorium's removal will allow states to continue their push for AI regulation, at least until (and unless) the federal government adopts preemptive legislation.
- Patent Obviousness in the AI Era: A look at how susceptible the obviousness inquiry under 35 U.S.C. § 103 may be to disruption by advances in AI technology.
- "Our Take" Quick Reads:
—President Releases "America's AI Action Plan"
—CNIL Published Recommendations on Application of GDPR to Artificial Intelligence
- AI Counsel Code: Stay up to date on all the artificial intelligence (AI) legal issues arising in the modern business frontier with host Maggie Welsh, partner at Baker Botts and Co-chair of the AI Practice Group. Our library of podcasts can be found here, and stay tuned for new episodes coming soon.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.