ARTICLE
9 October 2025

AI Governance Best Practices For Legal Teams

Caroline McCaffery
Outside GC

OGC is a unique law firm that offers the relationship and experience of a traditional law firm with the cost savings and speed of an ALSP. By combining top-notch legal talent and significant business acumen, we deliver the value and efficiency of an in-house lawyer, without adding to our client’s headcount or sacrificing quality.

Over the past decade, AI has shifted from a nascent technology to a mainstream business driver, raising new questions about trust, accountability, and compliance. I've had a front row seat to this evolution, first at an AI company pioneering computer vision, and later as the founder of a natural language processing startup. These experiences not only deepened my technical understanding but also sharpened my perspective on the ethical and governance challenges AI presents to in-house legal teams.

Today, as a lawyer focused on AI governance, I often see companies innovating so quickly that governance becomes an afterthought. In-house legal teams are lean, leaving little time to devote to the design and implementation of a formal governance program. Sound familiar? After many conversations with clients and colleagues facing this reality, I've outlined some practical, manageable steps to consider for building an AI governance program, without slowing down innovation.

Step One: Choose an AI Risk Management Framework

UNESCO's “Emerging Regulatory Approaches” cataloged nine approaches that legislators around the world are now using to regulate AI:

  • Principles‑based: Adopt high‑level values (fairness, transparency, human oversight) and make them actionable through policies and guardrails.
  • Standards‑based: Lean on recognized standards (e.g., NIST, ISO, IEEE) to operationalize controls and audits.
  • Agile/experimentalist: Use sandboxes, pilots, and iterative guardrails to ship small, learn fast and adjust controls.
  • Facilitating/enabling: Provide incentives, guidance, and toolkits so teams can use AI safely without friction.
  • Adapt existing laws: Map AI to current regimes (privacy, consumer protection, product safety) and close gaps surgically.
  • Transparency mandates: Require disclosures (AI interaction notices, model documentation, training data summaries).
  • Risk‑based: Calibrate obligations by risk category; the higher the risk, the heavier the controls.
  • Rights‑based: Build around human and fundamental rights impact assessments and remedies.
  • Liability‑focused: Allocate responsibility (and insurance) for model failures, data misuse, and IP issues.

I frequently recommend a principles-based approach, but I have also seen companies take a standards-based approach. The following two standards are the most prevalent because they align with information security policies:

  • NIST AI RMF 1.0
    This is a voluntary framework built around the four core functions: govern, map, measure, and manage. Personally, I think this structure makes it an easy framework to remember (see the illustrative sketch after this list).
  • ISO/IEC 42001:2023
    This is the first certifiable AI management system standard. If you already hold an ISO 27001 certification, you may want to consider adding 42001, although it does come with the added cost of a third-party audit.
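
To make the NIST functions a bit more tangible, here is a minimal sketch of how a team might keep a register of AI use cases organized around govern, map, measure, and manage. It is an illustration only; the field names, the example use case, and the values are my own assumptions, not part of the framework:

  # Illustrative sketch only: a hypothetical register of AI use cases
  # organized around the NIST AI RMF's four functions (govern, map,
  # measure, manage). Field names and example values are assumptions.
  from dataclasses import dataclass, field

  @dataclass
  class AIUseCase:
      name: str                  # what the AI is used for
      owner: str                 # accountable business owner (govern)
      context: str               # intended use and affected parties (map)
      metrics: list = field(default_factory=list)   # how risk is measured (measure)
      controls: list = field(default_factory=list)  # mitigations in place (manage)

  register = [
      AIUseCase(
          name="contract clause summarization",
          owner="Legal Ops",
          context="internal drafting aid; no customer data",
          metrics=["accuracy spot-checks by counsel"],
          controls=["human review before use", "no confidential uploads"],
      ),
  ]

  for use_case in register:
      print(f"{use_case.name}: {len(use_case.controls)} controls in place")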

Step Two: Make AI Governance a Team Sport

AI adoption is happening through both buying AI tools and building them in-house. Any governance program must be strong enough to account for both paths.

But legal teams cannot build AI governance in a vacuum. Collaboration across legal, information security, privacy, and procurement is critical. I recommend forming an internal AI governance committee that includes cross-functional stakeholders. At a minimum, regular meetings among these core teams are table stakes for any effective AI governance program.

The roles and responsibilities include:

  • Legal: Define the use cases for AI and make sure all policies are written in alignment with applicable laws and the chosen framework.
  • Privacy: Create data maps, with a particular focus on personal data and model training, and conduct impact assessments (an illustrative sketch follows this list).
  • Security: Review current security controls and assess AI's impact, looking not just at the additional vulnerabilities it can create but also at where AI can be used to enhance security.
  • Procurement: Make sure you have all the documentation from legal, privacy, and security that vendors need to review and adhere to, and conduct due diligence on how AI is used. Coordinate AI-specific questionnaires and contractual commitments.
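
As an example of the kind of record the privacy function might keep, below is a hypothetical data-map entry for a single AI tool. The tool name, fields, and values are assumptions for illustration only:

  # Illustrative sketch only: a hypothetical data-map entry a privacy team
  # might keep for each AI tool, focused on personal data and model training.
  data_map_entry = {
      "tool": "ExampleAI Assistant",                   # hypothetical vendor tool
      "personal_data_categories": ["employee names", "email content"],
      "used_for_model_training": False,                # confirmed in the vendor contract
      "data_residency": "US",
      "impact_assessment_completed": False,
  }

  def needs_impact_assessment(entry: dict) -> bool:
      # Flag tools that touch personal data but have no impact assessment yet.
      return bool(entry["personal_data_categories"]) and not entry["impact_assessment_completed"]

  print(needs_impact_assessment(data_map_entry))  # True: assessment still outstanding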

By defining roles and responsibilities, you will encourage accountability in your AI governance program and drive awareness throughout the organization.

Step Three: Document the AI Governance Program

Documenting your program is one of the fastest ways to raise awareness and build literacy across the organization. The goal is straightforward: help employees understand your policies and the advantages and disadvantages of AI use.

This is especially important with regulations like the EU AI Act, which will impact even U.S.-based businesses with obligations that phase in over time. Preparing now means defining roles, clarifying responsibilities, and starting with clear documentation.

Next Steps

With the foundation of step one and step two in place, here are four core documents I typically recommend in-house legal teams consider as a starting point for their AI governance program:

  1. The AI Governance Program Overview

This “How We Run AI” document serves as the roadmap for your program. It outlines your chosen framework, purpose and principles (if you've adopted a principles-based approach from step one) and sets the stage for your AI governance committee. It should clearly define roles and responsibilities, AI-specific policies and supporting guidance for your team. While it may not be the most widely read document, it is an important organizational step that anchors your program.

  2. The AI Acceptable Use Policy

This is the “What Employees Can Do” guide and often the first policy most legal teams create when designing an AI governance program. Most companies already have an acceptable use policy, which can easily be expanded to include AI guidance. However, this policy should be more than a list of dos and don'ts; it should be aligned with your framework.

  3. AI Vendor Management

This “How We Buy AI” playbook classifies vendor-related AI risk categories and includes an AI Vendor Code of Conduct that aligns with your AI framework. It communicates expectations to vendors and reserves your right to reassess partnerships if principles or standards are violated. Both new and existing vendors should be included here, especially in light of the rise of shadow AI (where approved vendors launch new AI features without your prior approval). 
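
To make the risk-classification idea concrete, here is a minimal sketch of how a committee might tier vendor AI risk so that higher-risk tools trigger heavier diligence. The tier names and triggering questions are my own assumptions and would need to be tailored to your chosen framework:

  # Illustrative sketch only: hypothetical tiering of vendor AI risk.
  def classify_vendor_ai_risk(handles_personal_data: bool,
                              customer_facing: bool,
                              trains_on_our_data: bool) -> str:
      if trains_on_our_data or (handles_personal_data and customer_facing):
          return "high"    # e.g., AI addendum, security review, impact assessment
      if handles_personal_data or customer_facing:
          return "medium"  # e.g., AI questionnaire, contractual commitments
      return "low"         # e.g., standard procurement review

  print(classify_vendor_ai_risk(handles_personal_data=True,
                                customer_facing=False,
                                trains_on_our_data=False))  # prints "medium"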

  4. The AI Development Lifecycle Policy

This is the “How We Build AI” standard operating procedure. Although not yet widely adopted, it will be required under the EU AI Act for high-risk AI systems and is a beneficial exercise at a minimum. It is similar to a software development lifecycle policy and covers the four key stages of product building: design, build, validate, and deploy. As an example, at the design stage of an AI feature, principles like non-discrimination may guide how training data is selected. This SOP ensures values and compliance requirements are embedded directly into development.
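
As a simple illustration of how such an SOP can be made checkable, the sketch below defines hypothetical gate requirements for each of the four stages. The specific artifacts listed are assumptions that each company would adapt to its own framework and risk profile:

  # Illustrative sketch only: hypothetical stage gates for an AI development
  # lifecycle policy.
  STAGE_GATES = {
      "design":   ["use case approved", "training data reviewed for bias"],
      "build":    ["data sources documented", "access controls applied"],
      "validate": ["accuracy and fairness testing complete", "human oversight defined"],
      "deploy":   ["AI interaction disclosure in place", "monitoring and rollback plan"],
  }

  def gate_passed(stage: str, completed: set) -> bool:
      # A stage gate passes only when every required artifact is present.
      return all(item in completed for item in STAGE_GATES[stage])

  print(gate_passed("design", {"use case approved"}))  # False: bias review still missing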

Final Note

It's important to remember that more documentation doesn't always mean better governance. Legal teams need to decide how much additional compliance their organizations can realistically handle — and it's fine to take a phased, step-by-step approach.

However, there is one emerging trend I discourage unless you have first taken step one laid out above: the use of AI addendums in the procurement process. While these will play a role, they shouldn't be the first step in your governance program, because bespoke negotiations may deviate from your company's framework and create confusion.

The rapid pace of AI technology is exciting, and your company's participation is important at every level. As in-house attorneys, we are trained to identify the risks, but as business partners we must also be aware of the business context and strategic advantages. A measured approach can be taken without banning AI or imposing overly tight restrictions on data usage. The key is a thoughtful, principled approach that allows your business to capture the benefits of AI while safeguarding against the risks that matter most.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
