ARTICLE
13 August 2025

AI Governance Series, Part 4(A): Beyond Governance Theater — Building AI Controls That Function Under Pressure

Jones Walker

This is Part A of the fourth and final installment in our AI Governance series.

Mere weeks after Grok referred to itself as "MechaHitler," the Pentagon licensed it. Presumably, DOD's confidence in xAI's system didn't stem from inherent AI safety; it came from deployment strategy and robust governance frameworks capable of handling systems that are necessarily imperfect, while preserving institutional decision-making authority.

The Pentagon's decision illustrates the reality facing every organization deploying AI: perfect AI systems do not exist, but effective governance frameworks can make imperfect AI systems workable while maintaining organizational autonomy. The question is not whether to deploy AI, but how to deploy it effectively and responsibly while maintaining competitive agility and organizational independence.

Over the past three weeks, we have examined AI governance failures, mapped risk landscapes, and built practical governance frameworks. This week, we address implementation challenges in a two-part finale to our AI Governance Series: deploying governance in environments where business pressure, technical complexity, and regulatory uncertainty can undermine even well-designed frameworks.

The Build vs. Buy vs. Partner Decision

Most AI governance discussions focus on technical controls while ignoring the fundamental deployment strategy question and its implications for organizational autonomy. The choice between building, buying, or partnering for AI capabilities has profound governance implications that often remain invisible until problems emerge — particularly regarding how much decision-making authority organizations retain versus transfer to external systems.

Building In-House: Maximum Control, Maximum Responsibility

In-house development offers the greatest governance control — you design the safeguards, implement the monitoring, and own the entire decision-making chain. When systems behave unexpectedly, you have full access to the training data, model architecture, and deployment parameters needed for effective response. Perhaps most importantly, you maintain direct control over how AI influences your organization's decision-making processes.

Control, however, means responsibility. When you build AI systems, you own every failure mode, bias, and unexpected behavior. The deeper risk is what might be called "builder's bias" — the tendency to trust systems you create more than you should. For instance, a bank that developed its own loan approval AI might overlook discriminatory patterns because "we built it right."

Key Success Factors:

  • Red team exercises with independent adversarial testing.
  • Diverse ethical review boards with decision-making authority.
  • Continuous behavioral monitoring with human-in-the-loop oversight requirements (sketched in code after this list).
  • Regular governance framework updates based on system evolution.
  • Preservation of human expertise alongside AI development.
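
As a concrete illustration of the monitoring-plus-oversight control, the Python sketch below routes AI decisions to a human reviewer once observed behavioral drift crosses a threshold. Every name and number here (BehaviorMonitor, the 5% drift threshold, the 500-output window) is an illustrative assumption, not a prescribed standard; the structural point is that escalation is triggered automatically rather than left to discretion.

```python
# Minimal sketch of continuous behavioral monitoring with human-in-the-loop
# escalation. Names and thresholds are illustrative assumptions, not a
# reference to any specific product or regulatory standard.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class BehaviorMonitor:
    drift_threshold: float = 0.05   # max tolerated fraction of deviating outputs
    window_size: int = 500          # number of recent outputs to consider
    window: List[bool] = field(default_factory=list)

    def record(self, deviated: bool) -> None:
        # Track whether each output deviated from the approved baseline
        # (the deviation signal itself would come from output classifiers,
        # fairness metrics, or anomaly detectors, all elided here).
        self.window.append(deviated)
        if len(self.window) > self.window_size:
            self.window.pop(0)

    def requires_human_review(self) -> bool:
        if not self.window:
            return False
        return sum(self.window) / len(self.window) > self.drift_threshold

def route_decision(monitor: BehaviorMonitor, ai_decision: str,
                   human_review: Callable[[str], str]) -> str:
    # Once drift exceeds the threshold, a human makes the call; otherwise
    # the AI decision passes through (logging is omitted for brevity).
    if monitor.requires_human_review():
        return human_review(ai_decision)
    return ai_decision
```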

Buying Solutions: Vendor Expertise with Dependency Risks

Commercial AI solutions offer proven technology backed by vendors with deep expertise. Major platforms typically invest more in safety research than individual organizations could afford while spreading development costs across multiple customers.

The primary governance challenge is dependency on vendor decisions. When vendors update models or change policies, your governance framework must adapt, but more importantly, your organization's decision-making capacity becomes dependent on external systems you do not control. The Grok incident illustrates this perfectly — organizations using xAI technology had no advance warning of the prompt changes that caused the problematic behavior.

This creates a subtle but serious risk: organizations may gradually lose the institutional knowledge and human expertise necessary to operate independently of vendor AI systems, making them increasingly vulnerable to vendor decisions or system failures.

Key Success Factors:

  • Comprehensive vendor governance audits examining safety practices.
  • Clear contractual exit strategies with data portability provisions.
  • Explicit liability allocation for AI-related incidents.
  • Regular vendor governance assessments with industry benchmarking.
  • Backup systems for operational continuity that don't depend on AI (see the fallback sketch after this list).
  • Preservation of human capabilities for vendor-independent operation.
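
The backup-systems factor is the easiest to make concrete. The sketch below assumes a hypothetical vendor_client object with a score() method (both invented for illustration, not any real vendor's API) and falls back to deterministic, human-auditable rules when the vendor system fails. The design point is that the continuity path must not depend on a second AI system, and the fallback rules must stay maintained even while the vendor system is working.

```python
# Minimal sketch of an AI-independent continuity path. The vendor_client
# interface is a hypothetical placeholder, not a real vendor API.

def score_with_rules(application: dict) -> str:
    # Deterministic, human-auditable fallback logic (thresholds invented
    # for illustration; real rules would come from credit policy).
    if application.get("debt_to_income", 1.0) < 0.35:
        return "refer_to_underwriter"
    return "decline_pending_review"

def score_application(application: dict, vendor_client) -> str:
    try:
        # Placeholder call: substitute whatever the vendor actually exposes.
        return vendor_client.score(application)
    except Exception:
        # Vendor outage, policy change, or model withdrawal: continuity
        # rests on the non-AI path, not on a second AI system.
        return score_with_rules(application)
```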

Strategic Partnerships: Shared Expertise, Shared Complexity

Strategic partnerships combine vendor expertise with greater collaboration than standard commercial relationships. Partners often provide access to advanced capabilities while allowing input into governance approaches and maintaining some organizational decision-making autonomy.

Partnership governance requires alignment between organizations with different risk tolerances and ethical frameworks. When incidents occur, determining responsibility and coordinating response becomes complex — particularly when it's unclear which organization's judgment should prevail in ambiguous situations.

Key Success Factors:

  • Clear governance boundary definitions with decision-making authority specifications.
  • Aligned ethical frameworks with dispute resolution mechanisms.
  • Joint incident response procedures with defined roles (see the role-mapping sketch after this list).
  • Regular governance alignment reviews addressing evolving requirements.
  • Shared accountability metrics incentivizing collaboration.
  • Preservation of independent decision-making capacity by each partner.
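
To show what "defined roles" can look like in practice, the mapping below assigns each incident-response step to a party. Every assignment is hypothetical; a real partnership would bind these roles to named teams and contractual obligations. Note that final decision authority stays with the deploying organization, consistent with preserving independent decision-making capacity.

```python
# Illustrative role mapping for joint incident response. All assignments
# are hypothetical examples, not recommended or standard allocations.
INCIDENT_ROLES = {
    "detect":       "either_party",          # first observer triggers response
    "contain":      "operating_party",       # whoever runs the affected system
    "notify_users": "customer_facing_party",
    "root_cause":   "joint",                 # shared post-incident review
    "final_call":   "deploying_party",       # decision authority stays in-house
}

def responsible_party(step: str) -> str:
    # Unmapped steps escalate rather than defaulting silently to one side.
    return INCIDENT_ROLES.get(step, "escalate_to_governance_board")
```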

The Hybrid Reality

Most organizations use all three approaches simultaneously: building for competitive advantages, buying for common functions, and partnering for specialized capabilities. This creates governance complexity requiring different oversight mechanisms for each deployment model while maintaining consistent organizational standards.

Given this reality, the most resilient governance frameworks are those that are modular and flexible enough to apply different controls to different deployment models while maintaining consistent oversight and preserving institutional capacity for independent judgment across all AI implementations.
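
One way to operationalize that modularity is a control registry keyed by deployment model, so a single oversight process can check every AI system against the controls its model requires. The sketch below uses control names drawn loosely from the lists above; both the names and the groupings are placeholders, not a canonical taxonomy.

```python
# Minimal sketch of a modular control registry: one oversight interface,
# different required controls per deployment model. Control names are
# illustrative placeholders drawn from the lists in this article.
REQUIRED_CONTROLS = {
    "build":   {"red_team_testing", "ethics_review", "behavior_monitoring"},
    "buy":     {"vendor_audit", "exit_strategy", "non_ai_backup"},
    "partner": {"governance_boundaries", "joint_incident_response"},
}

def governance_gaps(deployment_model: str, implemented: set) -> set:
    # Returns the controls still missing for a given deployment model.
    return REQUIRED_CONTROLS[deployment_model] - implemented

# Example: a bought system with only a vendor audit in place still lacks
# an exit strategy and a non-AI backup.
print(governance_gaps("buy", {"vendor_audit"}))
```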

The key insight is that regardless of deployment strategy, organizations must consciously preserve their capacity to think, analyze, and decide without AI mediation.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
