Introduction
As companies explore the use and implementation of AI-related tools and platforms, they will inevitably need to consider the most appropriate way to regulate the use of AI, whether such tools and platforms are used internally or externally by their customers, partners and/or stakeholders. AI compliance is key to encouraging the growth of AI1. With the rising number of highly publicised stories of AI gone wrong, companies and even their employees may be cautious about using AI. Without a proper compliance framework, such caution may impede the adoption of AI within an organisation, as everyone becomes warier of the attendant risks, in particular any unknown minefields they may inadvertently trigger2. Further, an AI compliance and governance framework prepares a company to deal with incidents arising from the use of AI. Even though, as we explain in more detail in the section "How AI compliance may be different from regular compliance frameworks" below, there are no clear legislative or regulatory frameworks for regulating AI (save, to a certain extent, the EU AI Act), this does not mean that there are no repercussions where harm arises from the use of AI. Accordingly, an AI compliance and governance framework can help mitigate the losses and/or damages suffered by both companies and any victims harmed by the use of AI3.
The perennial question, however, remains: which AI governance and compliance framework is most suitable for my organisation?4 Is there a structured and formalised way to regulate the use of AI, and is there a single framework for doing so, or are there many different frameworks in the market that a company can consider? This article aims to help companies make sense of how to build an AI compliance framework for regulating the use of AI and to determine which approaches may be more suitable for their organisation.
How AI compliance may be different from regular compliance frameworks
Before we explore the different approaches, it is important to understand whether the approach to AI compliance differs from other compliance frameworks and, if so, how.
(1) Ever-changing risks in a perpetual loop of evolution
AI risks, unlike other types of compliance risks, do not remain static5. As the use of AI and AI models evolves (especially generative AI, which may "learn" as it is being utilised), the risks that can surface will change in type, intensity or both6. A comparison with trade sanctions illustrates the difference. Even though the scope of countries, individuals and organisations under sanctions may vary over time, the scope and assessment of the risks arising from those sanctions remain largely the same, as do the criteria for deciding which new entrants fall within scope. AI risks, by contrast, shift as the technology and its uses evolve.
(2) Touches on multiple aspects and is not limited to a single industry, practice or engagement
AI risks are not limited to any single industry and can surface wherever AI is utilised or implemented. This is unlike, for example, pharmacovigilance risks, which surface only in the pharmaceutical industry; money laundering risks, which are most prevalent in the financial industry and in industries with high-value transactions; or forced labour risks, which are more pronounced in manufacturing and mining. As a result of the diverse risks that AI brings, it is also difficult to identify a single regulatory authority responsible for regulating AI, which in turn has led some countries to take a sectoral rather than a generic approach towards regulating AI.
(3) Absent or immature legal and regulatory landscape for regulating AI
Save for the EU AI Act, no other country has a single overarching regulation for AI. In fact, most countries are using guidelines and codes to encourage self-regulation of AI rather than imposing hard laws7. Even in countries such as China, which have AI-related legislation, such legislation applies to a narrow scope of use cases rather than operating as broad AI legislation covering all use cases generally. Such an immature legal framework sets AI compliance apart from other compliance risks, which have decades of case precedents, legal interpretations and regulatory decisions to support the interpretation and implementation of the relevant laws.
All of the above supports the view that we cannot simply adopt the approach taken for existing compliance frameworks when rolling out AI compliance and governance. AI is a new enterprise risk that warrants a different approach and perspective in establishing the compliance and governance framework.
Principles-based v rules-based compliance
(1) Conceptual differences
Although the principles-based and rules-based compliance approaches are not new to the compliance world8, as mentioned above, their implementation in the context of AI compliance can differ, as we explore below.
The principles-based approach generally means that compliance frameworks are built around principles rather than rules9. Policies developed under this approach generally do not set out every rule; instead, they inform employees of the general approach to be taken towards certain risks10.
The rules-based approach, on the other hand (as the name suggests), sets out the detailed rules and steps to ensure compliance11. It may or may not provide the rationale for the rules, but compliance requires the detailed processes and steps to be followed.
(2) Applicability and suitability of the different approaches
For companies to determine whether to adopt a rules-based or a principles-based approach, it is important first to ascertain the organisation's level of maturity in the adoption and use of AI. This maturity level can be assessed by reference to the following aspects: (a) the technical knowledge of employees in relation to AI; (b) the frequency of use; (c) the extent to which AI is embedded in the organisation; (d) the use cases for which AI is being deployed; (e) the existing compliance framework for general enterprise risks; and (f) the governance structures, including reporting structures, for reviewing risks.
For companies assessed to have a low maturity level in relation to the use of AI, a rules-based approach will be much more suitable12, as employees in such organisations may not be able to make risk-based or value-based decisions on the use of AI and will require more guidance to identify, mitigate and navigate the risks that AI systems may pose.
Conversely, for organisations whose employees have a high level of maturity, a principles-based approach will be sufficient13, as such employees are able to appreciate the risks and need only enough guidance to ascertain the organisation's risk appetite, without being told prescriptively what is required of them.
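To make the assessment above more concrete, the sketch below shows one possible way of translating the six maturity aspects into a rough score and a suggested starting approach. It is a minimal, illustrative sketch only; the aspect names, ratings, equal weights and thresholds are our own assumptions rather than an established methodology, and any real assessment should be calibrated to the organisation.

```python
# Illustrative sketch only: the ratings, weights and thresholds below are
# assumptions, not an established AI maturity methodology.

MATURITY_ASPECTS = [
    "technical_knowledge",      # (a) employees' technical knowledge of AI
    "frequency_of_use",         # (b) how often AI is used
    "embeddedness",             # (c) extent to which AI is embedded in the organisation
    "use_case_risk_awareness",  # (d) understanding of the use cases AI is deployed for
    "existing_compliance",      # (e) maturity of the general enterprise risk framework
    "governance_structures",    # (f) governance and reporting structures for reviewing risks
]

def maturity_score(ratings: dict[str, int]) -> float:
    """Average the 1-5 ratings given to each aspect (equal weights assumed)."""
    return sum(ratings[aspect] for aspect in MATURITY_ASPECTS) / len(MATURITY_ASPECTS)

def suggested_approach(score: float) -> str:
    """Map a rough maturity score to a starting point for the compliance framework."""
    if score < 2.5:
        return "rules-based"        # low maturity: prescriptive steps and detailed SOPs
    if score < 4.0:
        return "hybrid"             # mixed maturity: see the 'Hybrid' section below
    return "principles-based"       # high maturity: high-level principles and risk appetite

if __name__ == "__main__":
    example_ratings = {
        "technical_knowledge": 2, "frequency_of_use": 3, "embeddedness": 2,
        "use_case_risk_awareness": 2, "existing_compliance": 4, "governance_structures": 3,
    }
    score = maturity_score(example_ratings)
    print(f"Maturity score: {score:.1f} -> suggested approach: {suggested_approach(score)}")
```

In practice, the weighting of each aspect and the cut-off points would be a matter of judgment for the organisation (and its advisers) rather than a formula.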
Depending on the approach, there may be implications for how certain aspects of the governance and compliance framework are established. We explore three main aspects of a typical compliance framework and consider how each can differ depending on the approach adopted.
Effects on AI compliance and governance frameworks
(1) Implications for the establishment of governance committees
An organisation that adopts a rules-based approach will likely have governance committees that exercise oversight in a top-down manner14. The governance committee is likely to play a more determinative, decision-making role in approving the use of AI within or outside the organisation. It is also likely to include more individuals who are subject matter experts in AI, and such individuals will have a more determinative say in the approach towards AI use cases and the use of AI platforms. There are also likely to be fewer intermediary steering committees and fewer governance layers, with the ultimate forum being the likely decision maker15.
Under the principles-based approach, by contrast, companies are likely to have multiple governance layers and less reliance on particular individuals as subject matter experts. The ultimate forum, which has oversight over the subsequent governance layers, is less likely to play a decision-making role in approving AI use cases and platforms; rather, it is likely to act more as an escalation forum where the forums at the local and regional levels cannot agree on whether to proceed with a particular AI project16. The ultimate forum may also have the right to veto certain projects notwithstanding that these have been approved at the local and regional levels. Project proposals and compliance policy changes are also more likely to be recommended from the bottom up, as a more mature audience will be able to provide greater feedback.
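Purely to illustrate the structural difference described above, the sketch below represents the two governance set-ups as simple escalation chains: a short, decision-making chain for the rules-based model and a longer, bottom-up chain for the principles-based model. The forum names are hypothetical placeholders, not a prescribed structure.

```python
# Hypothetical escalation chains illustrating the two governance set-ups;
# the forum names below are placeholders, not a recommended structure.

# Rules-based: few layers; the ultimate forum decides and approves use cases.
RULES_BASED_CHAIN = [
    "business unit (submits AI use case)",
    "AI governance committee (subject matter experts; approves or rejects)",
]

# Principles-based: more layers; decisions are taken locally and escalated only
# on disagreement, with a veto retained at the top.
PRINCIPLES_BASED_CHAIN = [
    "local forum (approves most use cases)",
    "regional forum (resolves cross-market questions)",
    "group oversight forum (escalation and veto only)",
]

def escalation_path(chain: list[str]) -> str:
    """Render an escalation chain as a readable path."""
    return " -> ".join(chain)

if __name__ == "__main__":
    print("Rules-based:", escalation_path(RULES_BASED_CHAIN))
    print("Principles-based:", escalation_path(PRINCIPLES_BASED_CHAIN))
```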
(2) Implications for policies and SOPs
Policies created under the principles-based approach are generally more high level, with greater focus on the rationale for the principles and perhaps some examples of how the principles can be crystallised, although none of the recommendations or examples are binding. In formulating the principles, a company may consider its own corporate values and refer to them in creating the principles for AI compliance17.
Rules-based policies, on the other hand, are generally more detailed and may provide that any breach of the steps set out in the policies or the standard operating procedures can result in disciplinary action. Such policies are also less likely to distinguish between scenarios, which means that a similar set of rules may apply even where the risks differ.
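To illustrate the contrast, the sketch below models a rules-based policy as an explicit, uniformly applied checklist and a principles-based policy as high-level principles with non-binding guidance. The checklist items and principles are invented examples of our own, not drawn from any particular policy.

```python
# Hypothetical illustration of the two policy styles; the specific rules and
# principles below are invented examples, not an actual policy.
from dataclasses import dataclass, field

# Rules-based: a fixed checklist that applies to every AI use case,
# regardless of the scenario or the level of risk involved.
RULES_BASED_CHECKLIST = [
    "Register the AI use case in the central inventory",
    "Obtain sign-off from the AI governance committee",
    "Complete a data protection impact assessment",
    "Document human review steps before outputs are relied upon",
]

def rules_based_check(completed_steps: set[str]) -> list[str]:
    """Return the mandatory steps that have not been completed (i.e. breaches)."""
    return [step for step in RULES_BASED_CHECKLIST if step not in completed_steps]

# Principles-based: high-level principles with non-binding guidance,
# leaving employees to apply them to the scenario at hand.
@dataclass
class Principle:
    statement: str
    guidance: list[str] = field(default_factory=list)  # illustrative, not mandatory

PRINCIPLES_BASED_POLICY = [
    Principle("AI use must be consistent with our corporate values and risk appetite",
              ["Consider whether the use case would surprise customers if disclosed"]),
    Principle("A human remains accountable for decisions informed by AI",
              ["Document who reviews material AI-assisted decisions"]),
]

if __name__ == "__main__":
    missing = rules_based_check({"Register the AI use case in the central inventory"})
    print("Outstanding mandatory steps:", missing)
    for principle in PRINCIPLES_BASED_POLICY:
        print("Principle:", principle.statement)
```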
(3) Implications for the roles and responsibilities of legal and compliance teams
Unsurprisingly, a different approach will also entail different roles and responsibilities for the legal and compliance teams: under a principles-based approach, the legal and compliance teams will likely play a more interpretative and facilitative role, whereas under a rules-based approach, they may be more involved in a policing role to ensure compliance with the prescriptive rules18.
As a result of these differences, the legal and compliance teams supporting the different approaches will require different skill sets. In a rules-based set-up, the legal and compliance teams will need to be able to perform appropriate reviews and audits of processes to ensure that they comply with the rules set by the organisation, whereas in a principles-based set-up, they may need a better grasp of the technical aspects of the AI platform in order to translate legal and compliance requirements into technical specifications and vice versa.
Hybrid
Unsurprisingly, it is unlikely that any organisation falls squarely into one of the above approaches, as maturity levels may vary across the various functions within an organisation. In such situations, it is important to adopt a hybrid approach in which the principles-based and rules-based approaches are selectively rolled out. For example, if the technology teams within an organisation are more mature in AI adoption, the policies that apply to them will likely be more principles-based, whereas the rest of the organisation may adopt a more rules-based approach.
Conclusion
It is important for a company not to rush into implementing an AI compliance and governance framework, but instead to take the time to review the maturity level of its employees and of the organisation. Such an assessment can be undertaken internally (if the relevant experience exists in-house) or externally with the support of lawyers who have such experience. Just as the saying goes, "don't use AI for the sake of AI"19, one should not implement a framework that is unsustainable and ill-suited to the organisation.
Footnotes
1 Amanda McGrath and Alexandra Jonker, "AI Compliance: What It Is, Why It Matters and How to Get Started", IBM, 17 April 2025, https://www.ibm.com/think/insights/ai-compliance.
2 "Generative AI Risks and How to Manage Them", The Wall Street Journal, 1 May 2025, https://deloitte.wsj.com/cio/generative-ai-risks-and-how-to-manage-them-75580c9a.
3 Tim Mucci and Cole Stryker, "What is AI governance?", IBM, 10 October 2024, https://www.ibm.com/think/topics/ai-governance.
4 Rodrigo Fernández, "How to Implement AI Compliance Frameworks for Generative AI Systems", NeuralTrust, 14 January 2025, https://neuraltrust.ai/blog/ai-compliance-frameworks.
5 Mark Schwartz, "Overseeing AI Risk in a Rapidly Changing Landscape", AWS Cloud Enterprise Strategy Blog, 22 July 2024, https://aws.amazon.com/blogs/enterprise-strategy/overseeing-ai-risk-in-a-rapidly-changing-landscape/.
6 Francesca Rossi, Michael Schoenstein and Stuart Russell, "AI's potential futures: Mitigating risks, harnessing opportunities", OECD.AI, 19 December 2024, https://oecd.ai/en/wonk/ai-potential-futures.
7 Carlos Ignacio Gutierrez and Gary Marchant, "How soft law is used in AI governance", The Brookings Institution, 27 May 2021, https://www.brookings.edu/articles/how-soft-law-is-used-in-ai-governance/.
8 Jonas Schuett, Markus Anderljung, Alexis Carlier, Leonie Koessler and Ben Garfinkel, "From Principles to Rules: A Regulatory Approach for Frontier AI", Centre for the Governance of AI, 10 July 2024, https://www.governance.ai/research-paper/from-principles-to-rules-a-regulatory-approach-for-frontier-ai.
9 Tania Van den Brande, "Rules-based versus principles-based regulation – is there a clear front-runner?", Ofcom, 3 August 2021, https://www.ofcom.org.uk/about-ofcom/what-we-do/rules-versus-principles-based-regulation.
10 Mithun A. Sridharan, "Why Leaders Should Follow Principles-Based AI Governance", Forbes, 17 December 2024, https://www.forbes.com/councils/forbestechcouncil/2024/12/17/why-leaders-should-follow-principles-based-ai-governance/.
11 Ronald JJ Wong, "Can and Should We Rein in AI with Law?", Law Society of Singapore, the Law Gazette, July 2023, https://lawgazette.com.sg/feature/can-and-should-we-rein-in-ai-with-law/.
12 Kristin Burnham, "What's your company's AI maturity level?", MIT Sloan School of Management, 25 February 2025, https://mitsloan.mit.edu/ideas-made-to-matter/whats-your-companys-ai-maturity-level.
13 Tomoko Yokoi and Michael R. Wade, "The leading companies in artificial intelligence may surprise you", IMD, 12 November 2024, https://www.imd.org/ibyimd/competitiveness/the-leading-companies-in-artificial-intelligence-may-surprise-you/.
14 Geoff Davies, "AI Adoption: How a "Top Down, Bottom Up" Approach Ensures Enterprise Success", Pivotal Edge.AI, 24 June 2024, https://pivotaledge.ai/blog/ai-adoption-top-down-bottom-up-approach.
15 Alex Singla, Alex Sukharevsky, Lareina Yee, Michael Chui and Bryce Hall, "The state of AI: How organizations are rewiring to capture value", McKinsey, 12 March 2025, https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai.
16 Andrew Gamino-Cheong, "The What, How, and Why of AI Governance", Spiceworks, 2 November 2023, https://www.spiceworks.com/tech/artificial-intelligence/guest-article/the-what-how-and-why-of-ai-governance/.
17 Amna Batool, Didar Zowghi and Muneera Bano, "AI governance: a systematic literature review", AI Ethics, https://link.springer.com/article/10.1007/s43681-024-00653-w.
18 Tosin Umukoro, "Making the shift to principles-based compliance programs", Compliance Cosmos, September 2021, https://compliancecosmos.org/making-shift-principles-based-compliance-programs.
19 Emily Clark, "Are you just using AI for AI's sake?", Startups, 10 June 2025, https://startups.co.uk/news/businesses-using-ai-improperly/.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.