What is the difference between 'high' and 'systemic' AI risk?
As artificial intelligence keeps marching into the corporate world, compliance and risk management teams must grapple with its many risks. That leaves compliance officers with an important question to answer right away.
Exactly what qualifies as one of those "systemic" or "high" AI risks that you're supposed to worry about most?
For example, in the European Union's recently released General-Purpose AI Code of Practice, the chapter on safety and security (one of three chapters) mentions "systemic risk" 295 times across 40 pages – including seven instances in the first paragraph alone. And that's not to be confused with Article 6 of the EU AI Act itself, which specifies that a larger class of "high-risk AI systems" needs special attention.
Or consider the Colorado AI Act, enacted last year. It targets "high-risk" AI systems; any company that develops or uses such systems must perform extensive testing of the system and make fuller disclosures about what the system does. Utah's new AI law imposes similar disclosure requirements. Other states are sure to follow.
So, what do those phrases mean in concrete terms, at actual businesses developing actual AI-driven products?
Compliance officers need a clear sense of the answer. Otherwise you'll never be able to take a risk-based approach to guiding AI adoption at your enterprise, and you'll be flirting with non-compliance with AI laws to boot.
Defining systemic and high risks
The EU AI Act and other AI laws do provide definitions of systemic and high AI risks, although those definitions are couched in abstract terms that aren't always helpful to compliance and audit teams facing real AI use cases.
For example, the EU AI Act says systemic risks are "risks of large-scale harm" such as major accidents, disruptions to critical infrastructure or public health, "reasonably foreseeable negative effects on democratic processes, public and economic security," or the dissemination of illegal, false, or discriminatory material.
The EU AI Act also says systemic risks mostly arise from advanced, general-purpose AI systems, which right now means the large language models behind tools such as ChatGPT, Claude, Perplexity, and the like. But the law warns that "some models below the threshold reflecting the state of the art may also pose systemic risks, for example through reach, scalability, or scaffolding."
Meanwhile, the EU AI Act has a separate definition for high-risk AI systems, based on how the AI is used. For example, any AI system working on biometric identification, critical infrastructure, education, employment, or law enforcement could qualify as high risk, depending on its exact purpose. (The details are spelled out in Annex III of the law, if you'd like to know more.)
So, the EU AI Act defines multiple types of AI risk – some based on the potential damage your AI system could cause (systemic), others based on the purpose your AI system serves (high).
Among U.S. states, a good example is the Colorado AI Act, which goes into effect in early 2026. Based on consumer protection law, it defines "high risk" AI systems as any artificial intelligence system that can make a "consequential decision" about:
- Education or employment opportunities (such as AI deciding whether you get a job or admitted to a certain school)
- Financial services (for example, whether you can open a bank account, or what the interest rate on your loan will be)
- Access to healthcare, legal, or government services
- Related consumer services
Developers of high-risk systems must make extensive disclosures to their customers (presumably other businesses) about the testing they've done to prevent algorithmic discrimination, as well as disclosures about how the systems should be operated, the intended outputs, data governance and more.
Deployers of high-risk systems (companies using them to interact with consumers) must notify consumers when they're interacting with high-risk AI systems, and explain how consumers can challenge an AI system's adverse decision (say, a decision not to hire them).
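To make those definitions a bit more concrete, here is a minimal, illustrative Python sketch of how a governance team might triage a proposed AI use case against the high-risk categories summarized above. The category lists, class names, and flags are hypothetical simplifications for illustration, not the statutory text.

```python
# Illustrative triage sketch: map a proposed AI use case to the risk
# categories discussed above. The category lists are simplified summaries,
# not the legal definitions.

from dataclasses import dataclass

# High-risk use areas summarized from the EU AI Act's annex (simplified).
EU_HIGH_RISK_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education",
    "employment",
    "law enforcement",
}

# Domains where the Colorado AI Act treats a "consequential decision"
# as high risk (simplified summary).
COLORADO_CONSEQUENTIAL_DOMAINS = {
    "education",
    "employment",
    "financial services",
    "healthcare",
    "legal services",
    "government services",
}

@dataclass
class AIUseCase:
    name: str
    domain: str                          # e.g., "employment"
    makes_consequential_decision: bool   # does it decide outcomes for people?
    general_purpose_model: bool = False  # built on a general-purpose model?

def triage(use_case: AIUseCase) -> list[str]:
    """Return the regulatory flags a governance committee should review."""
    flags = []
    if use_case.domain in EU_HIGH_RISK_AREAS:
        flags.append("Potential EU AI Act high-risk system: check Annex III scope")
    if (use_case.makes_consequential_decision
            and use_case.domain in COLORADO_CONSEQUENTIAL_DOMAINS):
        flags.append("Potential Colorado AI Act high-risk system: disclosure and appeal duties")
    if use_case.general_purpose_model:
        flags.append("Review systemic-risk guidance for general-purpose models")
    return flags

if __name__ == "__main__":
    screening_tool = AIUseCase(
        name="Resume screening assistant",
        domain="employment",
        makes_consequential_decision=True,
    )
    for flag in triage(screening_tool):
        print(flag)
```

A lookup like this is only a first-pass screen; the actual determinations require legal review of the statutory text itself.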
From definitions to AI governance and compliance
Those are the definitions. The challenge for compliance, audit, and risk management teams is to develop a system of governance that can help you compare your company's specific AI plans against those definitions, so you can then apply the appropriate policies, procedures, and controls.
Every organization will need to develop its own governance structure, of course; but we can identify a few basic steps all governance structures will need to take.
First, bring the right people into the conversation. Any AI governance committee you establish (or any compliance committee you already have that now addresses AI concerns too) will always need several voices: the technology team, the cybersecurity team, legal, compliance, and usually finance and HR. Then consider who else should be part of that committee, based on which parts of the enterprise might be experimenting with AI. That could include the sales or marketing teams, product development, or perhaps others.
Don't forget to consider rank-and-file employees, who might be using ChatGPT or other publicly available AI tools to get their everyday work done. You might be so focused on specific, internal use cases that you overlook those under-the-radar risks.
Second, sharpen your risk assessment process to include the systemic and high-risk categories. For example, systemic risks are those that could cause damage to the wider world if the AI system goes wrong – so how would you quantify that? Which AI risk repositories would you use? (MIT maintains a comprehensive one, open-source and freely available.) What policies would you need to prevent AI adoption before a risk assessment is performed? What internal tools would you need to document your assessments?
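As one way to picture that documentation step, here is a minimal sketch of an internal assessment record, assuming a simple likelihood-times-impact scoring model. The field names, scales, and escalation threshold are hypothetical choices, not anything prescribed by the laws discussed above.

```python
# Illustrative sketch of an internal risk-assessment record, assuming a
# simple likelihood-times-impact scoring model; field names are hypothetical.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiskAssessment:
    system_name: str
    risk_category: str            # "systemic", "high", or "other"
    likelihood: int               # 1 (rare) to 5 (almost certain)
    impact: int                   # 1 (minor) to 5 (large-scale harm)
    repository_refs: list[str] = field(default_factory=list)  # e.g., entries from an AI risk repository
    assessed_on: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    def requires_escalation(self, threshold: int = 15) -> bool:
        """Escalate to the governance committee above a scoring threshold,
        or whenever the system is categorized as systemic."""
        return self.risk_category == "systemic" or self.score >= threshold

# Example: a low-stakes internal tool that would not trigger escalation.
assessment = AIRiskAssessment(
    system_name="Internal meeting summarizer",
    risk_category="other",
    likelihood=3,
    impact=2,
)
print(assessment.score, assessment.requires_escalation())
```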
Third, have a process to decide which controls are necessary and then to implement them. You might have high-risk AI systems under the EU AI Act that require more extensive testing; you'll need a process to perform those tests and document the results. You might have high-risk systems under the Colorado law (or other state laws coming soon) that require more disclosures and "right of appeal" processes; you'll need to ensure those steps happen too.
Various risk management frameworks can help with those tasks, or you can build your own. The bigger issue is that you'll need a system that guides audit, risk, and compliance teams through those framework steps. So, what tools might you want to help with gap analysis, remediation, and alerting?
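For instance, a bare-bones gap analysis might look something like the sketch below, which maps each required control for a system to an implementation status and flags anything not yet in place for remediation or alerting. The control names and statuses are invented examples, not requirements drawn from any particular framework.

```python
# Illustrative gap-analysis sketch: track required controls per AI system
# and report which are not yet implemented. Control names are examples only.

from enum import Enum

class ControlStatus(Enum):
    NOT_STARTED = "not started"
    IN_PROGRESS = "in progress"
    IMPLEMENTED = "implemented"

# Example control register for one high-risk system.
control_register = {
    "bias and performance testing documented": ControlStatus.IMPLEMENTED,
    "consumer disclosure of AI interaction": ControlStatus.IN_PROGRESS,
    "adverse-decision appeal process": ControlStatus.NOT_STARTED,
}

def gap_report(register: dict[str, ControlStatus]) -> list[str]:
    """Return controls that still need remediation, for alerting or review."""
    return [name for name, status in register.items()
            if status is not ControlStatus.IMPLEMENTED]

for gap in gap_report(control_register):
    print(f"Remediation needed: {gap}")
```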
These are the risk management challenges compliance teams will face as AI continues to gain steam. The sooner you can tackle them, the better.
The EU AI Act is the first of what will soon be many regulations governing the use of AI. Whether or not the AI Act applies to your organization today, understanding its risk categories is useful preparation for the AI regulations still to come.