CISOs, compliance officers, corporate boards, and other senior executives are quick to worry these days about the risks of artificial intelligence – but fear not! Some of the brightest minds in technology have built a tool to help the rest of us worry about AI risks at scale.
The tool is the AI Risk Repository, an open-source database of risks related to artificial intelligence, compiled by technology gurus at the Massachusetts Institute of Technology. The repository catalogs more than 700 risks that a corporation might encounter when using AI, pulled together from dozens of risk management frameworks.
Corporate risk management teams can sift through the repository, decide which risks apply most to their own organization, and then use those risks as fuel to drive the AI risk management program.
Ethics and compliance leaders can do the same with a more specific focus. You can use the repository to help you articulate the ethical risks that might confront your company as you start using AI, so that you and your management team can devise the right responses to those risks.
Five ethics questions about using AI
I spent an afternoon searching the repository for references to the word "ethical," and found 129 in total. Obviously, each company would need to narrow those 129 instances down to a smaller set of risks relevant to its own organization – but we can also boil those 129 instances down into five more fundamental questions about AI that every organization should ponder before rushing headlong into the AI revolution.
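For compliance teams that want to replicate that kind of search themselves, here's a minimal sketch in Python. It assumes the repository has been downloaded as a CSV file; the file name and the approach of scanning every column are illustrative assumptions, not the repository's official tooling.

```python
# A minimal sketch of the keyword search described above, assuming the
# AI Risk Repository has been downloaded as a CSV file. The file name
# "ai_risk_repository.csv" is an illustrative assumption.
import pandas as pd

risks = pd.read_csv("ai_risk_repository.csv")

# Scan every column, case-insensitively, for the word "ethical".
mask = risks.apply(
    lambda col: col.astype(str).str.contains("ethical", case=False)
).any(axis=1)

ethical_risks = risks[mask]
print(f"{len(ethical_risks)} entries mention 'ethical'")
```

From there, each flagged entry can be reviewed by hand to decide whether it belongs on your organization's shortlist.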
Question 1: Is the use case we've identified for AI ethical?
This might be the most fundamental question of them all: just because artificial intelligence might allow your company to do something, should you actually do it? Does that use case align with the ethical values of your company, your customer base, and the countries where you do business? Or would using AI expose your company to criticism that you've lost your way?
For example, facial recognition could be an ethical use case if your organization works in law enforcement and is trying to identify terrorists. Then again, if you're a retailer simply looking to spot potential shoplifters or returning customers as they enter your store, it probably isn't.
Question 2: Have we designed our AI systems with ethics and compliance in mind?
Say you want to use AI to generate product recommendations or to decide customers' credit applications. How do you ensure that your AI system doesn't consume improper data at the start (say, by using personal data without user consent) or make flawed decisions at the end (such as discriminating against certain minority populations, even by accident)?
Companies can guard against those errors through careful oversight and thoughtful design of internal controls. The question, however, is whether you've identified those potential risks in advance so you can design your AI system to be "ethics and compliance oriented" from the start.
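To make that concrete, here's a minimal sketch of one such internal control: a pre-deployment disparate-impact check that compares a credit model's approval rates across groups. The sample data, column names, and the four-fifths (80 percent) threshold are illustrative assumptions, not a legal standard for any particular jurisdiction.

```python
# A minimal sketch of a disparate-impact check on model decisions.
# The data, column names, and 0.8 threshold are illustrative assumptions.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group, and the ratio of the lowest to the highest.
rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the common "four-fifths" rule of thumb
    print("Potential adverse impact -- escalate for human review")
```

A check like this is only one control among many, but it illustrates the point: the test has to be designed into the system before it goes live, not bolted on afterward.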
Question 3: Will people use our AI system ethically?
Your AI system does not exist in a vacuum. Real people will use it: employees, customers, business partners, and others. You'll need to think about how they might use it, and how to guide those interactions so they're consistent with your company's ethical values and its compliance obligations.
Question 4: Could the AI itself learn to behave in unethical or non-compliant ways?
This isn't a far-fetched question. AI systems learn by consuming vast amounts of data created by humans, and humans aren't perfect. In one famous case, for example, an AI system used in hiring learned to discriminate against female software engineers: because most engineers historically had been men, the system started discriminating against graduates of women's colleges.
You'll need to think about how the AI system might pick up bad habits, and what controls you'd need in place to bring it back onto the right path.
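As one illustration of what such a control might look like, here's a minimal sketch that monitors a hiring model's selection rates by gender over time and flags any large gap. All names, sample data, and the ten-percentage-point alert threshold are illustrative assumptions.

```python
# A minimal sketch of an ongoing drift-monitoring control: track a hiring
# model's selection rate by gender each month and flag large gaps.
# The data and the 0.10 threshold are illustrative assumptions.
import pandas as pd

outcomes = pd.DataFrame({
    "month":    ["2025-01"] * 4 + ["2025-02"] * 4,
    "gender":   ["F", "F", "M", "M"] * 2,
    "advanced": [1, 0, 1, 1, 0, 0, 1, 1],
})

# Selection rate per month and gender, then the gap between genders.
monthly = outcomes.groupby(["month", "gender"])["advanced"].mean().unstack()
monthly["gap"] = (monthly["M"] - monthly["F"]).abs()

for month, gap in monthly["gap"].items():
    if gap > 0.10:  # escalate if the gap exceeds 10 percentage points
        print(f"{month}: selection-rate gap of {gap:.0%} -- review the model")
```

The specific metric matters less than the discipline: decide in advance what "off the right path" looks like, measure it continuously, and route alerts to someone with the authority to act.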
Question 5: What is our liability if the AI causes an ethics or compliance violation?
Despite all the precautions that might come from answering the previous four questions, your company might ultimately suffer some sort of AI-induced ethics and compliance violation anyway. In that case, has the company's legal team analyzed the potential consequences? Has the finance team modeled potential costs? Has the risk management team investigated potential insurance coverage?
To answer the ethics questions, start with governance
The five ethics questions above cover an enormous range of issues. No compliance and ethics officer could answer them alone, because no single person has all the necessary technical, operational, and legal expertise.
Well, that's the point. Those five ethics questions show how organizations need to take a "whole of enterprise" approach to artificial intelligence, where senior management...
- brings together the right voices; so that
- the organization can reach a consensus on what its AI risks are; and then
- decide on the appropriate guardrails to put in place.
That's governance, and it provides the crucial foundation for all the policies, procedures, audits, and internal controls that come later.
So as your company moves into the AI world, compliance officers have a unique opportunity to nudge that journey in a more "ethics and compliance aware" direction. Talk with senior management and the board about the need for good governance. Build stronger relationships with the other business functions (technology, legal, IT security, finance, and more) that will play a role in your adoption of AI. After all, they're likely to be the ones serving alongside you on whatever AI steering committee your company establishes.
Once that steering committee is up and running, and answering the big questions about AI's ethical risks, you can then move on to more specific risks. As MIT's AI Risk Repository shows, there are plenty of those risks to go around – but if you can't answer the ethics questions first, everything else gets a lot harder later.