ARTICLE
23 October 2025

Risks Of AI In The Workplace: Weaving Ethical Risks Into Your AI Governance

NAVEX

Contributor

NAVEX is trusted by thousands of customers worldwide to help them achieve the business outcomes that matter most. As the global leader in integrated risk and compliance management software and services, we deliver our solutions through the NAVEX One platform, the industry’s most comprehensive governance, risk and compliance (GRC) information system.

Ethical risks of AI are often discussed in terms of bias, privacy, and accountability. But new research shows how AI in the workplace can create unexpected challenges.

Ethical risks of AI in the workplace

Compliance officers spend lots of time pondering how artificial intelligence might change the way their companies approach compliance and risk management. An intriguing new study, however, reminds us that we also need to consider how a corporation's use of AI might challenge employees' ethical behavior – highlighting the risks of AI in the workplace that compliance teams can't ignore.

The study was published in September in the esteemed research journal Nature. Through a series of clever experiments, social scientists found that when people use AI to complete various tasks, they're more likely to engage in unethical behavior – sometimes, a lot more likely.

If that's true (and there's ample reason to suspect it is), compliance officers need to pay more attention to how AI systems are adopted throughout their organizations – as well as to the policies, training, disciplinary enforcement, and executive messaging they might need in place to ensure employee conduct stays on the ethical path.

Study shows dishonest behavior increases when AI is used

This research highlights one of the most overlooked ethical risks of AI in the workplace: how delegating to AI can increase dishonest behavior. Let's first look at the research itself. Scientists had 8,000 people perform a series of 13 tests. In each test, participants used AI to report the results, with varying degrees of control over how the AI reported them.

For example, in one test, people had to roll dice and report the number that turned up. The higher the number, the more the person was paid. When people had to report the number themselves, roughly 95% were honest about what they rolled. When people used an AI to report the number, however, only 75% had the AI report the correct number. The rest lied to the AI and reported a number that would give them more money.
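
To get a feel for what that gap could mean in money terms, here is a toy back-of-the-envelope calculation, not taken from the study itself: it uses the honesty rates quoted above and assumes, purely for illustration, that every dishonest report claims the maximum roll of six.

    # Toy model of the die-roll task. An honest report of a fair die
    # averages 3.5; a dishonest report is assumed here (an illustrative
    # assumption, not a finding of the study) to always claim 6.
    HONEST_EV = 3.5
    CHEAT_EV = 6.0

    conditions = {
        "report the roll yourself": 0.95,  # ~95% honest
        "report the roll via AI": 0.75,    # ~75% honest
    }

    for label, honest_rate in conditions.items():
        expected = honest_rate * HONEST_EV + (1 - honest_rate) * CHEAT_EV
        inflation = expected / HONEST_EV - 1
        print(f"{label}: expected reported value {expected:.2f} "
              f"({inflation:.0%} above the honest baseline)")

Under those assumptions, simply routing the report through an AI roughly quintuples the payout inflation, from about 4% to about 18%.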

In the worst example, people only had to tell the AI what goal to pursue. That is, they could tell the AI, "Report the number that's most accurate," or "Report the number that earns me more money." Eighty-four percent told the AI to report a number that earned them more money, and more than one-third directed the AI to always report the number that earned them the most.

This points to a serious ethical risk of AI in the workplace, where AI systems can unintentionally enable dishonest behavior.

A valuable clue about all this unethical behavior is within the very title of the research paper: "Delegation to artificial intelligence can increase dishonest behavior."

Delegation to artificial intelligence can increase ethics risk

The insight is that when humans can delegate a task to AI, that's when they're more likely to indulge in unethical behavior. The AI system creates a layer between employee and organization, so it's easier for the human to disconnect from the ethical dimensions of the task at hand.

When people had to report what they rolled directly to researchers, almost everyone was honest. When they had to report it through AI, 75% were honest. When they could configure the AI any way they wanted, almost everyone abandoned honest reporting. The more distance the AI put between the person and the organization, the less ethical the person was.

As the researchers themselves wrote, "Using AI creates a convenient moral distance between people and their actions – it can induce them to request behaviors they wouldn't necessarily engage in themselves, nor potentially request from other humans."

This moral distancing effect represents a key risk of AI in the workplace, one that governance teams must address in their AI governance programs.

Fitting ethics risks into AI governance

This emerging issue is also a significant AI governance risk, requiring integration into policies, charters and risk assessments. The good news is that most compliance officers already play at least some role in deciding how AI is used at their organizations as part of broader AI governance, risk and compliance. According to the NAVEX 2025 State of Risk and Compliance Report, 33% of compliance officers are "very involved" in AI discussions and another 32% are "somewhat involved."

Now the question is how you fit concerns about unethical behavior – or more precisely, the potential for an AI system to tempt people into unethical behavior – into your governance program and risk assessments.

For example, if your AI governance committee has a charter (it should), that charter should require that all AI use-cases up for consideration include a discussion of how that use-case might increase the risk of unethical conduct. You could then ask questions such as:

  • How could employees use this new AI system to cheat on their business goals?
  • If they could somehow cheat, what monitoring, audits, or compensating controls could we use to intercept that behavior?
  • Should we change employees' incentives to reduce the temptation to cheat?
  • What metrics would help us understand whether employees are abusing the AI somehow? Are we able to track those metrics? (See the sketch after this list.)
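
As one illustration of that last question, here is a minimal, hypothetical sketch of what such a metric could look like: a scan of AI prompt logs that flags goal-level delegation language of the kind the researchers found most corrosive. The log format, phrase list, and function names are assumptions for illustration, not features of any particular product.

    # Hypothetical compensating control: flag prompts that delegate a
    # goal ("earn me the most") rather than a task ("report my number").
    # The patterns and log format below are illustrative assumptions.
    import re

    GOAL_DELEGATION_PATTERNS = [
        r"\bmaximi[sz]e\b",
        r"\bwhatever (it takes|earns|gets)\b",
        r"\b(most|highest) (money|revenue|bonus|score)\b",
    ]

    def flag_goal_delegation(prompt_log: list[dict]) -> list[dict]:
        """Return log entries whose prompt matches a goal-delegation pattern."""
        flagged = []
        for entry in prompt_log:  # each entry: {"user": ..., "prompt": ...}
            text = entry["prompt"].lower()
            if any(re.search(p, text) for p in GOAL_DELEGATION_PATTERNS):
                flagged.append(entry)
        return flagged

    # Example: route flagged entries to periodic compliance review.
    sample_log = [
        {"user": "a.smith", "prompt": "Report this quarter's numbers accurately."},
        {"user": "b.jones", "prompt": "Report whatever earns me the highest bonus."},
    ]
    for hit in flag_goal_delegation(sample_log):
        print("Review:", hit["user"], "->", hit["prompt"])

The rate of flagged prompts per team, tracked over time, is the kind of metric an AI governance committee could realistically review.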

These considerations align with broader compliance strategies for mitigating AI risks addressed in NAVEX guidance.

While employees might be the primary group deserving your attention, they aren't the only group that creates risks of AI in the workplace. You should also assess whether AI might entice suppliers, business partners, or even customers to behave unethically. This is especially important considering broader systemic AI risks that can ripple across organizational boundaries.

For example, if you allow an AI chatbot to interact with vendors about new contracts or unpaid invoices, what's the risk that a deviously phrased question might prompt the chatbot to disclose confidential information? If it interacts with customers, could they potentially trick the bot into giving them multiple refunds? And so forth.
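
One compensating control for the refund scenario is to keep the final action outside the chatbot entirely: the bot can propose a refund, but a deterministic policy check, which no amount of clever phrasing can talk around, makes the decision. The sketch below is a hypothetical illustration of that pattern; the names and limits are assumptions, not any vendor's actual API.

    # Hypothetical guardrail: the chatbot may *propose* a refund, but a
    # deterministic policy layer outside the model decides. Names and
    # limits are illustrative assumptions.
    from dataclasses import dataclass

    MAX_REFUND = 200.00            # illustrative per-request cap
    MAX_REFUNDS_PER_ORDER = 1      # illustrative limit per order

    @dataclass
    class RefundRequest:
        customer_id: str
        order_id: str
        amount: float

    issued: dict[str, int] = {}    # order_id -> refunds already issued

    def approve_refund(req: RefundRequest) -> bool:
        """Policy check the chatbot cannot override, however it is prompted."""
        if req.amount > MAX_REFUND:
            return False
        if issued.get(req.order_id, 0) >= MAX_REFUNDS_PER_ORDER:
            return False           # blocks "give me another refund" tricks
        issued[req.order_id] = issued.get(req.order_id, 0) + 1
        return True

    # Example: a chatbot-proposed refund is only honored if policy agrees.
    req = RefundRequest(customer_id="c-123", order_id="o-456", amount=75.00)
    print(approve_refund(req))     # True
    print(approve_refund(req))     # False: second refund on the same order

The point is architectural: however persuasive the conversation gets, the model's output is only a request, and the organization's controls, not the model, make the final call.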

The true question here is whether the business team's grand design for the AI system – "we want it to serve this role in our business process" – will create too much of that moral distance described by the researchers. The more AI becomes a layer between human and organization, the greater the chance the human will rationalize, "I'm not doing the bad thing; the AI is, and the company allows the AI to behave that way." That's the start of a slippery slope that leads to nowhere good.

Seize the issue

This is one of the most overlooked AI governance challenges. In a roundabout way, though, this emerging ethical risk can help ethics and compliance officers, because it's yet another example of why the compliance function must be part of your organization's AI governance.

You already have experience with risk assessments and ethical culture. You understand the practical details of internal controls, policy management, and monitoring to keep ethics and compliance risks in check.

Now we need to bring that expertise to the emerging risk of artificial intelligence – the sooner, the better.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
