22 December 2025

We Are Hallucinating In An AI Hyperreality

AlixPartners

Contributor

AlixPartners is a results-driven global consulting firm that specializes in helping businesses successfully address their most complex and critical challenges.
By Rob Hornby

Perhaps it is just overexposure or end-of-year fatigue, but, to me, the AI debate seems to be increasingly unhinged.

I have recently attended several AI forums, where incredible claims were presented as if they were mundane facts.

Examples include the imminent arrival of artificial general intelligence (AGI), the already widespread business use of sophisticated autonomous agents, generative platforms that are driving such prolific productivity gains that many companies have stopped hiring, and viable quantum computing by the end of the decade.

However, 1) I do not see any evidence of this happening on the ground, and 2) there is a lack of corroborating scientific research.

In fact, the AAAI 2025 Presidential Panel found that 76% of surveyed AI researchers believed the scaling up of large language models (LLMs) was "unlikely" or "very unlikely" to ever achieve AGI; multiple studies show that most agentic AI deployments remain confined to pilots, and even then with high failure rates; Yale's Budget Lab finds no AI-driven disruption of the overall U.S. labour market so far, despite some nascent signals in specific occupations; and the scientific consensus places viable quantum computing at 2035-2040.[1]

This gap between claim and evidence initially mystified me, but then I had a lightbulb moment: the public discourse on AI is behaving exactly like a large language model.

Yes, I know this sounds like I need to immediately step away from my computer and go for a walk in the woods – but let me explain.

How LLMs create a representation of 'reality'

In simple terms, LLMs are vast, multi-dimensional spaces where each token – the basic building block of language – occupies its own unique position (if tokens are too confusing, just think words instead).

Tokens that frequently appear together in training data form clusters within this virtual space. When prompted, the trained model tends to select tokens from clusters close to those in the prompt to generate a response.

In essence, repeated language patterns form an internal map of reality, which is used to provide responses to prompts. This can be depicted in shorthand as: repetition ➔ representational reality ➔ response.
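For the technically curious, the "nearby clusters" idea above can be sketched in a few lines of Python. This is a deliberately toy illustration, not how any real model works: the embedding coordinates are invented, the space is two-dimensional rather than thousands, and a simple cosine-similarity lookup stands in for the model's actual machinery.

```python
import math

# Toy 2-D "embedding space": tokens that co-occur often in training
# data sit close together. All coordinates are invented for illustration.
embeddings = {
    "king":  (0.9, 0.8),
    "queen": (0.85, 0.75),
    "crown": (0.8, 0.9),
    "apple": (-0.7, 0.3),
    "fruit": (-0.65, 0.35),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means pointing the same way, -1.0 opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest_tokens(prompt_token, k=2):
    """Return the k tokens closest to the prompt token -- a crude
    stand-in for how a model favours clusters near the prompt."""
    scores = {
        token: cosine(embeddings[prompt_token], vec)
        for token, vec in embeddings.items()
        if token != prompt_token
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(nearest_tokens("king"))  # → ['queen', 'crown']
```

Note that nothing in this lookup checks whether "queen" or "crown" is true or relevant, only that they sit near "king" in the learned space – which is precisely why repetition, not reality, shapes the output.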

The challenge is that LLMs have no means of verifying whether their repeated patterns correspond to the real world. AI can just as easily be trained on fantasy data depicting an imagined realm, as platforms like NovelAI do for entertainment purposes.

However, in most situations, the model is intended to represent the verifiable universe, at least approximately. Outputs that arise from gaps between the internal representation and the external reality it is supposed to describe are called hallucinations, and most people consider them a significant problem.

If the representation versus reality gap becomes a chasm, we drift into what Jean Baudrillard called "hyperreality" – a world where representations refer only to other representations, not to anything real.

How the AI debate mimics an LLM

Now, back to my theory on the AI debate. If repetition ➔ representational reality ➔ response is valid, then repeated patterns of inflated, inaccurate, and ungrounded claims about AI have created a distorted language map of the entire subject, which in turn leads to the wrong outputs and actions.

Well-informed contributions rooted in empirical data have been overwhelmed by exaggerated pitches, futurist imaginations, magical thinking, media hyperbole, and even science fiction. When hype is dominant and repeated, it soon becomes the common narrative and eventually an illusion of fact.

As a result, we are hallucinating in an AI hyperreality.

This matters because individuals, companies, and governments are making consequential decisions about where to invest, legislate, regulate, and educate based on these distorted narratives.

Retraining our mental models

Just as with an LLM, addressing this problem requires retraining, and I believe the process begins at an individual level.

Critically examining the sources of our AI information is a good first step. Narratives from the AI sector are most prone to self-serving overreach. Mainstream technology journalism is often excellent but sometimes prioritises personalities over analysis, ignores mainstream business, and fails to hold vendors accountable.

Sane alternatives at the optimistic end of the spectrum include Ethan Mollick, Stanford HAI, and the MIT Initiative on the Digital Economy. Specific anti-hype perspectives can be found at AI as Normal Technology, and in blogs by independent experts such as Gary Marcus and Rodney Brooks. The Stanford Digital Economy Lab and the aforementioned Yale Budget Lab offer hard-headed analysis of AI productivity and job impacts.

These sources do not necessarily agree, but they all adhere to a form of evidence-based, rational discussion.

What we can do

Once we have narrowed our own hyperreality gap, business leaders can:

  1. Engage in public discussions seeking evidence, rational argument, and direct experience. This does not need to be any more confrontational than asking "How did you reach that conclusion?"
  2. Scrutinise internal AI proposals on the same evidence-first basis, then collect rigorous data during pilots before authorising any scaled rollout.
  3. Ask vendors and consultants to provide cases and references when pitching. Ask to speak to the CFO of a previous client to check the benefits.
  4. Find a place to have honest dialogue with peers and advisors in which there is no distorting vested interest.
  5. Encourage "constructive internal dissent" in our organisations to make it easier to speak truth to power on AI.

Conclusions

To be clear, I am not an AI sceptic and believe we are at the start of another genuine technology-led revolution. I also know that current AI can drive productivity if applied correctly. All I am advocating for is a public debate that represents a true reflection of progress, not where we imagine or wish ourselves to be.

The opposite of a hallucination is apparently "veridical perception", which is not very catchy, so I will happily settle for a "hinged" AI discussion instead.

That way, repetition ➔ real reality ➔ informed responses and actions.

Footnote

1 Even this estimate requires caution, given that quantum computing has been 10-20 years away for my whole working life. Progress is significant, but the technical challenges are too.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
