11 July 2025

There Is No "I" In AI: Why Humans Must Manage Matters Of Consequence

AlixPartners

Contributor

AlixPartners is a results-driven global consulting firm that specializes in helping businesses successfully address their most complex and critical challenges.

We live with almost daily speculation about AI crossing the threshold of artificial general intelligence (AGI) and replacing the human workforce. Dario Amodei, CEO of leading AI research lab Anthropic, recently asserted that the technology could eradicate half of all entry-level white-collar jobs in the next five years. This conjecture may help with PR and fundraising, but it also creates anxiety about AI adoption and leaves executives wondering how to plan effectively for the future.

However, from a more empirical perspective, S&P Global recently reported that 42% of companies had abandoned the majority of their AI initiatives before they ever went live, and MIT Sloan Management Review notes that even though companies are investing heavily in generative AI, many are still seeking returns on that spending. Even research involving Microsoft noted a "tendency for automation to make easy tasks easier and hard tasks harder," which is not a ringing endorsement for the merits of co-intelligence.

So, how do we choose between these competing views of AI's current reality?

Sometimes it helps to zoom out, and Aristotle's vantage point over two millennia before the first computer is far enough away from the AI melee to offer some perspective. Using his ideas, I will suggest that the current claims of AGI and human replacement are premature, lack precision, underestimate human distinctiveness, and ignore history, but still allow for a more narrowly defined and graduated revolution in the workplace.

Aristotle's three intellectual virtues of practical wisdom (phronesis), productive skill (techne), and theoretical discovery (episteme) cover most of what happens in the modern workplace, and I will unpack them in that context.1

I begin with phronesis and AI's almost complete inability to replicate it.

Phronesis

By phronesis, Aristotle means a nuanced form of applied wisdom requiring all the human faculties. Ronald Schleifer describes it as "context-sensitive, ethically grounded decision-making in conditions of uncertainty," which sounds very like leadership. If AGI is defined against this standard, which integrates all aspects of human intelligence rather than just computational and emergent reasoning, then we are not even close.

Even without getting metaphysical about humans, it is evident that AI lacks some of our basic capabilities, such as emotions, instincts, moral capacity, long-term memory (aka experience), consciousness, and embodiment. Perhaps we should not be surprised, then, that neuromorphic supercomputers like the impressive SpiNNaker at the University of Manchester, which are designed to emulate the human brain, currently simulate only 1% of its neurons. Professor Steve Furber, SpiNNaker lead architect, concedes, "We're still a long way from full brain emulation – but we can learn a lot by modelling parts." The gap will narrow, but it is unlikely to close for decades.

So why does the imminence of AGI surface every few weeks? In addition to cynical announcements for corporate purposes, I think it is driven by generative AI's ability to harness human language and knowledge patterns to imitate us in the most extraordinary act of ventriloquism in history. This marvel of mimicry is very convincing and makes us feel like we are interacting with an independent entity. But the "I" projected by generative platforms is what Jacques Derrida called "an illusion of presence" or "a hauntology." It turns out the ghost in the machine is us, an echo of ourselves derived from the human data on which these models are trained.2

So, if there is no "I" in AI, there can be no genuine phronesis, and that means technical platforms cannot be allowed to lead or decide anything consequential. As such, humans must be more than "in the loop" to monitor outputs for hallucinations and bias; we need to own, define, and govern the loop for the foreseeable future. This has immediate implications for the rise of multi-agent systems capable of working autonomously. Humans must assert control over these developments to ensure they operate transparently and with manual overrides.

Techne

So, is the AI revolution just hype and trickery? Not once we move into Aristotle's domain of techne, which is the world of productive skills, know-how, and technique. Aristotle's definition covers anything we create, including an idea, product, analysis, artistic performance, or even a cup of tea. Underrated traditional machine learning can already automate many quantitative tasks, while generative AI, still early in its development, is credibly impactful in content creation, coding, and conversation (the three Cs).

But there are limits. Right now, AI automation works best on processes that take less than an hour. That window is expanding, doubling every seven months, but AI cannot yet displace whole jobs, and won't be able to for another two or three years. When that threshold is crossed, organisations will start to fundamentally transform from the top down rather than just using AI for selective bottom-up tasks. Until then, it is best to focus on using AI bottom-up for specific business tasks with a proven track record of economic return.
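To make the arithmetic behind that estimate explicit, here is a rough back-of-the-envelope sketch in Python. It assumes the one-hour baseline and seven-month doubling cited above, and treats an eight-hour working day and a forty-hour week purely as illustrative thresholds for when "whole jobs" come into range; none of these figures should be read as forecasts.

```python
import math

# Back-of-the-envelope sketch of the task-horizon arithmetic above.
# Assumptions: tasks of up to ~1 hour are automatable today, and that
# horizon doubles every 7 months (both figures taken from the text).
current_horizon_hours = 1.0
doubling_period_months = 7

def months_until(target_hours: float) -> float:
    """Months until the automatable-task horizon reaches target_hours."""
    doublings_needed = math.log2(target_hours / current_horizon_hours)
    return doublings_needed * doubling_period_months

# Illustrative thresholds only: a full working day and a full working week.
print(round(months_until(8)))    # ~21 months
print(round(months_until(40)))   # ~37 months, i.e. roughly three years
```

On those assumptions, the horizon covers a full working day in under two years and a full working week in roughly three, which is consistent with the two-to-three-year window suggested above.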

Another challenge is that robotics lags behind software (Moravec's paradox): today's robots require too much energy to be economically or environmentally viable and cannot yet replicate generalised fine motor skills. That means many manual tasks will remain predominantly human until some of the big investments underway by companies like Nvidia start to make real advances. That said, we should note that a combination of AI and robotics can already design, build, and drive a car (fairly) safely. This may have taken longer than predicted, but it is a sure sign of things to come.

Even with the increasing automation of workplace skills, AI will not only displace but also create jobs for humans. The internet removed whole industries built on information friction, such as travel agencies, but created millions of web development roles. Those front-end coding skills are looking highly vulnerable to generative AI, but prompt engineering has now emerged as a completely new profession. The net position is hard to predict, but previous upheavals like the Industrial Revolution and the advent of the internet have created more jobs than they have destroyed. I am hopeful of a repeat.

Episteme

Aristotle's final domain of episteme is a narrow but interesting one. It is about universal theories that can be discovered but not invented; think mathematics, science, and logic. AI has made big breakthroughs here recently, with Google DeepMind announcing AlphaEvolve in May, an "agent designed for general-purpose algorithm discovery and optimisation", and IBM's AI-Hilbert, which has been verifying experimental data against scientific theories and even rediscovering established laws since the middle of last year.

So far, these powerful platforms have not made a fundamental discovery of their own. Nonetheless, I do not see any theoretical reason why generative AI platforms like these could not eventually do so, especially as exponential increases in speed and capacity emerge from the progress finally being made in quantum computing. For now, though, AI is a super-tool to aid but not originate scientific breakthroughs.

Conclusions

So, perhaps, to adapt Mark Twain's often-quoted words, "Rumours of our imminent replacement are greatly exaggerated."

We are certainly at the beginning of an AI automation revolution that will assume many work tasks and eventually redefine organisations. This will be gradual but ultimately transformational. Right now, leaders should focus on solving specific business problems using machine learning and generative AI for the three Cs of content, coding, and conversation. Some labour can be saved, but new skills will also be needed. For most companies, being a fast follower is enough, and executives should require evidence of proven benefits elsewhere before they invest.

However, no AI research in sight suggests that machines will replace humans as responsible, relational, self-reflective, and embodied leaders who have the benefit of at least 100 millennia of experience in navigating new disruptions. This aspect of work is ours for the long term.

Which brings us back to Aristotle and his preoccupation with eudaimonia, or human flourishing. Since humans remain inimitable and essential to the workplace of the future, we would do well to ensure that all apparent 'progress' is to the unambiguous benefit of our species.

Footnotes

1. Nicomachean Ethics, Book 6.

2. I will cover this idea in a subsequent article about the rise of AI therapy, companions, and eventually humanoid support robots.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
