Last week, I had the pleasure of speaking on the FT Live Future of AI Summit panel entitled: "The race to artificial general intelligence – How soon will we get to 'superintelligence'?"
Here is a summary of the views I expressed, augmented with some additional detail and subsequent reflections.
The definition of Artificial General Intelligence (AGI) presented by John Thornhill, the FT moderator, was itself generated by AI as "a theoretical concept for a type of artificial intelligence that can think, learn and understand like a human". He suggested this was a "pretty acceptable definition".
However, my central thesis is that AGI is a category error, betrays a lack of imagination, and is sometimes a wilful deception. I know these are big claims, so I will attempt to justify them.
I believe that human intelligence and AI in its current form are too different to compare in a meaningful way. Less than 20% of human brain function can be mapped directly to what AI platforms currently do. The rest is focused on managing embodiment, regulating emotions, powering our instincts, engaging in social behaviour, and sustaining the consciousness that enables us to reflect and plan in the first person. This holistic human intelligence culminates in our ability to create civilisations, a capacity that appears only in the most accelerated projections of AI development or in science fiction (which often amount to the same thing).
In addition, outdated referents often obscure the ontological differences between humans and AI. Biology and AI research make extensive use of machine analogies for human intelligence, but physics abandoned that idea along with the Newtonian universe. Instead, as quantum creatures, we exhibit properties such as indeterminacy and holism, which entirely undermine these mechanistic comparisons and highlight the profound differences between humans and technology platforms.
Because of these factors, I assert that AI does not "think, learn, and understand like a human" and that the "G" of AGI is not sufficiently general to cover the scope of human intelligence.
Nonetheless, AGI is embedded in the charters of some of the biggest AI providers, most famously OpenAI, which commits to stopping competitive efforts if this threshold is safely achieved. The guaranteed publicity that accompanies each new prediction of when AGI will be realised is also tempting for commercial entities, even though the term is too ill-defined for such estimates to be meaningful.
The lack of imagination in my critique relates to what we could be pursuing instead of trying, and failing, to replicate humans. To my mind, AI should be focused precisely on what our species does badly and on the problems society cannot solve. For this approach to be practical, a way of connecting humans and AI is required, and generative platforms are a good start. In fully multi-modal form, humans and AI can form a powerful co-intelligence without displacing one another.
Even if a breakthrough produces superintelligence or, more realistically, several narrower superintelligences, this synergistic effect is only enhanced. For example, quantum computing (which admittedly has been five years away for a decade) could eventually transform our ability to sift vast quantities of unstructured or high-dimensional data.
However, we need to take care when building AI capabilities that amend their own neural networks and data or autonomously interact with other AI systems. This is partly because technology platforms do not need to engage the human senses to communicate, increasing the risk that we could be removed from the loop. Our disintermediation could even be a side effect of the pursuit of a legitimate goal.
Lastly, my suggestion of wilful deception stems from the fact that anthropomorphic projections onto AI platforms attract attention and enhance trust. These projections are sometimes cynically manipulated through "AI washing". A simple count of the cute robots pictured under the heading of AI makes this point resoundingly.
In closing, Dario Amodei, the CEO of Anthropic, also dislikes the term AGI and proposes "Powerful AI" as an alternative in his excellent article, Machines of Loving Grace. He defines this to include being "smarter than a Nobel Prize winner", possessing "all the 'interfaces' available to a human working virtually", and "not just passively answer[ing] questions", but without "physical embodiment".
This vision does not replicate human intelligence as I have defined it but is a productive approach to synergistic development. So, in keeping with those who can't resist making public predictions, I believe Amodei's powerful AI will replace the idea of AGI in 2025.