In the realm of technological advancement, Artificial General Intelligence (AGI) is a term that captivates the imagination, conjuring visions of AI systems possessing human-like cognition or even surpassing it. Think of the AI in the film ‘Her’ with its soothing, human-like persona or the dystopian Skynet from ‘The Terminator.’ However, the reality of AGI is far more nuanced and, at present, quite distant.
A growing chorus within and beyond the tech industry suggests that AGI, or ‘human-level AI,’ is approaching. Notable figures like Sam Altman of OpenAI and Dario Amodei of Anthropic predict its arrival within a few years. Meanwhile, others, such as Demis Hassabis from Google DeepMind and Yann LeCun of Meta, estimate a timeline of five to ten years. Journalists and thought leaders are also urging society to prepare.
The Reality of AGI
Despite these bold claims, what’s labeled as AGI is often a hyped narrative, sometimes used to bolster funding and interest in AI ventures. Many experts suggest that this label distracts from the genuine capabilities and limitations of current AI technologies. For many, the concept of AGI is simply a metaphor for impending disruptive innovations—those that will significantly alter industries, economies, and society.
Instead of focusing on AGI, a clearer understanding of diverse AI technologies and their potential impacts is more beneficial. Powerful AI models like large language models (LLMs) are already here, advancing fields from coding to creative writing. Yet they remain narrow in scope: adept at specific tasks, but unable to replicate the flexible and adaptable nature of human intelligence.
What LLMs Can and Cannot Do
Currently, LLMs are impressive in specific areas: generating text, solving well-defined problems, and performing strongly on structured tests. Their broad, conversational fluency can make them seem human-like, but they lack the intricacy of human thought—the ability to handle ambiguous tasks, create groundbreaking theories, or navigate social complexities with the finesse of instinct and intuition.
Moreover, these models are limited by the ‘jagged frontier’ phenomenon, where they excel at some tasks but fail at closely related ones—acing one version of a problem, for instance, while stumbling on a slightly reworded variant. Despite their prowess in defined, technical challenges, their ability to innovate or perform nuanced human-like work remains limited.
The Future of AI Intelligence
One of the key challenges with the AGI concept is the assumption that intelligence is a linear scale reaching human levels and beyond into ‘superintelligence.’ However, intelligence—rooted in evolution—is specific to our needs, environments, and biology, and differs from that of other species and of potential AI systems.
Kevin Kelly, in his essay, contends that human intelligence is but one exhibit in the vast expo of Earth-based intelligences, merely a glimmer within a universe of possible cognitive forms, human or machine. Expecting a singular AI form to mimic or surpass human intelligence without variation is likely an oversimplification.
Specialized AI: A More Likely Future
As AI continues to advance, it’s probable the future will feature specialized intelligences—more effective and reliable for specific tasks, rather than a clumsy jack-of-all-trades. Embodied AI, systems that interact with the physical world, may emerge, potentially offering fundamentally different insights and capabilities.
Agentic AI, already being explored, represents another step forward: proactive systems that plan and execute actions on a user’s behalf rather than merely responding to prompts, making them useful across a variety of applications. However, even as AI becomes more capable, its evolution remains unpredictable.
Understanding AI’s trajectory requires recognizing the diversity that lies ahead, rather than expecting a single form of AGI. By focusing on what AI can actually do, rather than how close it comes to human-like abilities, we can better prepare for and harness its transformative possibilities in the coming years.