What Does Sam Altman Mean by Achievable AGI?

2025-01-08

Sam Altman opened 2025 with a bold declaration: OpenAI has figured out how to build Artificial General Intelligence (AGI). AGI is typically understood as an AI system capable of comprehending, learning, and executing any intellectual task a human can perform.

In a blog post last weekend, Altman reflected that the first AI agents might join the workforce this year, marking a pivotal moment in technological history. He traced OpenAI's journey from obscurity to its claim that AGI is within reach. The timeline appears ambitious: ChatGPT only recently celebrated its second birthday, yet Altman suggests the paradigm for the next complex-reasoning model is already evident.

Since then, the focus has shifted to integrating near-human AI into society up to the point where it surpasses human capabilities across all fields. But what separates AGI from Artificial Superintelligence (ASI)? Altman's characterization of AGI and his timeline predictions have drawn significant attention from AI researchers and industry veterans.

Defining AGI and ASI

Altman noted, “We now know how to build AGI as traditionally understood. By 2025, we anticipate the first AI agent joining the workforce, significantly impacting company outputs.” However, the definition of AGI remains vague, and the goalposts keep shifting: models keep scoring higher on benchmarks without necessarily becoming more broadly capable.

Humayun Sheikh, CEO of Fetch.ai and President of the ASI Alliance, commented, “While these systems pass many traditional AGI benchmarks like the Turing test, it doesn’t mean they possess consciousness. AGI hasn’t reached true awareness, which I believe will take much longer.”

The discrepancy between Altman’s optimism and expert consensus raises questions about the term “AGI.” His assertion about AI agents joining the workforce in 2025 sounds more like advanced automation than genuine AGI.

Altman stated, “Superintelligent tools can greatly accelerate scientific discovery and innovation beyond our own abilities, further enriching society and increasing prosperity.” Yet not everyone is convinced that AGI, or agent integration, is feasible by 2025.

Charles Wayn, co-founder of Galxe, mentioned, “Current AI models have too many errors and inconsistencies that need addressing. It could be a few years rather than decades before we see AGI-level AI agents.”

Some speculate that Altman’s bold predictions serve another purpose. OpenAI spends cash quickly and needs substantial investment to sustain progress; the promise of imminent breakthroughs may help maintain investor interest despite high operating costs.

Others support Altman’s claims. Harrison Seletsky, Director of Business Development at SPACE ID, said, “If Sam Altman says AGI is coming, he likely has data or business insight backing this statement.” If correct, “general intelligence AI agents” might emerge within one to two years.

Notably, Altman hinted that AGI isn’t enough; OpenAI aims for ASI—an advanced state where models surpass human capabilities in all tasks. Altman wrote, “We aim for true superintelligence. We love our current products, but we’re dedicated to a glorious future. With superintelligence, anything is possible.”

Though Altman didn’t specify a timeline for ASI, some predict robots could replace all human jobs by 2116. Altman once suggested ASI was merely “a few thousand days away,” whereas experts from the Forecasting Institute estimate a 50% chance of achieving ASI by 2060.

Meta’s chief AI scientist, Yann LeCun, believes we remain far from this milestone because of limitations in current training techniques and in the hardware needed to process vast amounts of information. Influential AI researcher and philosopher Eliezer Yudkowsky views the talk as hype that benefits OpenAI in the short term.

Unlike AGI or ASI, AI agents are already a reality, and their quality and versatility are improving rapidly. Frameworks such as CrewAI, AutoGen, and LangChain make it possible to build AI agent systems with a range of capabilities, including close collaboration with users, as the sketch below illustrates.
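To make the idea concrete, here is a minimal sketch of a two-agent workflow using CrewAI's Python API. The agent roles, goals, and task descriptions are invented for this example, and a configured model API key (for instance, OPENAI_API_KEY in the environment) is assumed:

```python
# Minimal two-agent sketch with CrewAI (pip install crewai).
# Assumes a model API key such as OPENAI_API_KEY is set in the environment.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Research Analyst",
    goal="Summarize recent public claims about AGI timelines",
    backstory="You track statements from major AI labs.",
)
writer = Agent(
    role="Technical Writer",
    goal="Turn research notes into a short, readable brief",
    backstory="You write concise summaries for a general audience.",
)

research_task = Task(
    description="Collect the key claims about AGI arriving in 2025.",
    expected_output="A bullet list of claims with attribution.",
    agent=researcher,
)
writing_task = Task(
    description="Write a three-paragraph brief from the research notes.",
    expected_output="A plain-text brief.",
    agent=writer,
)

# By default the crew runs tasks sequentially, passing each task's
# output forward as context for the next.
crew = Crew(agents=[researcher, writer], tasks=[research_task, writing_task])
result = crew.kickoff()
print(result)
```

Systems like this divide a job into roles and hand intermediate results between agents, which is the kind of "advanced automation" the experts above distinguish from genuine AGI.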

What does this mean for ordinary people? Is it a threat or a boon for everyday workers? Experts aren’t overly concerned. Sheikh observed, “I don’t foresee overnight upheaval. While there might be reductions in human capital, especially for repetitive tasks, these advancements could also tackle more complex repetitive tasks that remote-operated vehicles currently struggle with.”

Seletsky concurred, stating, “Agents are more likely to handle repetitive tasks than those requiring decision-making. As long as people utilize creativity and expertise responsibly, humans remain safe. In the near future, decisions won’t be dominated by AI agents because while they can reason and analyze, they lack human creativity.”

In the short term, there seems to be some consensus. Wayn noted,