The Race to AGI: How Close Are AI Models to Achieving Superintelligence?

The conversation around Artificial Intelligence has a peculiar habit of swinging between futuristic fantasy and present-day reality. For decades, the notion of a true Artificial General Intelligence (AGI) — a machine that can think, reason, and create like a human — has been the stuff of science fiction. Yet, here we are, watching AI systems write code, design drugs, and generate disturbingly coherent essays. The big question is no longer if, but when, and more importantly, how we get there. Central to this debate is the concept of recursive self-improvement, the holy grail for AI developers, where an AI could begin to improve itself, kicking off a cycle that could lead to something far beyond human intellect.

What Is This ‘Intelligence Explosion’ We Keep Hearing About?

Let’s be clear about what we’re discussing. AGI isn’t just a more powerful version of the chatbots we use today. It’s an entirely different beast. An AGI would possess the flexible, general-purpose intelligence of a human, capable of learning and adapting across a vast range of tasks, not just the narrow ones it was trained on.
Now, add the idea of recursive self-improvement into the mix. Imagine an AI that is brilliant at writing AI code. Its first task? To rewrite its own code to become even better at writing AI code. This improved version then does the same, and so on. This is the ‘intelligence explosion’ that mathematician Irving John Good first theorised back in 1965. It’s an iterative loop that, in theory, could take an AI from roughly human-level intelligence to something unimaginable, or ‘superintelligence’, in a very short span of time. This isn’t just about an AI getting smarter; it’s about the rate of improvement accelerating exponentially.
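The compounding nature of that loop is easier to see with a toy calculation. The sketch below is purely illustrative: the fixed per-cycle `gain` and the assumption that improvement compounds at a constant rate are simplifications for the sake of arithmetic, not claims about how any real AI system behaves.

```python
# Toy illustration (not a model of any real AI system): if each
# generation improves its own "capability" by a fixed fraction of
# its current level, growth compounds geometrically.
def intelligence_explosion(initial=1.0, gain=0.5, generations=10):
    """Return capability after each self-improvement cycle.

    gain=0.5 means every generation is 50% better than the last --
    a deliberately crude assumption chosen only to show compounding.
    """
    capability = initial
    history = [capability]
    for _ in range(generations):
        capability *= (1 + gain)   # each cycle builds on the previous one
        history.append(capability)
    return history

levels = intelligence_explosion()
# Ten 50%-per-cycle improvements multiply capability roughly 57-fold.
print(f"{levels[-1] / levels[0]:.1f}x")
```

The point of the toy numbers is Good's observation in miniature: even a modest per-cycle gain, applied to the thing doing the gaining, produces runaway growth rather than steady linear progress.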


Calibrating the Superintelligence Timeline

So, how close are we? It depends on who you ask. OpenAI CEO Sam Altman has stirred the pot by suggesting superintelligence could arrive ‘in a few thousand days’. This isn’t just bluster; it’s a reflection of the frantic pace of AGI development progress happening behind closed doors. We’re seeing glimpses of this in various research labs.
Think of it like this: the journey to AGI is like learning to build a car. For years, we’ve been hand-crafting each part. Now, we’ve built a robot that can manufacture the screws. The next step is a robot that can design a better screw-making robot. That’s the stage we are entering. As Jack Clark of Anthropic aptly put it, “we are not yet at ‘self-improving AI,’ but we are at the stage of ‘AI that improves bits of the next AI, with increasing autonomy.'” Models from OpenAI, Anthropic, and DeepMind are already demonstrating this in pockets. AlphaGo Zero, for instance, learned to defeat the world’s best Go players by playing against itself, achieving in days what would take a human a lifetime.

From Minutes to Hours: Tracking Real Capability Jumps

The abstract idea of self-improvement is one thing, but tangible metrics offer a clearer picture. A recent report cited in Scientific American on the testing of an unreleased model, GPT-5.1-Codex-Max, provides a stark example. In a test designed to measure a model’s ability to sustain complex, multi-step tasks autonomously, the model apparently held its own for two hours and 42 minutes.
For context, its predecessor, GPT-4, fizzled out after just a few minutes on similar tasks. This isn’t just a simple performance boost; it signals a leap across critical capability thresholds. The difference between a few minutes and nearly three hours is the difference between an AI that needs constant hand-holding and one that can chip away at a meaningful problem space independently. This is where changes in model architecture and training data are yielding qualitative, not just quantitative, shifts in ability. While humans still set the goals, the system’s ability to execute them without intervention is growing dramatically.


The Human in the Loop: Bottleneck or Guardrail?

This brings us to the central strategic dilemma: the role of human oversight. Currently, even the most advanced AI systems, like Anthropic’s Claude Code (which can code for hours on end), require a human to define the initial objective and, crucially, to judge the quality of the output. The AI doesn’t know if the code it wrote is ‘good’ or ‘useful’ in a real-world sense; it only knows how to optimise for the parameters it was given.
The debate, therefore, is whether true recursive self-improvement can ever happen with a human in the loop. Some argue that humans will always be a necessary guardrail, preventing the AI from optimising for a flawed goal with catastrophic consequences (the classic ‘paperclip maximiser’ problem). Others contend that human intuition and goal-setting are a bottleneck that must eventually be automated for an intelligence explosion to occur. At present, the reliance on humans suggests we are still firmly in control, but the systems are being built to require less and less of our direct input for increasingly complex tasks.

The Hard Problems That Remain

Despite the dizzying progress, true AGI remains stubbornly over the horizon. The primary obstacle is that we are building systems that are brilliant mimics without a deep understanding of the world. An AI can process more text than any human could read in a thousand lifetimes, but it doesn’t comprehend it in the way we do. It lacks common sense, embodied experience, and the ability to transfer knowledge flexibly from one domain to another.
Researchers are still grappling with fundamental questions about what intelligence even is. Is it all about scale—more data, more computing power? Or is there a missing architectural ingredient, a different type of model architecture needed to unlock genuine reasoning and consciousness? The path to AGI development progress is almost certainly not a straight line. It will likely involve hitting frustrating plateaus and discovering entirely new approaches we haven’t even conceived of yet.
The journey towards AGI and potential superintelligence is one of the most consequential stories of our time. The evidence suggests we are past the point of pure speculation and are now in the early stages of engineering the building blocks of self-improving systems. The leap from a model that can work for a few minutes to one that can work for a few hours is a significant milestone on the superintelligence timeline. It proves that sustained, autonomous operation is becoming a reality.
The real question is what happens when we cross the next threshold. When an AI can work not just for hours, but for days or weeks, relentlessly optimising itself? We are not there yet, but the trajectory is clear. The decisions we make now about safety, control, and the role of human oversight will define what that future looks like. What do you think is the biggest barrier we still need to overcome?
