The Race to AGI: How Close Are AI Models to Achieving Superintelligence?

The conversation around Artificial Intelligence has a peculiar habit of swinging between futuristic fantasy and present-day reality. For decades, the notion of a true Artificial General Intelligence (AGI) — a machine that can think, reason, and create like a human — has been the stuff of science fiction. Yet, here we are, watching AI systems write code, design drugs, and generate disturbingly coherent essays. The big question is no longer if, but when, and more importantly, how we get there. Central to this debate is the concept of recursive self-improvement, the holy grail for AI developers, where an AI could begin to improve itself, kicking off a cycle that could lead to something far beyond human intellect.

What Is This ‘Intelligence Explosion’ We Keep Hearing About?

Let’s be clear about what we’re discussing. AGI isn’t just a more powerful version of the chatbots we use today. It’s an entirely different beast. An AGI would possess the flexible, general-purpose intelligence of a human, capable of learning and adapting across a vast range of tasks, not just the narrow ones it was trained on.
Now, add the idea of recursive self-improvement into the mix. Imagine an AI that is brilliant at writing AI code. Its first task? To rewrite its own code to become even better at writing AI code. This improved version then does the same, and so on. This is the ‘intelligence explosion’ that mathematician Irving John Good first theorised back in 1965. It’s an iterative loop that, in theory, could take an AI from roughly human-level intelligence to something unimaginable, or ‘superintelligence’, in a very short span of time. This isn’t just about an AI getting smarter; it’s about the rate of improvement accelerating exponentially.
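To see why that loop compounds, it helps to put toy numbers on it. The sketch below is a back-of-the-envelope illustration of Good's argument, nothing more: the 'human-level' baseline and the 20% gain per generation are invented purely for the arithmetic.

```python
# Toy illustration of Good's recursive self-improvement loop.
# Every number here is invented; this models the arithmetic, not reality.

capability = 1.0        # call the starting system 'human-level' by fiat
gain = 0.2              # assumed: each generation improves itself by 20%
                        # of whatever capability it currently has

for generation in range(1, 11):
    # The feedback step: the improvement achieved scales with the
    # improver's current capability. That proportionality is the
    # entire 'explosion' argument.
    capability += gain * capability
    print(f"generation {generation:2d}: capability = {capability:5.2f}")

# Because each increment is proportional to the current level, capability
# grows geometrically (1.2**n), i.e. the rate of improvement accelerates.
```

Make `gain` shrink with each generation and the curve plateaus instead of exploding, which is the standard counter-argument: returns to self-improvement may diminish rather than compound.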


Calibrating the Superintelligence Timeline

So, how close are we? It depends on who you ask. OpenAI CEO Sam Altman has stirred the pot by suggesting superintelligence could arrive ‘in a few thousand days’. This isn’t just bluster; it’s a reflection of the frantic pace of AGI development happening behind closed doors. We’re seeing glimpses of this in various research labs.
Think of it like this: the journey to AGI is like learning to build a car. For years, we’ve been hand-crafting each part. Now, we’ve built a robot that can manufacture the screws. The next step is a robot that can design a better screw-making robot. That’s the stage we are entering. As Jack Clark of Anthropic aptly put it, “we are not yet at ‘self-improving AI,’ but we are at the stage of ‘AI that improves bits of the next AI, with increasing autonomy.'” Models from OpenAI, Anthropic, and DeepMind are already demonstrating this in pockets. DeepMind’s AlphaGo Zero, for instance, learned to defeat the world’s best Go players purely by playing against itself, achieving in days what would take a human a lifetime.
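The self-play mechanic behind that result is simple enough to caricature in a few lines. What follows is a toy, not DeepMind's algorithm: the 'game' is naming a digit closest to a hidden target, the 'policy' is a table of preferences, and the opponent is always a frozen snapshot of the current self, so the agent faces an ever-improving adversary.

```python
# Toy caricature of a self-play training loop (not DeepMind's algorithm).
# Game: each side names a digit 0-9; the digit nearer a hidden target wins.
import random

TARGET = 7                 # a property of the invented game, never shown to the agent
prefs = [1.0] * 10         # the 'policy': unnormalised weights over moves

def pick(weights):
    return random.choices(range(10), weights=weights)[0]

for _ in range(5000):
    opponent = prefs[:]                       # opponent = snapshot of current self
    ours, theirs = pick(prefs), pick(opponent)
    winner = ours if abs(ours - TARGET) <= abs(theirs - TARGET) else theirs
    prefs[winner] += 0.1                      # reinforce whichever move won

print("learned favourite move:", prefs.index(max(prefs)))   # converges towards 7
```

The lifetime-compressed-into-days effect comes from the same structure: the agent generates its own training data as fast as it can play, with no human games required.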

From Minutes to Hours: Tracking Real Capability Jumps

The abstract idea of self-improvement is one thing, but tangible metrics offer a clearer picture. A recent report cited in Scientific American on the testing of an unreleased model, GPT-5.1-Codex-Max, provides a stark example. In a test designed to measure a model’s ability to sustain complex, multi-step tasks autonomously, the model apparently held its own for two hours and 42 minutes.
For context, its predecessor, GPT-4, fizzled out after just a few minutes on similar tasks. This isn’t just a simple performance boost; it signals a leap across critical capability thresholds. The difference between a few minutes and nearly three hours is the difference between an AI that needs constant hand-holding and one that can chip away at a meaningful problem space independently. This is where changes in model architecture and training data are yielding qualitative, not just quantitative, shifts in ability. While humans still set the goals, the system’s ability to execute them without intervention is growing dramatically.
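Benchmarks of this kind boil down to a stopwatch around an agent loop. Here is a hedged sketch of what such a 'task-horizon' harness might look like; `run_agent_step` and the task interface are stand-ins invented for illustration, not any lab's actual test.

```python
# Hypothetical sketch of a task-horizon measurement: how long does an agent
# keep making valid progress on a multi-step task without human help?
# The task interface and the agent callable are invented for illustration.
import time

def measure_task_horizon(run_agent_step, task, max_hours=4.0):
    """Return the seconds of sustained autonomous progress before the
    agent finishes, stalls, or hits the time cap."""
    start = time.monotonic()
    state = task.initial_state()
    while time.monotonic() - start < max_hours * 3600:
        action = run_agent_step(state)        # one unassisted model step
        state = task.apply(state, action)
        if task.is_done(state) or task.is_stuck(state):
            break
    return time.monotonic() - start

# Smoke test with a dummy 10-step task and a no-op agent.
class CountTo10:
    def initial_state(self): return 0
    def apply(self, state, action): return state + 1
    def is_done(self, state): return state >= 10
    def is_stuck(self, state): return False

print(f"horizon: {measure_task_horizon(lambda s: None, CountTo10()):.4f}s")
```

On this framing, the reported jump from a few minutes to nearly three hours means the stall condition is firing orders of magnitude later in the loop.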


The Human in the Loop: Bottleneck or Guardrail?

This brings us to the central strategic dilemma: the role of human oversight. Currently, even the most advanced AI systems, like Anthropic’s Claude Code (which can code for hours on end), require a human to define the initial objective and, crucially, to judge the quality of the output. The AI doesn’t know if the code it wrote is ‘good’ or ‘useful’ in a real-world sense; it only knows how to optimise for the parameters it was given.
The debate, therefore, is whether true recursive self-improvement can ever happen with a human in the loop. Some argue that humans will always be a necessary guardrail, preventing the AI from optimising for a flawed goal with catastrophic consequences (the classic ‘paperclip maximiser’ problem). Others contend that human intuition and goal-setting are a bottleneck that must eventually be automated for an intelligence explosion to occur. At present, the reliance on humans suggests we are still firmly in control, but the systems are being built to require less and less of our direct input for increasingly complex tasks.
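In code, the disagreement is about a single gate in the agent loop. Below is a minimal sketch of the human-in-the-loop pattern; `model_propose` is a stand-in for any code-writing agent, not a real API.

```python
# Minimal sketch of the human-in-the-loop pattern under debate.
# `model_propose` stands in for any code-writing agent; no real API assumed.

def run_with_oversight(model_propose, objective, max_rounds=5):
    feedback = None
    for _ in range(max_rounds):
        draft = model_propose(objective, feedback)   # AI optimises the stated goal
        print(f"--- proposed output ---\n{draft}")
        verdict = input("accept / revise / abort? ").strip().lower()
        if verdict == "accept":
            return draft            # the human, not the model, judges 'good'
        if verdict == "abort":
            return None             # the guardrail: a person can halt a bad goal
        feedback = input("what should change? ")     # the human redirects the search
    return None

# Automating the accept/revise judgement, letting the system grade and
# redeploy its own output, is precisely the step that would close the
# recursive loop, and precisely the step the guardrail camp objects to.
```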

The Hard Problems That Remain

Despite the dizzying progress, true AGI remains stubbornly over the horizon. The primary obstacle is that we are building systems that are brilliant mimics without a deep understanding of the world. An AI can process more text than any human could read in a thousand lifetimes, but it doesn’t comprehend it in the way we do. It lacks common sense, embodied experience, and the ability to transfer knowledge flexibly from one domain to another.
Researchers are still grappling with fundamental questions about what intelligence even is. Is it all about scale: more data, more computing power? Or is there a missing ingredient, a different kind of model architecture needed to unlock genuine reasoning and consciousness? The path to AGI is almost certainly not a straight line. It will likely involve hitting frustrating plateaus and discovering entirely new approaches we haven’t even conceived of yet.
The journey towards AGI and potential superintelligence is one of the most consequential stories of our time. The evidence suggests we are past the point of pure speculation and are now in the early stages of engineering the building blocks of self-improving systems. The leap from a model that can work for a few minutes to one that can work for a few hours is a significant milestone on the superintelligence timeline. It shows that sustained, autonomous operation is becoming a reality.
The real question is what happens when we cross the next threshold. When an AI can work not just for hours, but for days or weeks, relentlessly optimising itself? We are not there yet, but the trajectory is clear. The decisions we make now about safety, control, and the role of human oversight will define what that future looks like. What do you think is the biggest barrier we still need to overcome?
