# Proof Narrative: Current AI systems have already achieved Artificial General Intelligence (AGI).

## Verdict

**Verdict: DISPROVED**

The claim that today's AI systems have crossed the threshold into Artificial General Intelligence is not supported by the evidence — in fact, four independent authoritative sources, each reasoning from a different angle, all reach the same conclusion: AGI has not been achieved.

## What was claimed?

The claim is that AI systems available today have already reached Artificial General Intelligence — the kind of broad, flexible, self-directed thinking that would make a machine genuinely comparable to a human mind across a wide range of tasks. This question matters because it shapes how we regulate AI, how we invest in it, and how seriously we take warnings about its risks. With bold statements from tech leaders making headlines, many people are left wondering whether AI has already crossed some fundamental line.

## What did we find?

The most authoritative framework for answering this question comes from Google DeepMind, whose researchers published a formal taxonomy of AGI levels. By their classification, today's frontier AI models sit at Level 1 — "Emerging" — out of six levels. Reaching even Level 2 ("Competent") would require matching the performance of a skilled adult across most cognitive tasks, and current systems fall well short of that bar.
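As a rough illustration of that framework, the taxonomy can be read as a simple lookup from level number to label. Only "Emerging" (Level 1) and "Competent" (Level 2) are cited in this narrative; the remaining labels follow the DeepMind paper as commonly reported and should be checked against the original source.

```python
# Rough sketch of the DeepMind levels-of-AGI taxonomy as a lookup table.
# Only Level 1 ("Emerging") and Level 2 ("Competent") are cited in this
# narrative; the remaining labels are taken from the DeepMind paper as
# commonly reported and should be verified against the original.
AGI_LEVELS = {
    0: "No AI",
    1: "Emerging",    # where today's frontier models sit, per the taxonomy
    2: "Competent",   # matching a skilled adult across most cognitive tasks
    3: "Expert",
    4: "Virtuoso",
    5: "Superhuman",
}

CURRENT_FRONTIER_LEVEL = 1  # the AGI claim would require at least Level 2
```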

Gary Marcus, a cognitive scientist and longtime AI researcher, puts it plainly: current AI systems "do not exhibit the flexible, self-directed competence that the original concept of artificial general intelligence was intended to capture." His critique targets a specific confusion — mistaking increasingly sophisticated pattern matching for actual understanding. An AI that can ace a bar exam or write code is not the same as one that can reason flexibly about novel problems the way a person can.

The capability gaps are concrete. Current AI models respond brilliantly to prompts but never form their own goals or decide unprompted what to explore. They show what researchers call "jagged intelligence" — winning math olympiad gold medals while failing elementary problems that any child would handle. DeepMind's cognitive framework identifies five key areas — learning, metacognition, attention, executive function, and social cognition — where today's systems fundamentally underperform compared to general human intelligence.

There is also a physical argument. Tim Dettmers, a University of Washington AI researcher, points out that the economics of scaling AI have shifted: improvements that once required roughly linear investment now demand exponential resources. The trajectory that might have led to AGI is running into hard physical limits.

The search for counterevidence found one notable exception: Nvidia CEO Jensen Huang stated in March 2026 that he believed AGI had been achieved. His claim rested on AI passing human exams — but researchers quickly pointed out this conflates narrow benchmark performance with genuine general intelligence. No major AI lab — not OpenAI, not Google DeepMind, not Anthropic — has endorsed the claim. A survey of nearly 500 AI researchers found that 76% believe scaling current approaches is unlikely to produce AGI.

## What should you keep in mind?

"AGI" has no universally agreed definition, and that ambiguity matters. If you define AGI simply as "AI that passes human tests," some would say it's here. If you require flexible, self-directed reasoning across novel domains without human prompting, the evidence says we're not close. The frameworks used here — DeepMind's levels, OpenAI's internal scale — are among the most widely cited, but they are not the only way to draw the line.

The sources are unanimous, but the reasoning varies in strength. The DeepMind paper is peer-reviewed academic work; the other sources are expert commentary, two of them published on personal blogs. The authors are credible, but commentary on a personal blog does not carry the same evidentiary weight as a peer-reviewed publication, and that difference should be kept in mind when weighing this evidence. Expert surveys also carry uncertainty: even optimistic forecasters put the chance of AGI arriving by 2029 at only 25%.

This proof addresses whether AGI exists now, not whether it ever will. The evidence here says no — it does not settle the harder question of how far away AGI might be or whether today's architectures could reach it.

## How was this verified?

This claim was evaluated by searching for authoritative independent sources that explicitly reject it, then checking whether the number of verified sources exceeded a conservative threshold of three. The full evidence, source quotes, and verification details are in [the structured proof report](proof.md) and [the full verification audit](proof_audit.md). To inspect the logic or re-run the proof yourself, see [proof.py](proof.py).
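
For readers who want a concrete picture of the decision rule, the sketch below illustrates it in Python. It is a simplified illustration only: the names (`Source`, `claim_disproved`, `MIN_INDEPENDENT_SOURCES`) are hypothetical, and the actual implementation lives in [proof.py](proof.py).

```python
# Minimal sketch of the threshold check described above. Illustration only;
# the Source type, field names, and constant below are hypothetical stand-ins
# for whatever structures proof.py actually uses.
from dataclasses import dataclass

MIN_INDEPENDENT_SOURCES = 3  # conservative threshold cited in this narrative


@dataclass
class Source:
    name: str
    rejects_claim: bool  # source explicitly rejects the AGI claim
    verified: bool       # quote and provenance checked against the original


def claim_disproved(sources: list[Source]) -> bool:
    """Return True when enough verified, independent sources reject the claim."""
    rejecting = [s for s in sources if s.verified and s.rejects_claim]
    return len(rejecting) > MIN_INDEPENDENT_SOURCES  # "exceeded": strictly more than three


# Example with the four sources cited in this narrative:
sources = [
    Source("Google DeepMind levels-of-AGI paper", rejects_claim=True, verified=True),
    Source("Gary Marcus commentary", rejects_claim=True, verified=True),
    Source("Tim Dettmers scaling analysis", rejects_claim=True, verified=True),
    Source("AI researcher survey", rejects_claim=True, verified=True),
]
assert claim_disproved(sources)  # 4 > 3, so the claim is marked DISPROVED
```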