
AGI Isn’t Coming From Scaling LLMs—Here’s What Comes Next

The Blunt Truth: LLMs Are Still Narrow AI


Let’s cut straight to it: general-purpose Large Language Models (LLMs), no matter how advanced they seem, are still narrow AI. They are sophisticated pattern recognizers, probabilistic next-word predictors, and reasoning engines stitched together by training on mountains of data. But they are not general intelligence, and scaling them further will not magically turn them into Artificial General Intelligence (AGI).
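
To make "probabilistic next-word predictor" concrete, here is a minimal sketch of the loop every causal LLM runs at inference time. It assumes the Hugging Face Transformers library and uses GPT-2 purely as a small, convenient example:

# Minimal sketch of autoregressive next-token prediction (assumes the
# Hugging Face Transformers library; GPT-2 is just a small example model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(5):                     # generate five tokens, one at a time
        logits = model(ids).logits[0, -1]  # scores for every vocabulary token
        next_id = torch.argmax(logits)     # greedy: take the most probable token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))

Everything an LLM produces, from essays to code, is this loop repeated. The question this post asks is whether repeating it at ever larger scale ever amounts to general intelligence.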


[Image: AGI vs Scaling Laws]

The hype around AGI often leans on a single assumption: that with enough data and compute, these models will eventually “think” like humans. This belief is seductive but flawed. Intelligence isn’t just data absorption or prediction—it’s rooted in abstraction, intentionality, creativity, adaptability, and grounded experience in the physical world. Transformers, the backbone of LLMs, are brilliant mathematical architectures, but they weren’t designed to mimic cognition in its full depth.


Hard Limits That Can’t Be Ignored

Here are the walls we’re already hitting:


  1. Finite Energy and Compute – Training today’s frontier models already consumes astronomical amounts of energy. Scaling this approach indefinitely is not just unsustainable—it’s impossible in a finite-resource world.

  2. Scaling Laws and Diminishing Returns – Yes, performance grows with more parameters and more data, but loss falls only as a power law in both, so every new leap yields smaller and smaller returns for exponentially more compute (the sketch after this list puts rough numbers on this). We are squeezing the last drops out of an already stretched paradigm.

  3. The Data Problem – We’ve mined most of the high-quality internet text. What’s left is either noisy, repetitive, or too niche to move the needle. Enter synthetic data, but whether models trained on their own outputs will maintain, let alone improve, performance is still an open and concerning question (a toy simulation below makes the worry concrete).

  4. Architectural Constraints – Transformers excel at pattern matching, but they lack grounding in reality. They don’t “understand” the world—they model linguistic correlations. Without perception, embodiment, or integration across diverse modalities of human experience, they remain trapped in the domain of text prediction.
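
A rough calculation makes points 1 and 2 concrete. The sketch below combines two widely used rules of thumb: training compute of roughly 6 x parameters x tokens FLOPs, and a Chinchilla-style power-law loss curve using the constants fitted by Hoffmann et al. (2022). Treat the numbers as illustrating the shape of the curve, not as predictions for any particular model:

# Chinchilla-style scaling law: L(N, D) = E + A/N^alpha + B/D^beta, with the
# published fits from Hoffmann et al. (2022). Training compute is estimated
# with the common rule of thumb C ~ 6 * N * D FLOPs.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

def train_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12), (1e12, 2e13)]:
    print(f"N={n:.0e}, D={d:.0e}: ~{train_flops(n, d):.1e} FLOPs, "
          f"loss ~ {loss(n, d):.2f}")
# Each row costs ~100x more compute than the one before, yet the loss drops by
# less each time (roughly 0.45, then 0.22, then 0.11) and can never fall below
# the irreducible term E = 1.69.

That flattening curve is the quantitative face of "squeezing the last drops": the loss never reaches zero, it just gets exponentially more expensive to nudge.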


The Failing Notion of “Data Will Save Us”

It has become almost dogma: “Just feed the models more data, and they’ll eventually become intelligent.” But this line of thinking collapses under scrutiny. Human intelligence isn’t built on infinite exposure—it’s built on selective abstraction, reasoning about causality, and an ability to generalize from extremely limited data. Ironically, LLMs need billions of tokens to learn what a child picks up in a single experience.
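
The synthetic-data worry from point 3 above can also be made concrete with a toy simulation. The sketch below is a deliberately simple Gaussian stand-in for the "train on your own outputs" loop, in the spirit of the model-collapse literature; it is not a claim about any specific LLM pipeline:

# Toy "model collapse" loop: fit a distribution, sample from the fit, refit on
# the samples, repeat. Each refit keeps the bulk of the previous distribution
# and loses its tails, so the variance steadily evaporates.
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0                   # generation 0: the "real" data
for gen in range(1, 301):
    samples = [random.gauss(mu, sigma) for _ in range(20)]  # finite training set
    mu = statistics.fmean(samples)     # refit on our own outputs...
    sigma = statistics.stdev(samples)  # ...and carry the fit forward
    if gen % 100 == 0:
        print(f"generation {gen}: sigma ~ {sigma:.4f}")
# sigma decays toward zero over the generations: diversity that is not
# re-sampled is diversity lost for every model that comes after.

If a growing share of web text is itself LLM output, future models risk training inside exactly this kind of loop.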


This is why, no matter how much we scale LLMs, they will remain specialists dressed up as generalists. They look smart because they are trained on everything, but they are still narrow because their capabilities are bound by the limits of their architecture.


Why This Isn’t Bad News

If you’re reading this and thinking “so AGI is dead,” hold on. This is not a doomsday message—it’s a reality check. The pursuit of AGI (or even Artificial Superintelligence, ASI) is far from over. It just won’t come from scaling transformers in their current form.

The good news is that this forces us to broaden our horizons. New techniques beyond transformers will emerge—hybrid models, interdisciplinary approaches combining neuroscience, symbolic reasoning, embodied AI, and perhaps breakthroughs we can’t yet imagine. A deeper understanding of the human mind, cognition, and consciousness will likely be essential to building anything resembling AGI.


And maybe that’s the point. AGI isn’t something we should stumble into recklessly through brute force. The pause gives us time to prepare—to build ethical frameworks, energy-efficient technologies, and governance structures before we ever cross that threshold.


The Road Ahead: Caution and Optimism

So here’s where I land:


  • LLMs are revolutionary, but they are not AGI.

  • Scaling transformers alone won’t get us there.

  • True AGI will require entirely new paradigms.

  • And that’s good news, because we need time to get this right.


AGI—if it comes—shouldn’t be an accident of overtraining. It should be the result of careful, deliberate, interdisciplinary effort with guardrails in place. The world isn’t ready for machines with human-level intelligence, and maybe it’s better that we aren’t there yet.


Let’s celebrate the brilliance of LLMs for what they are—powerful narrow AI tools that already transform our daily lives—while keeping our eyes open and our hands steady for what’s next.
