
GPT‑5 Pro is brilliant, but it’s still nowhere near real AGI, says one of the professors who coined the term

Ben Goertzel praises GPT-5 Pro’s engineering but says it still lacks grounding, continual learning, and the internal models of true AGI.


Is GPT-5 AGI? Ben Goertzel says not yet. GPT-5 Pro is an impressive engineering feat, but it is not true AGI, he argues. He praises the model for formatting research and parsing technical ideas. Yet Goertzel contends the system mostly matches patterns and therefore lacks grounding, continual learning, and an internal model of the world.

GPT-5 Pro delivers striking capabilities. For example, it generates clear prose, helps format research, and can parse technical ideas with speed and polish. Ben Goertzel, a long-time AGI researcher and founder of SingularityNET, calls it a remarkable engineering achievement. Still, he warns that it is not artificial general intelligence.

Goertzel’s main critique

Goertzel’s point is simple. GPT-5 Pro learns statistical patterns from huge datasets rather than from direct, embodied experience. It also does not form persistent beliefs or goals. At deployment, the model’s knowledge remains effectively fixed. In short, it does not continually learn on its own.

Goertzel’s comments were reported by MSN; read the full coverage for direct quotes and context.

Why pattern matching falls short

This distinction matters because human and animal minds do more than match patterns. They build internal models of the world, form expectations, test those expectations, and update based on new experience. They hold memory and pursue goals over time. According to Goertzel, these features are core to open-ended intelligence and are largely absent in current LLMs.

What true AGI would require

Scaling existing models will improve surface-level performance, and in many narrow tasks they will appear smarter. However, Goertzel argues that scale alone will not produce genuine AGI. Instead, he calls for new approaches such as continual learning, grounded multimodal perception, integrated cognitive architectures, and decentralized systems that support evolving internal models.

How GPT-5 Pro fits into the path to AGI

In practice, Goertzel views GPT-5 and similar systems as useful tools rather than finished minds. They can serve as components in future AGI systems, but they do not yet solve the deeper problems of grounding and ongoing learning. Moreover, commercial incentives for sealed, stable deployments could slow research toward open-ended, adaptive intelligences.


For the full article and direct quotes, read MSN’s coverage of Goertzel’s essay.