Breaking the Closure Limit: Paths Toward Stronger Intelligence Modeling

November 10, 2025

Scaling laws and new architectures alone cannot overcome a deeper structural limit in today’s AI: representational closure. Drawing on Gödel, Tarski, and the idea of a cognitive hierarchy, this article argues that intelligence must be modeled as an open, self-extending system rather than a static artifact. True intelligence emerges through recursive meta-levels that continually revise their own foundations. AGI, in this view, is not a model but a process that never stops breaking its own limits.


Today’s AI systems do not merely suffer from insufficient scale, data, or engineering maturity. They face a structural limitation: representational closure. The story is not “we just need more parameters and data” or “we just need the next architecture.” Overcoming this barrier requires a shift in how we think about intelligence itself: away from models as static artifacts, and toward intelligence as a hierarchical, dynamic, self-transcending process.

The Gödelian Limit: Why Scaling is Not Enough

Modern AI is dominated by large language models (LLMs). Abstractly, a trained LLM can be viewed as a closed inferential system:

  • Primitives: learned representations
  • Rules: attention and transformation dynamics
  • Outputs: sequences drawn from a learned probability space

An LLM’s “universe” is fully defined by its weights and the statistical manifold learned from training data. Once training ends, the model’s internal rules no longer evolve. Such a system can still produce impressive generalizations, but only within the manifold implicitly encoded by its weights. It cannot, on its own, introduce new axioms or generate a fundamental “truth”, a novel paradigm, or a scientific breakthrough that lies outside its predefined distribution without an external “jump”. For example, a new invention or discovery is a valid truth, yet today’s LLMs may assign it close-to-zero probability, because it can differ sharply from the patterns that appear in their training data. This is not a limitation of scale or architecture; it is a limitation of closure.

This is the modern analogue of a Gödelian limitation: what lies beyond the system’s representational closure cannot be generated from within it. In any consistent formal system F expressive enough to encode basic arithmetic, there exist true statements that F can express but cannot prove. In AI terms: there are novel insights logically consistent with our universe that carry essentially zero probability mass in a model’s current weights (for example, future discoveries and inventions completely beyond the reach of current models). The model is trapped in its own Gödelian box; it cannot “reach” these truths through internal pattern matching alone.
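
To make “closure” concrete, here is a minimal, deliberately toy sketch, a frozen bigram counter rather than a claim about any production LLM: once its parameters are fixed, no amount of internal sampling can assign nonzero probability to a word it never saw.

```python
from collections import Counter, defaultdict

# "Training" corpus for a toy bigram model; the vocabulary is frozen with it.
corpus = "the cat sat on the mat and the cat saw the dog".split()

# Count bigram transitions once, then never update them again.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_word_prob(prev: str, candidate: str) -> float:
    """Probability of `candidate` following `prev` under the frozen model."""
    counts = transitions[prev]
    total = sum(counts.values())
    return counts[candidate] / total if total else 0.0

print(next_word_prob("the", "cat"))        # in-distribution: > 0
print(next_word_prob("the", "telescope"))  # outside the closure: exactly 0.0

# No internal operation of the frozen model can ever raise "telescope"
# above zero; that would require modifying the model from outside.
```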

Similarly, Alfred Tarski proved that “truth” in a sufficiently expressive language cannot be defined within that same language; a richer meta-language is always required to verify the truths of the level below. This is why current LMs struggle to truly ground the meanings of words: they operate entirely within the language of “statistical token distribution.” They can predict what is probable, but they cannot step outside their own formalism to justify or validate their outputs semantically.

To move from probability to truth, we need an architecture that mirrors the Tarski Hierarchy: a system in which higher-level structures can observe, evaluate, and revise the behavior of lower-level ones.
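
As a rough illustration of what such a hierarchy could look like (the class names and the arithmetic “oracle” below are purely hypothetical), a meta-level with access to an evaluation criterion the object level cannot express can audit the lower level and inject new “axioms” into it:

```python
# Object level: can only answer from a closed set of "axioms" (its closure).
class ObjectLevel:
    def __init__(self, axioms):
        self.axioms = set(axioms)

    def derive(self, statement):
        return statement in self.axioms

# Meta level: evaluates statements with a richer criterion (here, real
# arithmetic) and revises the level below by adding new axioms.
class MetaLevel:
    def __init__(self, oracle):
        self.oracle = oracle

    def audit_and_extend(self, system, statement):
        if self.oracle(statement) and not system.derive(statement):
            system.axioms.add(statement)
        return self.oracle(statement)

object_level = ObjectLevel(axioms={("2+2", 4)})
meta_level = MetaLevel(oracle=lambda s: eval(s[0]) == s[1])  # toy "truth" check

claim = ("3+5", 8)
print(object_level.derive(claim))                        # False: outside its closure
print(meta_level.audit_and_extend(object_level, claim))  # True: verified at the meta level
print(object_level.derive(claim))                        # True: closure extended from above
```

The toy arithmetic is beside the point; what matters is the asymmetry: the revision step lives at a level the object system cannot reach from inside.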

Breaking Closure: Recursive Meta-Layer Growth

If intelligence is to escape intrinsic closure, it must be equipped with an external reference frame: a structure governed by different rules, capable of reorganizing, evaluating, and modifying the system beneath it. A plausible path toward AGI therefore requires a hierarchical architecture composed of multiple layers, where each higher layer is strictly stronger than the ones below it - not by scale, but by cognitive capacity. Higher layers must be able to introduce new representational primitives, revise evaluation criteria, and inject new “axioms” into the system’s world model.

Under this view, AGI will not be a single monolithic model like today’s most capable GPT or Gemini. It will be a nested system of increasingly powerful subsystems, echoing Gödel’s insight that limitations within a formal system can only be resolved by embedding it in a stronger one.

This immediately raises a natural question: how deep must this hierarchy go?

Any finite stack of layers, no matter how carefully constructed, remains finite and therefore closed. Blind spots always remain. To overcome them, yet another meta-layer is required. In principle, this process has no terminal point. The idealized notion of AGI therefore corresponds to an infinite recursive hierarchy. 

Such a construction is obviously intractable in practice, but this does not make it irrelevant. In reality, intelligence is always a matter of degree. With sufficiently deep and well-designed meta-layers, a system may surpass human performance across most domains of interest, even if the Platonic ideal of AGI remains asymptotic rather than attainable.

The key implication is that AGI cannot be a standalone, fixed, and closed model. It must be an open-ended architecture, capable of indefinite self-extension, in which each representational layer can itself become the object of further reasoning and revision.

From AI-as-Product to AI-as-Metabolism

If this analysis holds, AGI will not arrive as a static downloadable model or a frozen software package that remains unchanged after shipping. Instead, it will be a system that continually maintains, repairs, and upgrades its own cognition, more like metabolism than memory: one that keeps breaking out of its own formal constraints by climbing a hierarchy of meta-levels. In biological systems, learning and inference are not separate phases; the substrate itself is continuously updating. Intelligence emerges not from static optimization, but from ongoing self-modification.
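
A minimal sketch of this “metabolism” framing, assuming only that learning and inference are interleaved rather than split into a training phase and a frozen deployment phase (a toy online estimator, not a proposal for an actual AGI training loop):

```python
class OnlineEstimator:
    """Toy linear predictor whose 'substrate' changes every time it is used."""

    def __init__(self, lr=0.1):
        self.w = 0.0      # the substrate: never frozen
        self.lr = lr

    def infer(self, x):
        return self.w * x                 # inference uses the current weights ...

    def metabolize(self, x, y):
        error = y - self.infer(x)
        self.w += self.lr * error * x     # ... and every interaction revises them

model = OnlineEstimator()
stream = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # data keeps arriving "in deployment"
for x, y in stream:
    _prediction = model.infer(x)   # no separate training phase:
    model.metabolize(x, y)         # using the model changes the model
print(round(model.w, 2))
```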

While it is easy to say “we need to add more layers,” as discussed above, we must be specific about what kinds of layers to add and what they are for. Layer growth cannot be arbitrary: otherwise, adding layers would be no different from adding plugins, and intelligence would reduce to a bag of tools. A principled expansion must monotonically improve the system’s cognitive degrees of freedom, in ways the layers below could not reliably achieve. In particular, each new layer should enhance at least one core cognitive function, as we discussed in the earlier article:

  • Abstraction
    • Capturing higher-order invariants
    • Compressing multiple concepts into a new primitive
    • Inventing new representational vocabularies
  • Analogy
    • Mapping structure across increasingly distant domains
    • Aligning abstractions, not surface patterns
    • Transferring structural constraints, not just similarities
  • Association
    • Discovering novel relational links
    • Reorganizing conceptual neighborhoods
    • Redefining relevance as context shifts

Each additional layer must unlock a form of semantic or representational freedom that was inexpressible in the lower system, as sketched below. Without this, we are merely increasing complexity, not raising intelligence itself. This is also what distinguishes cognitive expansion from today’s tool-using agent paradigm: adding tools like search, retrieval, memory, or a code interpreter improves an agent’s versatility, but it does not, by itself, help the system climb the cognitive ladder and break the closure (unless such additions enable deeper abstraction, broader analogy, or richer association).
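
One way to make this admission criterion explicit (the interface below is hypothetical, a sketch of the gate described above, not an implementation): a candidate layer is admitted to the hierarchy only if it extends at least one of the three core functions, so tool-style plugins that extend none are rejected.

```python
from dataclasses import dataclass, field

CORE_FUNCTIONS = {"abstraction", "analogy", "association"}

@dataclass
class CandidateLayer:
    name: str
    enhances: set          # core cognitive functions this layer claims to extend

@dataclass
class Hierarchy:
    layers: list = field(default_factory=list)

    def admit(self, candidate: CandidateLayer) -> bool:
        # Reject plugin-style additions that extend no core function.
        if not candidate.enhances & CORE_FUNCTIONS:
            return False
        self.layers.append(candidate)
        return True

stack = Hierarchy()
print(stack.admit(CandidateLayer("web_search_tool", set())))            # False: a tool, not a layer
print(stack.admit(CandidateLayer("cross_domain_analogy", {"analogy"}))) # True: raises cognitive capacity
```

The check here is only on a declared claim; actually verifying that a new layer delivers the claimed enhancement is the hard, open part of the problem.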

The following figure shows one possible trajectory of meta-layer growth:

Example of meta-layer growth

Here the hierarchy starts from a strong but fixed and finite LM (L0), followed by explicit association (L1), which surfaces the associative relations the LM leaves implicit. L2 forms analogies based on structural similarity and alignment across domains, and L3 generates novel abstractions from the analogical relations delivered by L2. This recursion can expand indefinitely, but each step of growth must enhance association, analogy, or abstraction.
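
An illustrative sketch of that L0 → L3 trajectory (the layer internals below are placeholders; only the direction of growth matters): each layer consumes the structures produced beneath it and emits a richer description, turning co-occurring tokens into relations, relations into cross-domain alignments, and alignments into new named invariants.

```python
def l0_language_model(prompt):
    # Stand-in for a frozen LM: emits tokens it already encodes.
    return ["heat", "flows", "pressure", "drops", "current", "flows", "voltage", "drops"]

def l1_association(tokens):
    # Surface relational links the LM leaves implicit (naive pairing here).
    return list(zip(tokens[::2], tokens[1::2]))          # e.g. ("heat", "flows")

def l2_analogy(relations):
    # Align structurally similar relations across different domains.
    return [(a, b) for a in relations for b in relations if a != b and a[1] == b[1]]

def l3_abstraction(analogies):
    # Compress recurring aligned structure into new named primitives.
    grouped = {}
    for a, b in analogies:
        grouped.setdefault(a[1], set()).update({a[0], b[0]})
    return {f"invariant_{verb}": sorted(nouns) for verb, nouns in grouped.items()}

print(l3_abstraction(l2_analogy(l1_association(l0_language_model("...")))))
# {'invariant_flows': ['current', 'heat'], 'invariant_drops': ['pressure', 'voltage']}
```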

The “Infinite Recursion” of the Human Mind

The human mind, our only existing proof that general intelligence is possible, appears to evade explicit infinite layering.

Human minds model the world, model their own modeling process, and redirect attention based on that self-model. This creates what Douglas Hofstadter famously called a strange loop: the observer is a product of neural activity, yet it is simultaneously the one “steering” the focus of those neurons. Human cognition is therefore intrinsically self-referential.

A self-referential system can represent itself, operate on that representation, and modify that representation while doing so. In expressive power, it is therefore functionally equivalent to an unbounded recursive hierarchy: self-reference compresses what looks like an external hierarchy into an internal cycle. Instead of stacking explicit meta-layers forever, the system can repeatedly take itself as input, revise its own representations, and iterate.
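
A rough sketch of that compression, with an obviously simplified self-model (the dictionary fields and the revision rule below are purely illustrative): a single reflexive operator stands in for every explicit meta-level by repeatedly taking the system’s own description as input and revising it.

```python
def reflect_and_revise(self_model: dict) -> dict:
    """One reflexive step: the system inspects and edits its own self-model."""
    revised = dict(self_model)
    # Observe itself: how many revision cycles has it performed so far?
    revised["revisions"] = self_model.get("revisions", 0) + 1
    # Revise its own control policy based on that self-observation.
    revised["focus"] = "broaden" if revised["revisions"] % 2 else "consolidate"
    return revised

state = {"focus": "consolidate", "revisions": 0}
for _ in range(4):                    # the same operator plays every "meta-level"
    state = reflect_and_revise(state)
print(state)                          # {'focus': 'consolidate', 'revisions': 4}
```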

Today’s computational and logical systems, by contrast, lack this kind of intrinsic self-reference. To gain expressive power, they must externalize meta-layers explicitly, stacking one level atop another. This is why artificial systems require unbounded expansion, while human cognition achieves similar power through reflexivity. This gap between explicit hierarchy and intrinsic self-reference may be the deepest divide between current AI systems and the human mind.

Toward an Open-Ended AI System

Scaling up model size and data will build a bigger library, but a library, regardless of its scale, is not a mind. Similarly, developing novel architectures beyond Transformers, MoE, or SSMs can make models more effective at absorbing statistical patterns from data and more energy-efficient in training and inference. These efforts can be expected to further improve benchmark scores and reduce energy consumption, but on their own they will not break the closure or climb higher up the cognitive hierarchy.

The next step-change in AI will likely come not from more parameters, more data, or a different model architecture, but from the introduction of dynamic meta-layers: mechanisms that allow a system to observe, verify, and revise its own representational foundations. AGI is not a destination but a limit, the asymptote of an open system that continually transcends its own constraints. Intelligence, in this view, is closer to metabolism than to memory.

If AGI emerges, it will not arrive as a finite and finished object. It will appear as a self-modifying, self-extending system, one that never stops breaking closure, never stops re-representing itself, and never stops becoming.