Without the ability to invent new concepts and develop solutions to unfamiliar problems, an AI system cannot be truly intelligent. Genuine intelligence requires an open-ended system capable of autonomously expanding its representational space to bridge the gap between its internal models and the realities it aims to understand or achieve. Today's AI systems remain largely closed-ended and bounded by their training data. Without a shift toward open-ended intelligence, the pursuit of AGI will remain out of reach.

As we discussed in the previous essay "Breaking the closure limit", current language models have exhibited breathtaking capabilities such as coding and mathematical problem solving, yet they still operate within a "closed-loop" system. Their universe is bounded by the convex hull of their training data. They can interpolate, recombine, and mimic, but they cannot transcend. To reach true AGI, we must move toward open-ended intelligence: systems capable of autonomous concept invention and of generating truly novel solutions to unseen problems.
Many definitions of AGI have been proposed over the years, and most of them focus on an AI system's ability to solve a wide range of tasks at human-level performance. But to me, an essential operational definition of AGI is very simple:
Can the system invent novel concepts autonomously?
Concept invention is one of the fundamental engines of civilization. Without novel concepts being created and serving as stepping stones that enable deeper reasoning and understanding about the universe, mathematics would never have been created, physics would not exist, and parliaments, banks, and other institutions would never have been established. Every major intellectual advance rests upon the creation of new abstractions that allow humans to reason about the world in deeper ways.
However, let's now consider a simple thought experiment. Imagine an AI model that has observed many objects: three apples, five chairs, hundreds of stars. Yet it has never been taught any concept related to mathematics. Could it autonomously invent the concept of "number", just as ancient humans did? Or, even if such a model understood the natural numbers, could it invent the concept of "negative numbers" upon encountering expressions like 2-6 or 5-9? Or consider another case: having observed a finite sequence of numbers 1, 2, 3..., would it be able to derive the concept of "infinity", which certainly cannot be observed directly in any experience?
I don't think current AI systems can do this. These are not tasks of retrieval or pattern completion; they are acts of autonomous representational expansion. Without the ability to organize experience and abstract it into novel symbolic frameworks, AI remains essentially a library of training data rather than an inventor. Human civilization, by contrast, is a tower built from successive layers of abstraction. If AI cannot add layers to that tower, it will always be an imitator operating within a closed universe.
To move toward open-ended intelligence, we must understand the cognitive mechanics behind concept invention itself.
How is the invention of new concepts, and by extension of novel solutions, possible at all, and what are the mental driving forces behind innovation?
To approach this question, imagine a mind encountering the world without any concept of "number" yet. You observe a sequence of objects: a few apples, some stones, many birds. You perceive distinct objects arranged in space, but something is missing: nothing in your mind captures the property of "how many objects are present" shared across these situations. Each perception is stored as a separate episode, and the mind lacks a unifying abstraction that compresses these experiences into a common structure. This absence creates a form of cognitive tension: you feel an urge to organize your observations under a unified concept, and that urge triggers the birth of the new concept "number".
From this perspective, one can see that concept invention arises from a few primitive cognitive engines. These a priori structures lay the foundation for cognitive activity (see also my previous essay "Thinking with Kant"):
1. Innovation as Gap-Reduction
Human cognition constantly attempts to maintain a coherent internal model of the world. When perception cannot be efficiently summarized by existing concepts, a representational gap emerges. The desire to organize and regularize perceptions is the driving force behind the invention of new concepts and solutions. Before the concept of number exists, scenes involving multiple objects lack a unifying representation of quantity. The mind feels disorganized: it repeatedly encounters structurally similar situations, but cannot unify them under a single compressed description. Concept invention, from this perspective, is not driven by the pursuit of novelty, but by the drive toward better organization and understanding of the world. The mind reorganizes itself by generating new concepts when the existing concept reservoir fails to predict or summarize world states.
The invention of probability emerged from the ubiquitous randomness that deterministic explanation could not resolve. Negative numbers emerged when subtraction produced results that the natural number system could not accommodate. Infinity arose when finite arithmetic proved insufficient for describing unbounded growth... All these cases of concept invention reflect the same pattern: existing representational structures fail to organize and interpret experience, a conceptual gap emerges, and a new abstraction is constructed to restore mental order.
Concept invention is therefore a process of gap-reduction between the current representational state and a more organized, desirable one.
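The gap-reduction view can be made concrete with a toy sketch. The code below is purely illustrative, not a model of any real system: all names and the token-counting cost measure are my own assumptions. It compares describing scenes as raw episodes (one token per perceived object) against describing them through a hypothetical "count" primitive; the representational gap shows up as the difference in description length, and inventing the new primitive is what closes it.

```python
# Toy sketch (illustrative assumptions only): concept invention as
# gap-reduction, measured by description length.

scenes = [
    ["apple"] * 3,
    ["stone"] * 5,
    ["bird"] * 7,
]

def describe_raw(scene):
    # Without a "number" concept, each perception is stored as a
    # separate episode: one token per object.
    return list(scene)

def describe_with_count(scene):
    # With the invented "count" primitive, any scene compresses to a
    # (kind, count) pair: two tokens, regardless of scene size.
    return [scene[0], len(scene)]

raw_cost = sum(len(describe_raw(s)) for s in scenes)         # 3 + 5 + 7 = 15
new_cost = sum(len(describe_with_count(s)) for s in scenes)  # 2 + 2 + 2 = 6

# The representational gap: how badly existing concepts fail to
# compress experience. Introducing "count" reduces it to zero-cost
# growth as more scenes arrive.
gap = raw_cost - new_cost
print(raw_cost, new_cost, gap)  # 15 6 9
```

The point of the sketch is only the shape of the mechanism: the trigger for invention is a compression failure, and the invented abstraction is whatever restores a short, unified description.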
2. Abstraction as Invariant Extraction
But how does the mind actually construct a new concept? In order to reorganize unconsolidated perceptions, the human mind must search for invariants - features that remain stable across changing contexts. Without this drive, experience would remain an unstructured stream of sensory events. With it, the mind compresses diversity into structure, where meaning emerges. The invention of a concept like "number" is not an arbitrary act of labeling; it is the stabilization of an invariant across perceptual variability. Once this invariant is detected, the mind anchors it symbolically - with a word, a gesture, or a mark. The symbol becomes a handle, allowing the abstraction to be manipulated independently of any particular objects. At that moment, "number" becomes a cognitive primitive.
Abstraction is the search for invariants and compression of perceptions. Human intelligence appears to possess a deep intrinsic drive to search for such invariants. Without this drive, the world would appear as little more than a stream of sensory noise.
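Invariant extraction can also be sketched as a tiny search. In the toy setup below (all names and the candidate-feature pool are hypothetical assumptions of mine, not a cognitive model), some pairs of scenes are "felt" to be alike, and the mind's job is to find a feature that stays stable within every alike pair while everything else varies. What survives the search is the invariant that gets anchored to a symbol.

```python
# Toy sketch (illustrative only): abstraction as invariant extraction.
# Each pair below contains two scenes experienced as "alike".

episodes = [
    (("apple", "apple", "apple"), ("bird", "bird", "bird")),  # alike
    (("stone",) * 5, ("star",) * 5),                          # alike
]

# A small pool of candidate abstractions the mind might test.
candidates = {
    "kind": lambda scene: scene[0],     # what the objects are
    "count": lambda scene: len(scene),  # how many there are
}

def invariants(pairs, features):
    # Keep only the features that agree within every alike pair.
    stable = []
    for name, feature in features.items():
        if all(feature(a) == feature(b) for a, b in pairs):
            stable.append(name)
    return stable

found = invariants(episodes, candidates)
print(found)  # ['count']: kind varies across alike scenes, cardinality does not
# Anchoring "count" to a symbol ("number") makes it manipulable
# independently of apples, birds, stones, or stars.
```

Nothing here explains where the candidate pool itself comes from - that is precisely the hard, open part of the problem - but it shows why stability-under-variation is the signature of a concept worth naming.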
3. Association for Imaginative Expansion
Once a concept is formed and stabilized, it becomes a stepping stone for further conceptual expansion. For example, extending the concept of positive numbers, one can imagine subtractions like 3-5 yielding negative results. From counting finite collections, one can imagine unbounded extension toward infinity. These novel concepts are no longer derived from perception: one will never see someone take away five apples from a basket containing three, or stare at a sky with infinitely many stars. Instead, they are constructed via imaginative association, a process that expands concepts within internal virtual spaces.
In such cases, the mind takes its current representational state and explores adjacent "what if" possibilities: What if subtraction does not stop at zero? What if magnitude can be unbounded? The associative procedure is never directionless, and these imaginative moves are not arbitrary jumps. They are guided by the mind's intrinsic intuition for structure, consistency, symmetry, and other forms of regularity across related concepts (for example, negation is the symmetric counterpart of positivity, and infinity emerges from extension along the counting axis). Association and imagination are critical driving forces in expanding the conceptual space, yielding novel yet reasonable concepts that compress virtual experiences.
From this perspective, creation can be understood as reasoning from imagination.
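The negative-number example can be sketched as closure-seeking. In the toy code below (an illustrative assumption of mine, loosely echoing the standard construction of integers from naturals, not a claim about cognition), subtraction on the naturals visibly fails to close, and the symmetric "what if subtraction does not stop at zero" move introduces a new kind of object - a signed magnitude - that restores closure without ever being perceived.

```python
# Toy sketch (illustrative only): imaginative expansion as closure-seeking.

def subtract_naturals(a, b):
    # Within the old conceptual space, 3 - 5 simply has no answer.
    if a >= b:
        return a - b
    return None  # representational gap: no existing concept covers this

# The imaginative "what if" move: represent quantities as
# (magnitude, sign) pairs, so subtraction is defined everywhere.
# Negation mirrors positivity - a symmetry, not an observation.
def subtract_extended(a, b):
    magnitude = a - b if a >= b else b - a
    sign = 1 if a >= b else -1
    return (magnitude, sign)

print(subtract_naturals(3, 5))   # None: the gap is detected
print(subtract_extended(3, 5))   # (2, -1): "negative two", never perceived
print(subtract_extended(5, 3))   # (2, 1): old answers are preserved
```

The design choice worth noticing is conservativeness: the extension must reproduce every old answer while covering the cases the old system could not, which is exactly the "novel yet reasonable" constraint described above.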
If we examine this process further, we naturally reach a meta-question: What enables this entire process in the first place? How is it possible that the human mind runs through it autonomously, making creation and innovation possible?
Beneath abstraction and association lie even deeper cognitive priors, what we might call cognitive axioms. Human cognition appears to be structured by several primitive, foundational tendencies:
1. Organization in space, time, and causality
The human mind instinctively organizes perceptions along the axes of space, time, and causality. Without these raw intuitions of objects, the passage of time, and sequential consequences, there would be no starting point for making sense of the world, and no ground from which concepts could emerge.
2. Sensitivity to prediction errors and a drive to resolve discrepancies
When expectations fail, or when the current representational space is insufficient to organize observations or achieve desired goals, the mind feels discomfort and is motivated to trigger the process of novelty creation. When no rearrangement of existing representations suffices, a new representational primitive must be introduced.
3. Persistent drive toward order: Symmetry, regularity, simplicity, consistency
These tendencies guide the search direction for forming concepts and establishing solutions. Without these aesthetic guidelines regulating the search process, the human mind would face an enormous search space and might never reach plausible novel concepts and solutions that resolve prediction discrepancies.
These cognitive axioms do not determine specific concepts; rather, they shape and guide the evolution of mental states and define the direction of representational evolution. They are the starting point that makes human intelligence possible at all. Depending on context, different drives dominate:
- In science, the gap lies between observation and explanation.
- In engineering, the gap lies between current state and desired goal state.
- In concept invention, the gap lies between perception and available abstractions.
But across domains, the underlying mechanism is unified: the mind reorganizes itself to reduce structural discrepancy.
Modern AI systems are optimized primarily for task performance. Training objectives like next-token prediction, supervised learning, reward maximization, and benchmark optimization all aim to improve accuracy on a specific set of predefined tasks. These objectives refine performance within a fixed representational space, but they do not incentivize the creation of new conceptual primitives.
Today's systems lack several foundational layers that appear central to human innovation and creation, as discussed above: intrinsic anomaly detection as a generative drive, meta-objectives for representational reorganization, structural priors to guide and regulate abstraction and imagination, symbolic stabilization of newly discovered invariants, and mechanisms for expanding the hypothesis space itself. When an LLM encounters situations far outside its training distribution - especially in domains requiring innovation - the best it can do is interpolate or hallucinate. It has no mechanism for reorganizing and expanding its internal universe to close the gap between new observations and existing representations.
If the cognitive picture described above is even approximately correct, then current AI systems remain far from genuine open-ended intelligence, and the goal of AGI is still distant. To make real progress, we must translate these cognitive fundamentals into computational frameworks for open-ended AI, which we will discuss in the next essay. But one thing we know for sure: modeling open-ended intelligence will be much harder than modeling closed-ended intelligence.