Modern AI is built on the assumption that intelligence scales with data, yet Immanuel Kant argued that meaning requires a priori mental structures that organize experience before learning begins. While AI excels at pattern recognition, it lacks the intrinsic "will to organize" and the ability to spontaneously create abstract concepts like "messiness" across unrelated domains. Until machines can autonomously impose order and structure on experience, they remain powerful imitators of intelligence, not sources of it.

One of the most persistent assumptions in contemporary AI is that intelligence can be built by accumulating enough data, representations, and optimization power. Yet this assumption quietly ignores a more fundamental question: what makes experience intelligible in the first place?
Long before the age of machines, this question stood at the center of philosophy. In particular, the 18th-century philosopher Immanuel Kant proposed a rigorous framework in which meaning does not passively arise from sensory input. Instead, it emerges because the human mind actively organizes experience through a set of a priori structures - time, space, causation, evaluative sensibility, and the like - without which experience would remain an unstructured flux.
This essay explores what Kant’s insight implies for AI, and why the absence of such a priori structure reveals a fundamental limitation of current models: they are missing the very "engine" of true intelligence.
The A Priori Mind: Meaning Before Data
Kant’s central claim, as he argued in his “Critique of Pure Reason”, is that experience is possible only because the mind already knows how to organize it.
Sensory inputs alone - colors, sounds, sensations - do not yet form objects, events, or meanings. They become intelligible only when synthesized by mental structures that order events in time, locate phenomena in space, connect occurrences through causation, and evaluate situations through judgment. These mental structures are a priori: they are not learned from experience; rather, they are the conditions that make experience possible at all. They are built-in "filters" and "categories" that organize raw sensory data into meaningful experience. Crucially, the human mind does not wait to be instructed to make sense of the world - it is always doing so, always creating meaning.
A Thought Experiment: Inventing the Concept of “Messy”
Now consider a simple but revealing example. Imagine an AI model that has never encountered the word “messy”, but it is presented with many situations such as:
- a desk buried under unsorted papers and half-finished notes;
- a codebase whose modules depend on one another in tangled, undocumented ways;
- a week of overlapping commitments with no clear order of priority;
- a conversation in which every thread is interrupted before it resolves.
These scenarios differ radically in surface form. What unifies them is not perceptual or surface-level similarity, but a shared structural and affective pattern: disorder, friction, a lack of clear entry points for repair, and a feeling of unpleasantness.
For a human, the abstraction is immediate and visceral. We feel a specific "unpleasantness" and a sense of "functional disorder." We spontaneously synthesize these diverse sensory inputs into a single abstract concept: “Messy”. We invent this concept, and from that point forward, we can apply it generatively to new domains that give us similar feelings.
However, in order to accomplish this, the mind must go through the following steps:
- register the disparate situations as they are given in perception;
- detect a shared structural and affective pattern beneath their surface differences;
- detach that pattern from any particular domain or class of objects;
- stabilize it as a named, reusable concept that can be applied generatively to future cases.
This is not mere clustering on a distribution manifold. It is concept formation, in which any clustering happens at a much higher level of abstraction. Current LLMs can explain "messiness" because they have seen the word used in context in their training data. But they do not seek to create the concept. They lack the intrinsic intention to organize information into regularity.
We would also like to emphasize here that “messy” is an abstract concept. Unlike concrete concepts such as car, eagle, or banana, it is not grounded in a stable class of objects. Its meaning spans domains with no shared perceptual substrate.
Concrete concepts can often be learned via statistical clustering over relatively stable distributions. Much of animal intelligence operates at this level, which is sufficient for survival, pattern recognition, and reactive behavior. But such intelligence rarely extracts cross-domain invariants, which keeps it bound to survival and bars it from discovering higher-level laws of the universe.
Human intelligence, by contrast, is distinguished by its capacity to form abstract concepts that transcend object-level grounding. This capacity underwrites science, law, ethics, and the discovery of universal principles governing nature, often distilled from raw perceptual inputs that appear irrelevant or unrelated at first glance (see also the previous article on the core requirements of intelligence).
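To make the contrast concrete, here is a minimal sketch, assuming nothing but hand-made toy feature vectors (the numbers and feature names are illustrative, not drawn from any dataset): clustering recovers concrete categories that share a perceptual feature space, while the cross-domain instances of "messy" share no feature space at all, so there is no manifold for clustering to operate on.

```python
# A toy sketch, not a real experiment: why clustering suffices for concrete
# concepts but cannot, by itself, yield a cross-domain abstraction like "messy".
import numpy as np
from sklearn.cluster import KMeans

# Concrete concepts: "banana" and "eagle" instances live in ONE shared
# perceptual feature space (say: [size, yellowness, has_wings]), so an
# unsupervised learner can recover the two classes by clustering alone.
perceptual_features = np.array([
    [0.20, 0.90, 0.0],  # banana
    [0.25, 0.85, 0.0],  # banana
    [0.90, 0.10, 1.0],  # eagle
    [0.95, 0.15, 1.0],  # eagle
])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(perceptual_features)
print("clusters over a shared perceptual space:", labels)  # e.g. [0 0 1 1]

# "Messy" has no such shared space: each instance is described by features
# from a different domain, and the feature sets below have no overlap at all.
messy_instances = [
    {"papers_per_m2": 40, "unsorted_ratio": 0.8},          # a cluttered desk
    {"circular_imports": 7, "undocumented_functions": 120}, # a tangled codebase
    {"overlapping_meetings": 5, "unresolved_threads": 9},   # a chaotic schedule
]
shared_keys = set.intersection(*(set(d) for d in messy_instances))
print("features shared across domains:", shared_keys)  # set()
# With no common feature space there is no manifold to cluster on; whatever
# unifies these cases must be imposed at a higher level of abstraction --
# the step the essay calls concept formation.
```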
Why AI Does Not Do This Autonomously
To be fair, modern AI systems can be guided to generate abstract concepts. With careful prompting, an LLM can compare cases, propose unifying descriptions, or even coin new terms. But something essential is missing: the model does not decide on its own that a new abstraction should exist, that it matters, or that it should be stabilized as a reusable concept. Absent explicit instruction, there is no intrinsic drive to organize experience into meaning. This is not just a computational limitation; it is a teleological one.
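As an illustration of what such guided abstraction looks like in practice, here is a minimal sketch; `call_llm` is a hypothetical placeholder for whichever chat-completion client one happens to use, and the cases and prompt wording are invented for the example.

```python
# A sketch of externally guided abstraction. `call_llm` is a hypothetical
# placeholder for any chat-completion client; no specific API is implied.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; wire to your client of choice."""
    raise NotImplementedError

CASES = [
    "A desk buried under unsorted papers.",
    "A codebase whose modules import each other in circles.",
    "A week of overlapping commitments with no clear priority.",
]

prompt = (
    "Here are three situations:\n"
    + "\n".join(f"- {c}" for c in CASES)
    + "\n\nCoin a single new term for what they share, define it, "
      "and apply it to one new situation of your own."
)
print(prompt)
# response = call_llm(prompt)  # the model can execute this competently...

# ...but every decisive step -- noticing that a unifying concept should exist,
# judging that it matters, asking for it to be stabilized as a reusable term --
# was supplied from outside, by the person who wrote the prompt.
```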
Current AI systems optimize responses against external, manually designed objectives, but they have no intrinsic incentive to form concepts. Human cognition, by contrast, is characterized by a persistent, pre-reflective intention: the need to make sense of the world. The human mind is under a constant, restless pressure to make the world intelligible. This "meta-layer" of seeking structure is what allows novel concepts to emerge. It seeks regularity, enforces unity, and treats meaning as something that ought to exist. This layer is deeply felt but difficult to formalize (“when you can feel it, you don’t need to describe it”; see our earlier article here).
Now if we attempt to program this "will to organize" into an AI, we immediately face the infinite recursion problem (previously discussed in the article "Breaking the Closure Limit"). If we write a program (P1) to generate an organizational structure, what program (P2) generated the logic for P1? If we formalize the "intuition" behind concept formation, we are merely kicking the can down the road. This suggests that the origin of intelligence is a non-reducible ontology. It is a substrate-dependent quality formed through eons of biological evolution, not a body of code that can be recursively generated from a vacuum.
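A toy sketch of that regress (ordinary Python, nothing more): each organizer is produced by a generator one level up, and the chain terminates only because a human hard-coded the base rule - which is precisely the point.

```python
# A toy illustration of the regress, not a proposal. P1 builds an organizer
# from a rule, but the rule itself (P2) was written by the programmer, i.e.
# it came from outside the system. Automating P2 would only demand a P3.

def p2_base_rule():
    """P2: decides what counts as 'order'. Hand-written, not generated."""
    return lambda observations: sorted(set(observations))

def p1_make_organizer(rule):
    """P1: builds an organizing function from a rule it did not choose."""
    def organizer(observations):
        return rule(observations)
    return organizer

organizer = p1_make_organizer(p2_base_rule())
print(organizer(["tangled cables", "spilled ink", "tangled cables"]))
# -> ['spilled ink', 'tangled cables']
# Asking "what generated p2_base_rule?" only pushes the question up a level.
```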
Kant and Gödel: Two Sides of the Same Boundary
At this boundary, Kant’s a priori structure of the mind converges with the logic of Kurt Gödel. Gödel proved that no sufficiently powerful, consistent formal system can prove all the truths expressible in its own language. Kant argued that no empirical process can fully account for the structures that make experience intelligible. In both cases, the origin - the foundational layer - cannot be derived from within the system it governs. It is the deepest layer, one that emerged spontaneously. These axioms are the foundation upon which all other logic and language are built, yet they cannot themselves be derived from within the systems they ground.
Kant’s a priori structures of the mind function analogously to axioms: they are not conclusions, they are not learnable, and they are the prior conditions under which all meaning and all intellectual activity of the human mind occur.
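For reference, a standard textbook statement of the Gödel result invoked here (the Gödel–Rosser form of the first incompleteness theorem; nothing in it is specific to this essay):

```latex
% First incompleteness theorem (Goedel-Rosser form, standard statement):
% if $T$ is a consistent, effectively axiomatizable theory that interprets
% elementary arithmetic, then there is a sentence $G_T$ in the language of
% $T$ that $T$ can neither prove nor refute:
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T .
\]
% In particular, T cannot settle every sentence of its own language from within.
```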
The Axioms of the Human Mind
If intelligence has an origin point, it may not be a line of code, a loss function, or a dataset. It may be a non-reducible ontological layer: one that evolution shaped over millions of years, embedded in biological substrates that are not isomorphic to formal symbol manipulation.
Kant’s a priori structures are not mystical, but they are foundational. They are the silent axioms of the human mind, the conditions under which abstraction, meaning, and understanding arise at all.
Until artificial systems possess not only representations but also the authority to invent and stabilize meaning, they will remain powerful mirrors of intelligence, not its origin.