The difficulty of modeling intelligence is not merely technical, but fundamentally conceptual. This post traces the limits of AI to three deep constraints: the fluid, context-sensitive nature of similarity and abstraction, the self-referential character of intelligence, and the impossibility of fully formalizing abstraction within fixed computational systems. Together, these limits suggest that intelligence lies at the boundary of what logic, computation, and science can precisely describe.

Modern AI systems have achieved remarkable feats, yet they remain fundamentally unlike human intelligence. While they excel at pattern recognition, they struggle with analogy, abstraction, fluid reasoning, and conceptual understanding. Why is modeling intelligence so difficult? Why does intelligence resist formalization in a way that physics or chemistry does not?
A large body of work in cognitive science, neuroscience, and theoretical computer science suggests that the challenge is not merely an engineering difficulty; it is conceptual. Intelligence inhabits a domain at the boundary of what can be formally specified or computationally modeled. Below, we discuss how the difficulty of modeling intelligence arises from three intertwined limits: the fluid nature of similarity, the self-referential character of intelligence, and the impossibility of fully formalizing abstraction.
Together, these limits suggest that modeling intelligence is not simply a problem of scale, but of principle.
1. Similarity Measure As the Foundation of Intelligence
Many cognitive functions – association, categorization, analogy, imagination – depend on the ability to perceive similarity. When we recall an experience, extend a concept, or draw an analogy, the mind performs graded comparisons that call up related thoughts, thoughts that resemble the current one in some way.
Similarity is the hidden, fundamental metric that makes all of these operations possible.
Current AI and machine-learning systems rely almost entirely on explicit similarity metrics, most commonly cosine similarity computed in high-dimensional embedding spaces. These metrics have proven enormously powerful, as evidenced by the success of modern AI technologies. In fact, the entire AI hardware ecosystem – driven by NVIDIA’s GPU dominance – is fundamentally optimized for dense matrix multiplications, which are nothing more than large-scale similarity computations. In this sense, today’s AI economy is built on cosine similarity.
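As a rough illustration (a minimal NumPy sketch with made-up toy embeddings, not any particular model’s vectors), computing cosine similarity between a query and a set of stored items reduces to normalizing vectors and taking a dense matrix product:

```python
import numpy as np

# Toy embedding matrix: 4 items, each a 3-dimensional vector (made-up values).
items = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.0, 0.9, 0.4],
    [0.1, 0.0, 1.0],
])
query = np.array([1.0, 0.0, 0.1])

# Normalize rows to unit length; then one matrix-vector product yields
# all cosine similarities at once -- exactly the dense multiply GPUs accelerate.
items_n = items / np.linalg.norm(items, axis=1, keepdims=True)
query_n = query / np.linalg.norm(query)
scores = items_n @ query_n  # shape (4,): cosine similarity of each item to the query

print(scores)  # the highest score is the "most similar" item under this fixed metric
```

The same pattern, scaled up to billions of vectors and thousands of dimensions, is the workload modern accelerators are built to run quickly.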
Human similarity judgments, however, work differently. For example, we grasp the analogy in “time flies like an arrow” not because of any literal shared feature between time and arrows, but because we project a felt structure of directedness and rapidity. Such analogies depend on fluid, context-driven interpretations that no fixed metric fully captures.
Thus, at the very foundation, intelligence depends on a similarity mechanism that is dynamic, adaptive, and conceptually open-ended, not a static numerical similarity function.
2. Intelligence As a Self-Referential System
A unique feature of AI research is that, unlike most scientific phenomena, intelligence is both the observer and the observed. To model intelligence scientifically, the mind must construct a framework that describes the mind itself. This creates a self-referential loop similar in spirit to Gödelian incompleteness (a sufficiently rich system cannot fully describe itself), Turing's halting problem (limits on predicting self-referential computation), or Russell-like paradoxes of self-description.
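The flavor of that loop can be sketched in a few lines (purely illustrative Python; `halts` here is a hypothetical oracle that, by Turing's argument, no real program can implement):

```python
def halts(program, data):
    """Hypothetical oracle: return True if program(data) eventually halts.
    Turing's argument shows no such general procedure can exist."""
    raise NotImplementedError("a general halting oracle is impossible")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about the
    # program applied to its own source.
    if halts(program, program):
        while True:   # loop forever if the oracle says "it halts"
            pass
    else:
        return        # halt immediately if the oracle says "it loops"

# paradox(paradox) defeats any candidate oracle: whichever answer
# halts() gives about this call, the actual behavior contradicts it.
```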
We are not merely modeling a system; we are using the system to model itself. This reflexivity introduces inherent instability and incompleteness. It is like trying to see your own eye without a mirror: the system must describe itself using tools that arise from itself.
A complete model of intelligence would need to include a model of itself, which in turn must include a model of itself, and so on. This creates an infinite regress, an incompletable hierarchy. Human minds bypass this by using informal, approximate, felt self-models.
But formal systems cannot do this without hitting logical limits.
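As a toy illustration of the regress (nothing more than a sketch), a model required to contain a complete model of itself never bottoms out:

```python
def build_self_model(depth=0):
    # A "complete" model of the system must contain a model of the system,
    # which must contain a model of the system, and so on without end.
    return {"level": depth, "model_of_itself": build_self_model(depth + 1)}

# Calling build_self_model() never produces a finished model; Python simply
# raises RecursionError, mirroring the incompletable hierarchy described above.
```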
3. The Limits of Formalizing Abstraction
If similarity is the engine of intelligence, then any attempt to model intelligence inevitably begins with defining a similarity metric—something that captures how minds associate experiences, recognize analogies, and form abstractions. But the moment we try to formalize similarity, we encounter a deep conceptual limit:
Any explicit similarity metric becomes immediately concrete, fixed, and finite.
That is to say, the moment we encode similarity as any mathematical object – a matrix, a function, a rule, a kernel, or a distance metric – it ceases to be truly abstract. It becomes bounded by its chosen representational form, inflexible to new contexts, and static relative to the fluid human notion of resemblance. Human similarity judgments are context-sensitive, goal-dependent, and metaphorically fluid: the mind can change its notion of “what matters” on the fly. A formal metric, however, is frozen by design. The paradox is that to describe similarity precisely is to destroy the abstraction it aims to capture. The act of formalization collapses open-ended conceptual similarity into a rigid, enumerated definition.
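To make “frozen by design” concrete, here is a toy sketch (hypothetical features and weights, invented only for illustration): once we commit to a weighting of what matters, the metric returns one answer no matter how the context shifts:

```python
# Hypothetical feature vectors, chosen only for illustration.
orange     = {"round": 1.0, "edible": 1.0, "orange_colored": 1.0}
apple      = {"round": 1.0, "edible": 1.0, "orange_colored": 0.0}
basketball = {"round": 1.0, "edible": 0.0, "orange_colored": 1.0}

# The weights encode, once and for all, "what matters" for resemblance.
WEIGHTS = {"round": 0.2, "edible": 0.5, "orange_colored": 0.3}

def similarity(a, b):
    # Weighted feature agreement: a fixed, finite, context-blind metric.
    return sum(w * (1.0 - abs(a[f] - b[f])) for f, w in WEIGHTS.items())

# In a cooking context a mind treats an orange as closer to an apple;
# in a game of catch, as closer to a basketball. This metric returns the
# same single answer regardless of context, because the weights are frozen.
print(similarity(orange, apple))       # 0.7
print(similarity(orange, basketball))  # 0.5
```

Any richer metric, learned or hand-built, faces the same issue: whatever it treats as relevant was fixed at the moment of definition.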
This limitation is not merely a practical shortcoming of current AI methods. It reflects deep theoretical constraints, as we discussed above and in previous posts: Gödelian incompleteness, the undecidability of the halting problem, and the paradoxes of self-reference all mark hard limits on what a formal system can say about itself.
Therefore, the very act of formalizing similarity collapses abstraction into concreteness. This does not mean formal models are useless; they can be immensely powerful. But it does mean that the essence of human abstraction cannot be fully captured by any static, formal similarity measure. And therefore, no computational model of intelligence, no matter how advanced, can perfectly mirror the open-ended, self-modifying, context-sensitive nature of human similarity and thought. There will always be a gap between any model of intelligence and real human intelligence.
4. The Boundary of Science and Logic
The discussion above points toward a deeper issue: the boundary of what science and logic can describe. To understand the universe, humans must describe the world precisely. We therefore rely on well-defined symbolic systems – mathematics, logic, scientific models – that allow us to express the world in precise, well-defined terms. These systems depend on fixed formalizations, stable variables, explicit rules, objective metrics, and consistent procedures.
But human abstraction does not operate this way. It is unbounded, contextual, and self-modifying. Our concepts shift with experience, our categories evolve as we learn, our interpretations adapt to new contexts. Any formal system that tries to encode this open-ended, reflexive flexibility faces intrinsic limits. Once the system becomes expressive enough to model its own evolving semantics, it becomes either inconsistent or incomplete. In other words, formal procedures cannot fully capture processes that are self-referential, context-dependent, and semantically open-ended.
The challenge of modeling intelligence is therefore not an isolated problem. It is a concrete manifestation of a more fundamental limitation: Formal systems cannot exhaustively describe phenomena whose structure continually changes in response to their own representations.
Intelligence sits precisely within this boundary region, where symbol systems meet the abstraction of meaning, and where formal descriptions inevitably fall short.
5. When You Can Feel It, You Don’t Need to Describe It
Human minds are conscious, and conscious minds operate in a mode that precedes and exceeds explicit computation. Consciousness is about feeling and experiencing itself. When you feel it, you don’t have to describe it, and your mind can bypass any computational procedure. We grasp analogies, metaphors, and abstractions through a direct, pre-formal sense of “this is like that”, before we can articulate such comparisons in any formal representation. We experience analogies intuitively, not through an explicit metric.
When you sense the “spirit” of a word like “elevated”, the concept of “number” from observing multiple objects, or the “connection” that unites two disparate concepts, you are relying on a form of similarity judgment that is pre-conceptual, embodied, and not easily expressible in formal terms. This is not mystical; it reflects a structural fact: conscious experience provides direct, non-symbolic access to similarity. You simply feel the relation and association before you can articulate it.
By contrast, any formal system – logical, computational, or mathematical – must specify its primitives: the variables, the metrics, the rules, the distances, the representations. Such systems can handle similarity only after it has been reduced to a describable form. They require explicit criteria for what counts as “similar,” “analogous,” or “abstract,” and therefore inevitably capture only a constrained subset of the relations the mind can intuit. This divergence reveals a fundamental gap:
The similarity that the mind feels is intrinsically richer and more open-ended than any similarity that can be formally defined. The challenge of modeling intelligence can therefore be viewed, at its core, as the challenge of bridging the experiential, pre-conceptual, intuitive sense of similarity with the rigid, formal metrics required by computation.