Yuan Cao

Research Scientist, Google DeepMind

Articles
Innovation as the Highest Measure of Intelligence

Human intelligence is ultimately defined not by task performance, but by innovation: the ability to recursively reshape environments, generate new concepts, and expand the space of what can be known and built. Unlike animals and today’s AI systems, humans break cognitive closure by constructing self-amplifying systems of language, knowledge, and institutions. This article argues that innovation is the highest measure of intelligence and the true north star for AGI. Achieving it requires moving beyond static models toward systems capable of open-ended, self-referential world construction.

Breaking the Closure Limit: Paths Toward Stronger Intelligence Modeling

Scaling laws and new architectures alone cannot overcome a deeper structural limit in today’s AI: representational closure. Drawing on Gödel, Tarski, and theories of cognitive hierarchy, this article argues that intelligence must be modeled as an open, self-extending system rather than a fixed, closed one. True intelligence emerges through recursive meta-levels that continually revise their own foundations. AGI, in this view, is not a model but a process that never stops breaking its own limits.

Similarity, Self-Reference, and the Difficulty of Intelligence Modeling

The difficulty of modeling intelligence is not merely technical, but fundamentally conceptual. This article traces the limits of AI to three deep constraints: the fluid, context-sensitive nature of similarity and abstraction, the self-referential character of intelligence, and the impossibility of fully formalizing abstraction within fixed computational systems. Together, these limits suggest that intelligence lies at the boundary of what logic, computation, and science can precisely describe.

The Minimal Conditions for Intelligence: Association, Abstraction, and Analogy as a Unified Cognitive Core

Intelligence, at its core, can be reduced to three fundamental capacities: association, abstraction, and analogy. Together, these abilities allow an agent to detect patterns in experience, generalize beyond specific situations, and transfer knowledge to the unknown. Rather than treating intelligence as a collection of complex, specialized skills, this article presents the “three A’s” as a minimal, unified cognitive core from which higher functions like planning, language, and creativity naturally emerge.

The Limits of Computability and the Meta-Problem of Artificial Intelligence

Artificial intelligence, as a computational system, is fundamentally constrained by the mathematical limits of computation itself. Drawing on Gödel, Turing, and the theory of uncomputability, this article explores why intelligence that can reflect on, transcend, and invent computation may lie beyond what algorithms can achieve. While AI can approximate human intelligence with increasing power, true equivalence may be blocked by a deep meta-level ceiling imposed by logic and mathematics. The piece reframes the AGI debate not merely as an engineering challenge, but as a foundational question about the nature of mind and computation.

Reconsidering Intelligence in the Age of Large Models

Stepping back from today’s AI hype, this essay asks a more fundamental question: do we truly understand what intelligence is? While large language models dominate headlines and investment, it argues that many core scientific questions about intelligence, computation, and understanding remain unresolved. Through perspectives from computation, cognition, and the philosophy of science, the series explores the limits of current AI, what is missing from today’s models, and what future paths toward deeper intelligence might look like.
