Yuan Cao

ex-Google DeepMind, into something new

Essays
Modeling the Great Takeoff: Collective Intelligence for the Future of Growth

What we take for granted today, from modern life and global wealth to rapid technological progress and abundance, is a recent departure from thousands of years of stagnation, born of the Great Takeoff merely two centuries ago. If human society once ignited this innovation engine, could AI model the process and help trigger a second takeoff? Is today’s AI mature enough to replicate the core dynamics of creative destruction, and where do gaps remain? This article argues that the missing research frontier is not making individual AI systems smarter, but building AI as a collaborative, societal innovation system.

The Missing Axioms: Thinking with Kant in the Age of AI

Modern AI assumes that intelligence scales with data, yet Immanuel Kant showed that meaning requires a priori mental structures that organize experience before learning begins. While AI excels at pattern recognition, it lacks the intrinsic "will to organize" and the ability to spontaneously create abstract concepts like "messiness" across unrelated domains. Until machines can autonomously impose order and structure on experience, they remain powerful imitators, not sources of intelligence.

Innovation as the Highest Measure of Intelligence

Human intelligence is ultimately defined not by task performance, but by innovation: the ability to recursively reshape environments, generate new concepts, and expand the space of what can be known and built. Unlike animals and today’s AI systems, humans break cognitive closure by constructing self-amplifying systems of language, knowledge, and institutions. This article argues that innovation is the highest measure of intelligence and the true north star for AGI. Achieving it requires moving beyond static models toward systems capable of open-ended, self-referential world construction.

Breaking the Closure Limit: Paths Toward Stronger Intelligence Modeling

Scaling laws and new architectures alone cannot overcome a deeper structural limit in today’s AI: representational closure. Drawing on Gödel, Tarski, and the hierarchy of cognitive meta-levels, this article argues that intelligence must be modeled as an open, self-extending system rather than a fixed, static model. True intelligence emerges through recursive meta-levels that continually revise their own foundations. AGI, in this view, is not a model but a process that never stops breaking its own limits.

Similarity, Self-Reference, and the Difficulty of Intelligence Modeling

The difficulty of modeling intelligence is not merely technical, but fundamentally conceptual. This article traces the limits of AI to three deep constraints: the fluid, context-sensitive nature of similarity and abstraction, the self-referential character of intelligence, and the impossibility of fully formalizing abstraction within fixed computational systems. Together, these limits suggest that intelligence lies at the boundary of what logic, computation, and science can precisely describe.

The Minimal Conditions for Intelligence: Association, Abstraction, and Analogy as a Unified Cognitive Core

Intelligence, at its core, can be reduced to three fundamental capacities: association, abstraction, and analogy. Together, these abilities allow an agent to detect patterns in experience, generalize beyond specific situations, and transfer knowledge to the unknown. Rather than treating intelligence as a collection of complex, specialized skills, the post presents the “three A’s” as a minimal, unified cognitive core from which higher functions like planning, language, and creativity naturally emerge.

The Limits of Computability and the Meta-Problem of Artificial Intelligence

Artificial intelligence, as a computational system, is fundamentally constrained by the mathematical limits of computation itself. Drawing on Gödel, Turing, and the theory of uncomputability, this article explores why intelligence that can reflect on, transcend, and invent computation may lie beyond what algorithms can achieve. While AI can approximate human intelligence with increasing power, true equivalence may be blocked by a deep meta-level ceiling imposed by logic and mathematics. The piece reframes the AGI debate not merely as an engineering challenge, but as a foundational question about the nature of mind and computation.

Reconsidering Intelligence in the Age of Large Models

This essay steps back from today’s AI hype to ask a more fundamental question: do we truly understand what intelligence is? While large language models dominate headlines and investment, it argues that many core scientific questions about intelligence, computation, and understanding remain unresolved. Through perspectives from computation, cognition, and the philosophy of science, the series explores the limits of current AI, what is missing from today’s models, and what future paths toward deeper intelligence might look like.
