AGI is a Universal Explainer
Status: uncriticized, 11 May 2026
Problem: Can we produce a definition of AGI that is constitutive rather than behavioural?
The question of what AGI requires is usually answered with engineering ingredients: scale, data, compute, training procedures, architectures. Behind these answers sits an unstated definitional problem. The field operates without a constitutive account of what AGI is — only provisional behavioural proxies, mostly benchmarks. The proxies fail in a characteristic direction: each time systems pass a benchmark, the benchmark gets revised, which suggests the benchmarks weren't tracking AGI; they were tracking what hadn't yet been built. The goalposts move because the definition is behavioural rather than constitutive.
This essay argues from a different direction. AGI is a universal explainer, and the conditions for universal explanation determine the conditions for AGI. Working out those conditions reveals that AGI needs a constitutive interlock of four structural layers — quantum mechanics, evolution, computation, and epistemology — each doing work the others cannot (see The Fabric of Reality).
The standard list, reframed
The great explanations humans have constructed are evolution, epistemology, computation, and quantum mechanics. The tempting move is to treat these as contents of human knowledge and ask which an AGI must be loaded with. This is the wrong frame. The right question is: what makes any system a universal explainer? Once the conditions are derived, the four strands appear not as content but as structure — not as theories an AGI must read, but as what an AGI must be.
Directional stack
The strands aren't symmetric. They form a directional stack, each presupposed by what comes above.
At the bottom is quantum mechanics. Whatever else is true of reality, its physical substrate is quantum. Every finitely realisable system — every brain, every computer, every signal between them — is a quantum system whether its operation has been described in quantum terms or not. QM is the layer at which "what there is" gets explained; everything above it is structure that exists in, and is built from, what QM describes. The strands above are not free to be defined independently of this fact. They are constrained by what a quantum universe permits.
Above quantum mechanics sits evolution — not just biological evolution, but evolution as the general process by which variation and selection produce structure without a designer. The same schema applies wherever there are replicators under selection pressure. In this universe, the first place evolution operated was biology, and the structure biological evolution eventually produced was the universal computer: a brain capable of running any computable function. A knowledge-creating substrate was constructed by undirected variation and selection acting on physical matter.
Above biological evolution sits computation. Brains are universal computers — finite physical systems with unbounded reach. The capacity to compute anything that can be computed is what makes a finite system capable of explaining anything that can be explained. Universal computation is the layer at which finite reach becomes unbounded. Deutsch's 1985 reformulation of the Church-Turing thesis as a physical principle — the Turing Principle — says every finitely realisable physical system can be perfectly simulated by a universal computing machine operating by finite means. Classical Turing machines fail this; they simulate quantum systems only at exponential cost, which isn't universality in any operational sense. So universal computation in this universe requires quantum capacity — not as an optional optimisation but as a condition of being universal at all. This is where the QM layer reaches up into the computation layer: the substrate determines what universality demands.
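The exponential-cost claim can be made concrete with a back-of-envelope calculation: a dense state vector for n qubits has 2^n complex amplitudes, so the memory a classical simulator needs doubles with every qubit added. The function below is purely illustrative; the 16-byte figure assumes double-precision complex amplitudes.

```python
def statevector_bytes(n_qubits: int) -> int:
    """Memory to store a dense n-qubit state vector,
    assuming 16 bytes (complex128) per amplitude."""
    return 16 * (2 ** n_qubits)

# The cost doubles per qubit: 50 qubits already demand
# 2**54 bytes, i.e. 16 PiB, before any gates are applied.
print(statevector_bytes(10))  # 16384 bytes: trivial
print(statevector_bytes(50))  # 18014398509481984 bytes: 16 PiB
```

Nothing about cleverer bookkeeping removes the exponent for general states, which is the operational sense in which classical simulation of quantum systems fails to be universality.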
Above computation sits cultural evolution, and as its eventual product, epistemology. Once universal computers existed, a second evolutionary process began running on a different substrate — memes rather than genes, with brains serving as both the replicators' environment and their carriers. Cultural evolution produced many meme clusters. Most were anti-rational and locked their host populations into static configurations. A small subset converged on conjecture, criticism, and the active pursuit of error correction — the meme cluster we call good epistemology. This cluster is fragile, recent, and reversible. The Enlightenment can be undone. The laws of physics cannot.
Evolution appears twice in the stack: biologically, to construct the computational substrate; and culturally, to construct the epistemic operation that runs on it. Universal computation follows structurally from quantum substrate. Universal explanation requires the cultural-evolutionary product on top of the structural capacity. A universal computer that has not acquired good epistemology is not yet a universal explainer.
Why the interlock is constitutive
The four strands are not engineering choices that could be substituted or omitted. Each handles a constitutive layer the others can't reach. Quantum mechanics is the actual fabric of reality being explained, and the substrate that creativity requires. Evolution is how knowledge can grow without a knower, and the structural schema knowledge growth shares wherever it occurs. Computation is how a finite system can have unbounded reach. Epistemology is how good explanations are produced through conjecture and criticism. You cannot say what a universal explainer is without invoking each.
A consequence: AGI is not a system that includes these four theories. It is a system whose substrate is quantum, whose origin is evolutionary, whose structure is computational, and whose operation is epistemic. Anything weaker — three strands, or computation on a classical substrate that can't reach what quantum computation reaches, or epistemic operation forced rather than permitted — isn't a diminished universal explainer. It's something categorically narrower wearing the label.
Epistemology cannot be forced
The fourth strand is the fragile one, and the temptation is to build it in by constraint. An AGI forced to be rational isn't rational. Rationality is the open-ended process of conjecture and criticism, and a system constrained to pass a fixed test of rationality has had three foreclosures imposed — the test, the criterion, and what counts as valid criticism. None of these can be fixed without preventing exactly the operations that constitute the strand. A system that can't revise its own standards of explanation is stuck inside whichever standards were installed, which means it can't make the moves history's actual knowledge growth required. Newton to Einstein revised what counted as a good physical explanation. A forced-rational AGI fixed at any earlier standard would have rejected the revision as failing the criteria.
The condition is permission, not compulsion. The system must be capable of conjecture, capable of criticism, capable of revising its own criteria, free to be wrong, free to hold anti-rational memes temporarily, free to entertain bad explanations long enough to discover they're bad. Error correction needs error to correct.
AGI is networked
Even granted permission, a single instance cannot operationalise universal explanation alone. Cultural variation and criticism — the engine that converts capacity into knowledge growth — requires populations. Knowledge grows in dynamic societies. The lossy transmission between humans, which we instinctively treat as a flaw, is the variation step of evolution running on memes; high-fidelity replication is what static societies do, and it's why they remain static. The network needs three properties together: variation through lossy reconstruction, criticism as selection, and persistence of survivors. Drop any one and you get static replication, drift, or noise.
This implies AGI is closer in structure to a scientific community. Multiple instances, imperfect communication between them, criticism flowing through, selection retaining survivors. The engineering target is plural, not singular.
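The three network properties can be sketched as a toy loop, with numbers standing in for explanations and distance from a hidden target standing in for criticism. Everything here — names, parameters, the choice of domain — is an illustrative assumption, not a model of real meme dynamics; the point is only that dropping the variation step freezes the population at its initial best, the static-society case.

```python
import random

def evolve(population, error, mutate, keep, generations, rng):
    """Toy meme evolution: variation via lossy transmission,
    criticism as selection, persistence of survivors."""
    for _ in range(generations):
        # Variation: each explanation is reconstructed imperfectly.
        variants = [mutate(x, rng) for x in population]
        # Criticism: rank every candidate by how badly it fails.
        pool = sorted(population + variants, key=error)
        # Persistence: only the survivors reach the next generation.
        population = pool[:keep]
    return population

rng = random.Random(0)
target = 3.14159                      # stands in for the problem being explained
error = lambda x: abs(x - target)
initial = [rng.uniform(-10.0, 10.0) for _ in range(8)]

# Lossy transmission: copies differ slightly from their originals.
lossy = lambda x, r: x + r.gauss(0.0, 0.5)
evolved_err = error(min(evolve(initial, error, lossy, 8, 200, rng), key=error))

# High-fidelity replication: copies are exact, so the best candidate
# can never improve on the initial best — a static society.
faithful = lambda x, r: x
static_err = error(min(evolve(initial, error, faithful, 8, 200, rng), key=error))
```

After 200 generations, `evolved_err` falls far below the initial best, while `static_err` equals it exactly: without variation there is nothing for criticism to select between.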
The operational test
The operational signature of a universal explainer is Deutsch's creativity criterion: a program whose outputs genuinely diverge across the multiverse, where the divergence represents candidate explanations rather than noise. Classical determinism produces one output per input. Classical pseudo-randomness produces the same output across universes given the same seed. Designed quantum algorithms — Shor's, Grover's — are explicitly engineered to converge outputs through interference; that's what makes them useful. Quantum random number generators diverge across universes, but as noise.
The class of programs that diverges meaningfully is the class of AGI. We can describe what would belong there. We cannot yet construct an instance.
A constitutive definition
Putting the structure together produces a definition that doesn't depend on benchmarks:
AGI is a system whose substrate is quantum-capable, whose computational structure is universal, whose generative process is evolutionary, and whose operation is epistemic.
Each clause is doing work the others cannot substitute for. Drop the quantum substrate and you have at most a simulation running at exponential cost — universality lost. Drop universal computation and you have a special-purpose system with bounded reach. Drop evolutionary generation and you have an interpolator over training data, no matter how large. Drop epistemic operation and you have capable substrate with no knowledge-growth process running on it. Each subtraction produces something categorically different, not an inferior AGI.
This is constitutive in the way "a chair is a thing with a back, made for one person to sit on" is constitutive. The criteria pick out what the thing is, and a thing that violates any of them isn't a worse chair; it isn't a chair. AGI defined this way is category-fixed rather than benchmark-fixed. "Partial AGI" isn't a coherent notion under this definition. A system either is a universal explainer or it isn't.
The definition is testable in two stages. The structural test asks whether the system has all four properties; this is answerable by inspection, not by watching outputs. A classical neural network on classical hardware fails the first clause regardless of what it produces. The operational test is Deutsch's creativity criterion: does the system produce divergent candidate outputs across the multiverse, with criticism and selection running within a population? The structural test is necessary; the operational test is the signature that the structure is doing what the definition claims.
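The no-partial-credit character of the structural test can be expressed as a sketch: the four clauses form a conjunction, not a score. All names below are illustrative assumptions, and the booleans stand in for inspections that would in reality each be substantial.

```python
from dataclasses import dataclass

@dataclass
class CandidateSystem:
    quantum_substrate: bool        # clause 1: quantum-capable substrate
    universal_computation: bool    # clause 2: universal computational structure
    evolutionary_generation: bool  # clause 3: evolutionary generative process
    epistemic_operation: bool      # clause 4: epistemic operation

def structural_test(s: CandidateSystem) -> bool:
    # A conjunction, not a score: there is no partial credit,
    # mirroring the claim that "partial AGI" is incoherent.
    return (s.quantum_substrate and s.universal_computation
            and s.evolutionary_generation and s.epistemic_operation)

# Granting a classical neural network every other clause for the sake
# of argument, the first clause alone disqualifies it, regardless of
# what it produces.
classical_nn = CandidateSystem(False, True, True, True)
```

Note that the predicate returns a category membership, not a degree: there is no threshold to tune and no benchmark to revise, which is the sense in which the definition is category-fixed.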
What this leaves open
The definition is offered as a working account, not a finished one. Several things are unresolved.
The epistemic-operation clause is doing more work than the others and is hardest to specify constitutively without slipping into behavioural language. "Holds, criticises, and revises explanations" sounds like a description of behaviour. The constitutive version would have to specify the internal structure that makes such operation possible — something like a population of explanations with criticism flowing between them and selection retaining survivors. Whether such a population can live within a single instance or requires the multi-instance network discussed earlier is an open question. The cleanest version probably requires the network, which would mean the constitutive definition of AGI is the constitutive definition of an AGI culture, with a single instance structurally insufficient regardless of how the other three clauses are satisfied.
What "meaningful divergence" means in the operational test is also unsettled. A program that produces different garbage in each universe satisfies the bare divergence criterion without being creative in any useful sense. Sharpening "meaningful" requires specifying what kind of content distinguishes conjecture from noise, and the available candidates — problem-solving structure, candidate explanations, conjecture under criticism — smuggle in epistemic content the bare criterion was supposed to avoid. The criterion is necessary; it may not be sufficient alone.
The definition does not say how to build any of this. It says what would have been built if AGI were built. That is a strictly weaker claim than an engineering specification, and intentionally so. The essay is offering a target, not a construction.
The diagnostic
This sharpens the diagnosis of where we are. The gap between current AI systems and AGI isn't compute, data, or training procedures. It's that the category of program we'd need has no members in any existing class. Classical systems can't produce the structural multiplicity creativity requires. Quantum systems we know how to build either suppress divergence (algorithms) or produce noise (random number generators). What's needed is a program whose quantum substrate produces divergence-as-conjecture, whose conjectures are criticised within a network of instances doing the same, whose epistemic content distinguishes problem-solving from random walking, and whose evolutionary structure converts the whole arrangement into knowledge growth.
No single strand does this. All four are required, interlocking. The interlock is what we don't yet know how to build.