AI and the Static Society: What AI Wealth Should Defend
Status: surviving, 08 May 2026
Problem: How would you convert billions of dollars into actual impact on the future of AI?
Money buys attempts, not impact. Impact requires new knowledge (information that has causal power), and no reliable method exists for producing it (Popper).
An organization that pretends otherwise will be chasing outcomes it has no method to engineer. Any successes will come by accident, in spite of itself, alongside larger quantities of confident error. The discipline the organization needs is structural: its grants, its decision rights, its theory of impact, and its succession must all reflect the fact that nobody knows in advance which interventions will produce knowledge. The design must be built around that ignorance, not around a confident plan to overcome it.
Why defensive
Knowledge creation cannot be engineered, but knowledge destruction and prevention have identifiable causes: censorship, monopoly, centralized decision-making, suppression of criticism, lock-in of bad ideas, and the destruction of the people and traditions that carry tacit knowledge (Lysenko). The asymmetry is structural: destruction is a bounded event with legible mechanisms, while creation is an open-ended process whose causes cannot be isolated. We have a richer comparative record on what kills inquiry than on what produces it, because failures are more numerous and better documented.
David Deutsch calls societies that institutionalize error-correction "dynamic." Their opposite, the "static society," is structured to suppress the criticism that would generate change. Static societies are the default; dynamic ones are rare and fragile. The asymmetry matters for spending: producing new knowledge at will requires methods we don't have, but defending the conditions under which knowledge is created requires methods we partly have. If you take this seriously, the central risk of AI isn't technical misalignment; it's that AI makes centralization and suppression cheap and effective at a scale previously impossible (Acemoglu & Johnson; Freedom House). An organization that takes this asymmetry seriously will spend on defense, not on outcome-bets it has no way to underwrite.
The threat: four mechanisms
If the central risk of AI is that it accelerates the formation of a static society, then the threat decomposes into four mechanisms.
Reality corruption. Criticism requires contact with reality. AI weakens that contact by making evidence cheap to fake, histories easy to rewrite, records easy to flood, and shared facts harder to stabilize. The result is not that everyone believes lies, but that truth loses public force.
Criticism capture. Criticism requires channels through which errors can be exposed, understood, and acted on. AI can capture those channels by filtering, ranking, redirecting, personalizing, saturating, or invisibly suppressing criticism before it becomes socially effective.
Agency pacification. Criticism requires people who still feel the need to understand, judge, object, and act. AI can pacify that need with synthetic explanation, intimacy, achievement, outrage, and agency: emotional substitutes for real problem-solving.
Responsibility dissolution. Criticism requires someone to be answerable. AI can dissolve responsibility into models, dashboards, risk scores, committees, and "the system," so that institutional errors persist without any criticizable decision-maker.
AI becomes dangerous to an open society when it helps institutions survive criticism without learning from it, and helps individuals feel informed and satisfied without actually increasing their power to correct errors.
The test
The test for any intervention an organization might fund: does it engineer outcomes, or does it remove constraints on error-correction?
Outcome-engineering picks the answer in advance. It says: this is the AI safety theory worth backing; this is the right alignment technique; this is the institution that will steer the field correctly. Outcome-engineering crowds out alternatives the organization cannot evaluate and locks in theories it has no method to verify.
Constraint-removal does the opposite. It identifies the conditions under which error-correction breaks down (evidence becomes fakeable, dissent gets suppressed, agency is substituted for, responsibility is diffused) and funds infrastructure that restores the conditions for error-correction without specifying which corrections should win. It is the only form of intervention consistent with fallibilism at this scale, because it concedes what the organization cannot honestly deny: it does not know which answers are right.
Two interventions that pass the test
The Public Evidence Ledger addresses reality corruption. The organization funds open, redundant infrastructure for authenticating public records at the moment of creation: capture and signing tools, secure timestamping, cryptographic provenance standards (Coalition for Content Provenance and Authenticity), storage for public-interest evidence, and independent audits of the system itself. The question it answers is whether a given artifact existed in a given form, at a given time, from a given source. It does not adjudicate meaning. The hard constraint is that it must not become a Ministry of Truth — it authenticates provenance, not interpretation. It passes the test because it removes the constraint of fabricability without specifying which interpretations of authenticated evidence should prevail.
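To make "provenance, not interpretation" concrete, here is a minimal sketch of a signed provenance record in Python, assuming the `cryptography` package. The record layout and the names `sign_record` and `verify_record` are illustrative, not drawn from the C2PA specification, and a production ledger would use an independent timestamping authority rather than the signer's local clock.

```python
# Minimal sketch: bind (content hash, source, time) into one signed claim.
# Assumes `pip install cryptography`; layout and names are illustrative.
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def sign_record(artifact: bytes, source_id: str,
                key: ed25519.Ed25519PrivateKey) -> dict:
    """Attest that this exact content existed, from this source, at this time."""
    claim = {
        "sha256": hashlib.sha256(artifact).hexdigest(),  # what, in what form
        "source": source_id,                             # from whom
        "created_at": int(time.time()),                  # when (local clock;
        # a real ledger would substitute a signed external timestamp here)
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "sig": key.sign(payload).hex()}

def verify_record(record: dict, pub: ed25519.Ed25519PublicKey) -> bool:
    """Check the binding. Deliberately silent on what the artifact means."""
    payload = json.dumps(record["claim"], sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(record["sig"]), payload)
        return True
    except InvalidSignature:
        return False

# Usage: a capture device signs at creation; anyone can verify later.
key = ed25519.Ed25519PrivateKey.generate()
record = sign_record(b"raw camera frame bytes", "camera-7f3a", key)
assert verify_record(record, key.public_key())
```

Note what the verifier can and cannot conclude: it confirms existence, form, time, and source, and says nothing about meaning. That silence is the constraint that keeps the ledger from becoming a Ministry of Truth.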
The Whistleblower and Dissent Defense Fund addresses criticism capture. The organization funds legal counsel, emergency income replacement, secure disclosure channels, expert review, and public-interest litigation for people inside AI labs, agencies, contractors, and oversight bodies who expose serious AI-related errors (AI Whistleblower Initiative). Claims are investigated adversarially; the fund does not sanctify dissent. It passes the test because it removes the constraint of retaliation (Reuters on OpenAI whistleblowers) without specifying which dissents are correct.
Begin both immediately. Defensive infrastructure is cheaper to build before a static society forms than to restore after.
The frontier
Agency pacification and responsibility dissolution remain open problems. I have no constraint-removal intervention for either that I would currently fund. Any program designed to make people care, or to assign responsibility to a specific actor, slides back into outcome-engineering.
The organization itself
An organization defending open society against AI-enabled centralization must itself be structurally anti-sovereign. It must be divided at inception into separately governed funds that can criticize each other's grants. It must commit to a finite spend-down horizon so that current trustees cannot bind future ones, and publish its grant criteria, its rejected proposals, and its own ongoing self-criticism. The defense the organization offers society is the same defense it imposes on itself: structural inability to exercise discretion it does not have the knowledge to exercise.