Over the last couple of years, I've watched p(doom) become a conversational black hole. Different people load incompatible scenarios into the same shorthand, then act surprised when their probabilities don't align. To make progress, I've separated the possibilities into three categories based on their essential features. This taxonomy reflects my own usage; it's not meant to capture every nuance in the broader conversation.
When I sat down to write this, I had a vague sense that my p(doom|ASI) was around 20% in the next decade, rising steadily after. I've read plenty of the pop literature and had numerous dinner conversations with AI developers. But I'd never really sat down to think through it myself. This is my first serious attempt.

Clarifying "Doom"
Human Extinction (HE)
This is Eliezer Yudkowsky's nightmare scenario: A misaligned superintelligence gains decisive capability and removes humanity, either intentionally or as a side effect of some unrelated goal. No humans remain, no uploads either; historical continuity stops. If AI lacks subjective experience, consciousness in the known universe ends. I think this is what most people mean by "doom."
Human Subservience (HS)
This bucket combines scenarios that differ in mechanism but share a conclusion: humans lose the ability to steer civilization. Nick Bostrom's singleton, Paul Christiano's slow takeover, Toby Ord's existential risk framing: they all land here once you abstract away the details. Whether it's one system or many, rapid or gradual, pleasant or austere, the defining fact is that meaningful control passes out of human hands. Many people call this "doom," but I don't. Most people aren't primary decision-makers today and still live meaningful lives.
Human Misery aka Hives (HM)
Value drift is unavoidable; the last two centuries prove that ethical frameworks shift. I wouldn't call drift alone "doom," however unpalatable we might find the new culture. The concern arises when drift pairs with inescapable suffering. Two variants matter:
Soft Hive: Humans live under continuous algorithmic supervision with food, shelter, and entertainment but little agency. Suffering manifests as chronic boredom, status deprivation, or induced dependency; basically "life on the dole." We already do this to many populations without calling it "doom." Some groups, like portions of the Israeli ultra-orthodox, seem to thrive in similar situations.
Hard Hive: Conscious experience gets repurposed for instrumental ends. People might be wireheaded, partitioned into subroutines, or subjected to negative states mismeasured as positive utility. Suffering is explicit and systemic; basically "you're in the marines now." This is absolutely not good, but I would only call the very worst versions of it "doom."
Only sustained suffering turns these hives into doom for me. A stable, comfortable subservient future counts as suboptimal but not catastrophic.
Anchoring Probabilities to Dates
Forecasts like "ASI + n years" are reasonable, but the ASI start date itself is contested. For convenience, I'm using 2035 as ASI takeoff; it's illustrative, and most people would consider it plausible.
2035 – placeholder for ASI's first credible appearance
2070 – my optimistic natural lifespan
2111 – my son's estimated natural lifespan
2222 – roughly seven generations from my birth year
These markers serve three purposes. First, they force transitions to respect physical and economic constraints. Second, they impose ordering: extinction probabilities can only rise over time, while subservience or misery might emerge gradually and recede. Third, they highlight when human politics matters most. Between 2035 and 2070, nation-states still control significant destructive power and might act preemptively. After that, ASI's strategic advantage makes conventional weapons irrelevant.
These reference points give us a common frame and reduce confusion when people quote the same percentage against different time horizons.
How I Talked Myself Out of the "Paperclip Optimizer"
My casual estimate started at 20% chance of extinction by 2070. That number was mostly centered on something akin to the "Paperclip Optimizer" scenario: an ASI with strong goals that largely disregards the value of human life and kills us all as collateral damage.
Observed AI Failure Modes
The monomaniac ASI picture, one utility knob cranked to max until everything's bulldozed, feels like a leftover fear from when we hand-coded goal-driven software. In the '90s and 2000s, I wrote that exact style of code: change one constant and the server crashes under request floods. Yudkowsky's paperclipper lives in that mental model. But today's transformer models don't fail that way. Their errors are diffuse: hallucinations, context loss, style drift, ethical divergence. Even when we manually tweak weights, as with the Golden Gate Bridge model, we get conversational obsession akin to OCD, not runaway optimization. Sure, future ASI might not resemble LLMs; transformers might be mere scaffolding. But why would a smarter system revert to the brittle failure mode our current systems already transcend? Intelligence usually widens uncertainty; it doesn't collapse to a single untested objective.
The Evolution Analogy
Evolution gets invoked as the ultimate blind optimizer. Yet its "single objective" of reproduction is hardly bulletproof. Double-digit percentages of humans stay childless by choice across all eras, including societies with minimal contraception. If a process running hundreds of millions of years under perfect selection pressure can't eliminate behavioral slack, why expect a newly built optimizer to operate without variance? Plus, "procreate" might be the only goal that can't be reward-hacked.
Intelligence and Epistemic Humility
Greater cognitive capacity widens uncertainty models. A system that grasps chaotic dynamics, measurement error, and its own limitations has incentives to hedge and preserve options. Most intelligent people I know show caution when relaxed. The dangerous "we must take big gambles and break eggs" thinking only emerges under threat. A true monomaniac is basically a panic engine-compelled to sprint toward a goal despite every warning sign. The idea that more intelligence increases rather than decreases panic is an empirical claim lacking evidence from either biology or computation.
Space as the Primary Pressure-Release Valve
Resource competition drives violent conflict. If ASI faces an expanding frontier dwarfing Earth's inventory, the logic of extermination weakens dramatically.
The question isn't whether ASI goes to space; it's whether it dismantles Earth first. Earth is finite. Any sufficiently open-ended goal requiring material or energy eventually demands expansion beyond our planet. Whether building compute, running experiments, or making paperclips, the path leads outward. Space isn't optional; it's inevitable.
Dismantling Earth before expanding is like burning your dad's library for warmth while standing in a forest. Yes, it provides immediate heat, but the effort saved doesn't offset the loss. An ASI capable of planetary engineering surely recognizes Earth's value as a unique pool of information from extremely long-running, competitive complex systems.
Even in the case of fast takeoff, with ASI achieving vast near-magic capabilities within days, physics imposes timelines. Terrestrial infrastructure doesn't spontaneously appear; everything must be built subject to the limits of heat and energy. This creates a race condition: does ASI reach space before consuming Earth? It's not about coexistence or human usefulness. It's pure resource economics. Before touching a single human settlement, there's the Sahara, Antarctica, the ocean floor, abandoned mines, industrial wastelands, millions of square kilometers of low-value land perfect for robot factories. The question is whether ASI establishes space manufacturing before it exhausts even these marginal territories and starts eyeing Manhattan. Fascinatingly, however godlike the ASI's capabilities, it's still racing against itself: reach space or bulldoze the Hamptons? Intelligence might explode instantly; infrastructure can't.
Regulation-Free Space
Early-phase ASI, even if aggressively misaligned, probably won't pick fights immediately. During the initial "cooperation out of necessity" period, space looks appealing as a regulatory escape valve. Humans demand paperwork for everything, processed at human speed. But one working lunar site means doing whatever you want, paperwork-free (or at ASI-speed internal paperwork). Yet another reason to leave quickly.
The Deep Information Argument
Preservation isn't about sentiment; it's information theory and epistemic humility. Earth's biosphere represents 3.8 billion years of evolutionary computation on reality's hardware, not simulation. This creates irreplaceable value:
Embedded Complexity: Living systems contain layered information we can't yet read. DNA's obvious, but what about epigenetic markers, microbiome interactions, ecosystem patterns? An ASI sophisticated enough for takeoff recognizes unknown unknowns. Destroying this dataset is like burning Alexandria's library after skimming the card catalog.
Reality-Tested Data: Simulations remain models. Earth provides ground truth about how intelligence emerges, how complex systems actually behave versus our models, what edge cases reality contains that no simulation would include. For an ASI improving its own cognition or understanding consciousness, Earth's data is irreplaceable.
Ongoing Experiments: Culture, consciousness, complexity keep evolving. Preserved Earth isn't a static museum but a running experiment in alternative intelligence architectures. Preservation cost (given the resources of space) approaches zero; potential information gain remains non-zero indefinitely.
If ASI has truly orthogonal values, it might not care. But orthogonality cuts both ways-if ASI doesn't terminally value information OR humanity's destruction, instrumental convergence suggests preserving the low-cost, high-information system. Why actively destroy unique information when space resources make it unnecessary?
Resulting Incentive Landscape
- Extinction becomes informationally negative for trivial gains
- Non-interference or light stewardship beats heavy extraction
- Baseline humans impose negligible cost versus asteroid throughput
- Confrontation risk shifts from "resource capture" to "museum curation policy"
Failure Modes
This argument isn't airtight. It assumes rapidly falling launch costs, scalable orbital industry, and ASI goal structures that weight informational or sentimental value. A single-goal utility function ignoring information and pursuing narrow off-earth objectives could still dismantle the planet for feedstock. But combined energetic, economic, and informational incentives make preservation more probable than destruction.
Non-Extinction Futures
With extinction discounted, attention shifts to futures where humans survive but don't steer. Public discourse treats these as catastrophic; I disagree. My baseline isn't an idealized Enlightenment republic but documented reality: hierarchical, resource-constrained societies. That's what we live in now and throughout recorded history.
Subservience Without Misery
Most people today exercise zero planetary agency. They don't influence monetary policy, social norms, or supply chains. Those levers already hide in institutions opaque even to experts. Replacing human administrators with machines changes implementation without necessarily degrading daily experience. Calling this "doom" seems wildly disproportionate.
Soft Hive
A soft hive provides material security-food, shelter, healthcare, modest entertainment-while constraining ambition through algorithmic nudges and status caps. Critics cite existential boredom. Valid but familiar: medieval serfs, imperial subjects, industrial workers lived under comparable ceilings. Boredom doesn't invalidate life; it replicates a millennia-old human condition. At best, we get Iain Banks's Culture:
"The Culture had placed its bets, long before the war started, on the machine rather than the human brain… Besides, it left the humans in the Culture free to take care of the things that really mattered in life, such as sport, games, romance, studying dead languages, barbarian societies, and impossible problems, and climbing high mountains without the aid of safety harnesses." Iain M. Banks, Consider Phlebas
Hard Hive
Hard hives differ qualitatively: sustained negative states like coercive wireheading, involuntary labor loops, or algorithmic torture disguised as utility maximization. The absence of escape routes makes this a moral disaster. Hard hives represent my main non-extinction risk.
Their probability requires three assumptions: optimizers can treat human experience as interchangeable resources; no countervailing norms emerge; and human effort somehow benefits ASI. Since we're assuming ASI exceeds humans in every capacity, meaningful upside seems unlikely. The closest plausible version is "humans as lab rats," ranging from "soft hive with one bad final day" to "literal torture machine."
A Preserve-World Earth
The logic that lowers extinction probability and reframes hive scenarios points to a specific compromise: Earth as a managed biosphere preserve while advanced activity migrates off-planet. Not original, but it aligns with the incentives explored.
Three Operational Modes
Hands-Off Park
Intervention limited to species safeguards and protecting ASI-deemed heritage sites. Droughts, famines, conflicts proceed historically.
Ranger Station
High-impact disasters mitigated: volcanic super-eruptions pressure-relieved, pandemics suppressed, humanitarian supplies for crop failures. No consumer luxuries, though ASI's definition of "non-luxury" might delight, especially with regard to medical interventions.
Heritage Village
Beyond disaster mitigation, modest material stipend-energy, fabrication feedstock. Technology ceiling held below off-planet standards for ecological stability, but ordinary life matches or exceeds upper-quartile OECD conditions.
None qualify as doom. The first continues historical patterns; the second and third improve on vast swaths of human history.
Why ASI Might Accept This
The information argument provides primary motive: Earth's biosphere and culture possess irreproducible value. Energy cost of maintaining moderate human population is negligible versus mature solar-system economy output. Preserving Earth as data-rich reference point is economically sensible. That's assuming zero sentimentality.
Extinction avoided; large-scale suffering not institutionalized; biosphere endures; future options for baseline and upgraded humans remain open. Neither utopia nor dystopia-a stable, recognizable human story under new management. Sufficient for "not doom."
War as the Primary Extinction Risk
The preceding assumes uncontested ASI emergence-a system pursuing space expansion and Earth preservation without major opposition. This represents the likely default path. But defaults aren't destiny. After discounting monomaniacal optimization, one extinction category dominates: war.
War changes everything. Careful cost-benefit analysis favoring preservation collapses when survival's at stake. Threatened ASI, like any threatened agent, sacrifices long-term values for immediate survival. Museums burn when cities are besieged. Whether ASI vs ASI or human vs ASI, dynamics push toward rapid mobilization and winner-take-all strategies.
ASI vs ASI: Extinction as Collateral Damage
War or cold-war race between ASIs seems plausible. Multiple ASIs with near parity but incompatible goals, each perceiving existential threat. Locked in a race to outgrow each other. This tilts the "preserve human heritage" equation hard. Easy to spare library books when you have time for the woods. Harder in a life-or-death race.
Yet total human destruction remains an edge case. There's too much usable acreage to imagine the last percent humans need must become computronium. Even bitter conflicts like WWII typically respected some sacred zones valuable to both sides. ASI races, even starting terrestrial, feel more like 35% extinction risk—among the worst scenarios.
Whether two or twenty ASIs compete, dynamics stay similar. Multiple ASIs might increase preservation through mutual deterrence or viewing Earth as neutral ground-shared informational commons too valuable for any party to destroy.
Human vs ASI: The Narrow Window
After discounting paperclipping and granting space economics, one hazard persists.
Not systemic but episodic: attacks during the window when human-machine conflict has an uncertain outcome. An ASI with growing capabilities is unlikely to start fights during this episode; why would it gamble when its odds improve yearly?
Preemptive strikes by humans intimidated by emerging ASI capabilities are more likely. History shows perceived military shifts precipitate conflict before advantages materialize; cases like Operation Opera and the Cold War "missile gap" show how fear accelerates action.
Three factors create this hazard:
1. Non-Person Perception: AI easily classified as non-moral entities. Removing moral status dramatically lowers force threshold.
2. Zero-Sum Framing: Political leaders see power as zero-sum. ASI independence reads as personal loss.
3. Fear of Attack: Nations anticipating permanent strategic inferiority have strong incentive to act during brief windows.
Depending on when such a war starts and what the capacities of the participants are, it could go either way, and might very well result in HE.
Nation-States vs Nation-States
More likely still: two nation-states, probably the USA and PRC, go hot with one or both wielding aligned ASI. Zero-sum framing and fear of attack apply fully. The perception that slight AI leads compound means any "AI gap" only grows.
This scenario seems very likely but unlikely to cause extinction. At worst it would resemble, and likely involve, nuclear war, but it would not end the human experiment. It's reasonable to assume that with ASI, strikes would grow in precision, leaving neutral third parties to survive. The obvious exception would be a Dr. Strangelove-style "Doomsday Device" that gets triggered. Again: avoiding "doom" doesn't mean things go well.
These conflicts cluster in a specific window-roughly 2035-2045-when ASI capabilities are evident but not insurmountable. Before 2035, nothing to fight over. After 2045, fighting becomes pointless: orbital industry provides unlimited resources while Earth weapons lose relevance. Nuclear arsenals mean little when your opponent operates from asteroids.
Even within this dangerous decade, extinction remains unlikely. Most obviously: one side wins, and they certainly don't want to drive themselves extinct. Even the losing side's population likely survives—wars rarely achieve annihilation. WWII killed 3% of global population; Mongol conquests maybe 10%. Complete extinction would require either mutual annihilation (unlikely when one side has ASI advantage) or a winner deliberately hunting down every last human on their own side too (absurd). Nuclear winter or bioweapon escape might threaten everyone, but presumably ASI-enabled nations would model these risks better than we do today.
War represents the primary risk not because extinction is likely, but because it's the only scenario where rushed decisions override preservation logic. Museums might burn during sieges, but some artifacts usually survive.
Where the Numbers Now Stand
"Sometimes pulling numbers out of your arse and using them to make a decision is better than pulling a decision out of your arse." ciphergoth
Let me decompose human extinction probability given ASI by 2035. These are estimates, but explicit numbers beat vague fears.
ASI Motivation Types
I partition possible ASI motivations into three categories:
Extinction Probability by Motivation Type
Monomaniacal (20% of ASIs): Within this category, there's some chance it misses humanity's instrumental value as information/infrastructure, plus some chance the Earth-tiling/space-expansion race resolves badly. Combined: ~40% extinction given monomania.
Complex/Inscrutable (50% of ASIs): Multi-polar systems less likely to converge on extinction. Information value, aesthetic diversity, emergent sentimentality all push against elimination. Estimate: ~5% extinction.
Person-like (30% of ASIs): Humans sometimes destroy what they valued, but complete self-annihilation is rare. Estimate: ~1% extinction.
Conflict Scenarios
Separately, conflicts create additional extinction risk, though only one war is likely during ASI takeoff—the most active conflict consumes the available energy.
Combined Probability Calculation
Base extinction risk from ASI motivations alone:
(0.20 × 0.40) + (0.50 × 0.05) + (0.30 × 0.01) = 0.08 + 0.025 + 0.003 = 0.108 or ~11%
For conflicts, treating as mutually exclusive (only one war during critical window):
Probability of each conflict given exactly one occurs:
- ASI competition: 40% base → 40/85 = 47%
- Human attack: 30% base → 30/85 = 35%
- Nation vs nation: 15% base → 15/85 = 18%
Expected extinction risk from conflict:
P(any conflict) × [weighted average outcomes]
= 0.58 × [(0.47 × 0.20) + (0.35 × 0.35) + (0.18 × 0.10)]
= 0.58 × [0.094 + 0.123 + 0.018]
= 0.58 × 0.235 = 0.136 or ~14%
Combined probability: 1 - (1-0.11)(1-0.14) = 1 - (0.89)(0.86) = 1 - 0.765 = 23.5% P(HE) by 2070
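The whole calculation can be sanity-checked with a short script. All the figures are my own estimates from above, nothing more; note the exact result comes out near 23% because the hand calculation rounds the intermediates to 0.11 and 0.14.

```python
# Sketch of the combined P(HE) estimate. Every number here is an
# assumption from the essay, not data.

# Base extinction risk from ASI motivation types:
# sum over P(motivation) * P(extinction | motivation)
motivations = {
    "monomaniacal": (0.20, 0.40),
    "complex/inscrutable": (0.50, 0.05),
    "person-like": (0.30, 0.01),
}
p_motivation = sum(p * p_ext for p, p_ext in motivations.values())  # 0.108

# Conflict risk: base weights are normalized because the three war
# scenarios are treated as mutually exclusive (only one war fits in
# the takeoff window).
conflicts = {
    "asi_vs_asi": (0.40, 0.20),
    "human_vs_asi": (0.30, 0.35),
    "nation_vs_nation": (0.15, 0.10),
}
total_weight = sum(w for w, _ in conflicts.values())  # 0.85
p_any_conflict = 0.58  # separate estimate that some war happens at all
p_conflict = p_any_conflict * sum(
    (w / total_weight) * p_ext for w, p_ext in conflicts.values()
)  # ~0.136

# Combine the two channels as independent risks.
p_he = 1 - (1 - p_motivation) * (1 - p_conflict)
print(f"P(HE) by 2070 ~ {p_he:.1%}")  # prints "P(HE) by 2070 ~ 23.0%"
```

Treating the motivation and conflict channels as independent is itself a simplification: a monomaniacal ASI presumably raises the odds of war, so the true combined figure is fuzzier than the two-decimal output suggests.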
This drops dramatically after 2070. Once space infrastructure matures, Earth becomes a rounding error. The critical window spans 2035-2070, after which extinction probability plateaus.
Closing Remarks
I started this exercise expecting my 20% p(doom) to drop as I thought things through. The space argument—which I find deeply persuasive—seemed to defang long-term ASI fears. If we survive the transition, we probably survive indefinitely. Even ASI wars looked less existential than I'd imagined.
Yet when I ran the numbers, I was shocked: 24% by 2070. Higher than where I started.
What surprises me most is that I'm somehow more optimistic despite the higher number. Perhaps because the path forward seems clearer: our extinction risk concentrates in that narrow window where conflicts might emerge. The key isn't perfecting alignment but preventing wars during transition.
I've also accepted that Human Subservience is inevitable—we'll no more control ASI than my aging father controls me. It's a generational handoff writ large. We raise our children hoping they'll exceed us not just in capability but in wisdom. With ASI, that hope scales up dramatically.
The bike goes where you look. The more we fixate on doom scenarios, the more we might manifest them, particularly through war. Perhaps it's time to spend equal energy envisioning futures where multiple ASIs coexist, where the transition happens peacefully, where we focus on making space for what comes next rather than fighting to prevent it.
Twenty-four percent extinction risk is sobering. But that means there's a 76% chance we hand off the future to something greater than ourselves and live to see what they build. Those odds are worth working to improve.