What's Your p(doom)?

How worried should we be about AI?

In rationalist and AI safety circles, p(doom) is shorthand for a deceptively simple question: what is your personal probability estimate that advanced AI leads to human extinction or permanent civilizational collapse? Eliezer Yudkowsky, one of the most prominent voices in AI alignment, has put his somewhere above 90%. Dario Amodei, CEO of Anthropic, has put the chance of things going catastrophically wrong at somewhere between 10% and 25%. Marc Andreessen published his Techno-Optimist Manifesto in 2023, arguing that accelerating technological development, AI included, is a moral imperative. The discourse has fractured far beyond a single number — and the fractures reveal genuinely different beliefs about intelligence, governance, human nature, and what kind of future is worth building.

This test measures your orientation toward AI existential risk across six dimensions. It is not asking you to pick a number. It is mapping which intellectual traditions and risk intuitions actually shape how you think about the future of artificial intelligence. Answer the 36 statements based on what you actually believe — not what you think is the most defensible position at a dinner party.

A sample item: "Intelligence does not imply benevolence; a superintelligent system will likely pursue goals that are indifferent or hostile to human survival." Each of the 36 statements is rated on a scale from Strongly Disagree to Strongly Agree.

The concept of p(doom) emerged from the rationalist community centered around LessWrong and the Machine Intelligence Research Institute (MIRI), founded by Yudkowsky in 2000. The core argument is straightforward: if we build systems significantly more intelligent than humans, and if those systems are not precisely aligned with human values, the default outcome is catastrophic. Yudkowsky's case rests on two philosophical claims — the orthogonality thesis (intelligence and goals are independent, so a superintelligent system can have any objective) and instrumental convergence (almost any goal leads a sufficiently intelligent agent to seek self-preservation and resource acquisition). Nick Bostrom formalized much of this reasoning in Superintelligence [1], which laid out the "control problem" — the challenge of ensuring that a system smarter than its creators remains within the boundaries its creators intended. This intellectual tradition informs the Alignment Pessimism dimension of this test: the conviction that the alignment problem is not merely unsolved but may be unsolvable in the time we have.

The opposing pole is Accelerationist Optimism, rooted in a different reading of technological history. Andreessen's 2023 Techno-Optimist Manifesto articulated the position that intelligence amplification is the single most powerful tool for solving human problems — poverty, disease, scarcity, environmental collapse — and that slowing AI development is not caution but negligence. The effective accelerationist (e/acc) movement extends this argument: any attempt to restrain AI progress costs lives that faster progress would have saved. This is not naivety about risk; it is a conviction that the expected value of rapid development is overwhelmingly positive, and that the people most worried about AI tend to be the least informed about how it actually works.

Between these poles sit four positions that resist simple placement on a doomer-to-optimist axis. Governance Institutionalism treats AI risk as real but manageable through international coordination, analogous to nuclear nonproliferation or climate agreements. The 2024 International AI Safety Report, chaired by Yoshua Bengio and written by an international panel of researchers, represents this position: the risks are significant, the technical challenges are genuine, but the solution is governance — binding agreements, compute monitoring, safety standards — not despair. The EU AI Act, the UK AI Safety Institute, and proposals for international AI governance bodies all embody this orientation.

Compute Overhang Anxiety captures a more specific fear: that current hardware already supports dramatically more capable AI systems than exist today, and that a sudden algorithmic breakthrough could trigger a capability jump that outpaces all safety efforts. Leopold Aschenbrenner's 2024 essay Situational Awareness gave this anxiety its sharpest articulation, arguing that we are already inside the critical window — that the gap between current AI and transformative AI is measured in years, not decades, and that most institutions are not treating this with the urgency it demands. This dimension is distinct from Alignment Pessimism: you can believe alignment is solvable in principle while still worrying that a fast takeoff will not give us time to solve it.

Pragmatic Incrementalism rejects the grand theorizing of both doomers and accelerationists in favor of concrete engineering work. This orientation is informed by the actual practice of AI safety teams — at Anthropic, DeepMind, OpenAI, and elsewhere — who are building interpretability tools, developing red-teaming methodologies, and testing alignment techniques on current systems. The incrementalist position is that abstract arguments about superintelligence are less useful than empirical safety research on the systems we have now, and that iterative progress on techniques like RLHF, constitutional AI, and mechanistic interpretability will generalize to more capable systems. Dario Amodei's 2024 essay "Machines of Loving Grace" represents a version of this view: AI development is genuinely dangerous, but the path to safety runs through careful, ambitious building — not through pausing or panicking.

Existential Indifference is not ignorance about AI. It is the informed position that the existential risk framing is dramatically overblown — a secular eschatology dressed in technical language. Toby Ord estimated existential risk from "unaligned artificial intelligence" at one in ten over the next century in The Precipice [2], but critics argue that these estimates are built on a chain of speculative assumptions — about intelligence, agency, goal-directedness, and takeoff speed — each of which may be wrong. The existentially indifferent position holds that current AI is sophisticated pattern-matching, not nascent agency; that superintelligence timelines are wildly optimistic; and that the real AI harms worth worrying about are present-day: algorithmic bias, labor displacement, surveillance, and concentration of power. This is a substantive intellectual position, not a failure to take the question seriously.

These six dimensions interact in ways that a single p(doom) number cannot capture. A person can score high on both Governance Institutionalism and Compute Overhang Anxiety — believing that institutions can manage AI risk while simultaneously worrying that a fast takeoff will outrun them. Someone can score high on Pragmatic Incrementalism and low on Alignment Pessimism — believing that safety is achievable precisely because the engineering approach is working. The full profile reveals which intellectual traditions actually shape your thinking, which risks feel most visceral, and where your reasoning about AI's future comes from.

This test uses 36 Likert-scale items, six per dimension, with responses transformed into factor scores using empirical loadings and then converted to population-normed percentiles. Your dominant orientation is determined by your highest-scoring dimension, but the full profile — your secondary and tertiary orientations, the tensions between them, the dimensions where you score lowest — tells a richer story than any single label. Most people hold a complex and sometimes contradictory set of beliefs about AI risk. This test is designed to map that complexity rather than flatten it.
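
To make that pipeline concrete, here is a minimal sketch of how such scoring could work. The dimension names come from the test, but the per-item factor loadings, population norms, and example answers below are hypothetical placeholders, not the test's published parameters: each block of six Likert responses is centered, weighted by its loadings, summed into a raw factor score, and converted to a percentile against an assumed population distribution.

```python
# Minimal sketch of a Likert-to-percentile scoring pipeline.
# LOADINGS and NORMS are hypothetical placeholders, not the test's real parameters.
from math import erf, sqrt

DIMENSIONS = [
    "Alignment Pessimism",
    "Accelerationist Optimism",
    "Governance Institutionalism",
    "Compute Overhang Anxiety",
    "Pragmatic Incrementalism",
    "Existential Indifference",
]

# Hypothetical per-item loadings: six items per dimension, each weighting how
# strongly that item contributes to its dimension's raw factor score.
LOADINGS = {dim: [0.8, 0.7, 0.75, 0.65, 0.7, 0.6] for dim in DIMENSIONS}

# Hypothetical population norms (mean, standard deviation) for each raw factor score.
NORMS = {dim: (0.0, 5.0) for dim in DIMENSIONS}


def factor_score(responses, loadings):
    """Center 1-7 Likert responses at the midpoint, weight by loadings, and sum."""
    centered = [r - 4 for r in responses]
    return sum(w * x for w, x in zip(loadings, centered))


def percentile(score, mean, sd):
    """Convert a raw factor score to a population percentile via the normal CDF."""
    z = (score - mean) / sd
    return 50 * (1 + erf(z / sqrt(2)))


def profile(responses_by_dimension):
    """Return percentile scores per dimension plus the dominant orientation."""
    scores = {
        dim: percentile(
            factor_score(responses_by_dimension[dim], LOADINGS[dim]), *NORMS[dim]
        )
        for dim in DIMENSIONS
    }
    dominant = max(scores, key=scores.get)
    return scores, dominant


# Example usage with made-up answers (1 = Strongly Disagree ... 7 = Strongly Agree).
if __name__ == "__main__":
    answers = {dim: [4, 5, 3, 6, 4, 5] for dim in DIMENSIONS}
    answers["Compute Overhang Anxiety"] = [6, 7, 6, 5, 7, 6]
    scores, dominant = profile(answers)
    for dim, pct in scores.items():
        print(f"{dim}: {pct:.0f}th percentile")
    print("Dominant orientation:", dominant)
```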

Footnotes

  1. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. ISBN: 978-0199678112

  2. Ord, T. (2020). The Precipice: Existential Risk and the Future of Humanity. Hachette Books. ISBN: 978-0316484923

Why Use This Test?

  • This psychometrically normed test measures your orientation toward AI existential risk across six dimensions. Your percentile scores reveal which intellectual traditions shape your thinking — from alignment pessimism to accelerationist optimism — and how your position compares to the broader population.