A Field Guide to the AI Safety Camps
Kelsey Piper's essay is the clearest single map of why people who study AI risk disagree so sharply. She lays out three camps. The first, associated with Eliezer Yudkowsky and MIRI, expects a hard, fast takeoff and treats alignment as a problem you likely get only one try at; on this view, the honest conclusion is to stop building. The second, associated with researchers in the Open Philanthropy orbit such as Paul Christiano and Ajeya Cotra, expects gradual capability gains and locates the real danger in incrementally deploying systems we cannot oversee, with techniques like RLHF teaching models to say what we want to hear rather than to do what we want; the response is to build incremental detection and oversight tools. The third, associated with Yann LeCun and others, holds that alignment is not especially hard and will largely fall out of the ordinary work of building useful commercial systems. Piper's key observation is that these disagreements are partly empirical and partly sociological, rather than purely technical, which is why the same evidence does not move everyone in the same direction.
Why it matters
If you follow AI safety debates and find them confusing, this is the piece that makes their structure legible. It lets you place a new argument in its camp and see which empirical question it actually turns on, rather than treating every disagreement as if it were brand new.