The Doomsday Argument Is Even Worse Than Thought
I have a weird idea: the Doomsday Argument is far worse than it's usually made out to be.
The Doomsday Argument goes something like this: start from zero information and ask yourself what a good prior should be for how long humanity will last. Consider that, a priori, there is nothing particularly special about you or your time. You are just as likely to be among the first humans as among the last. If you are equally likely to fall anywhere in the ordered sequence of all humans who will ever live, there is a 90% chance that you are in the middle 90%.
In the best case, if we are at exactly the 5% mark of all humans who will ever live, we can estimate that humanity will last about another 15,000 years at the current birth rate. In the worst case, if we are just entering the last 5% of humans who will ever live, humanity will last about 41 more years at the current birth rate. And if we are in the exact middle, humanity has about 770 more years.
And, again, assuming there is nothing particularly special about you or your time, there is a 90% chance the true value falls somewhere between those two estimates. This is before we process any additional information to update our estimates. I view it as a sort of uninformative prior, to apply in the absence of any other information: a starting point to add other information to.
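To make those numbers concrete, here is a small Python sketch of the arithmetic. The inputs are my own ballpark assumptions (roughly 100 billion humans born so far, roughly 130 million births per year), not figures from the argument itself:

```python
# Ballpark assumptions, not precise data.
HUMANS_BORN_SO_FAR = 100e9   # ~100 billion people born to date
BIRTHS_PER_YEAR = 130e6      # ~130 million births per year

def years_remaining(fraction_elapsed):
    """If we sit at `fraction_elapsed` of all humans who will ever be born,
    how many more years of births remain at the current birth rate?"""
    total_humans = HUMANS_BORN_SO_FAR / fraction_elapsed
    remaining_births = total_humans - HUMANS_BORN_SO_FAR
    return remaining_births / BIRTHS_PER_YEAR

print(f"Best case  (5% mark):  {years_remaining(0.05):,.0f} more years")  # ~14,600, i.e. the ~15,000 above
print(f"Middle     (50% mark): {years_remaining(0.50):,.0f} more years")  # ~770
print(f"Worst case (95% mark): {years_remaining(0.95):,.0f} more years")  # ~41
```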
xkcd makes an interesting point about this argument:
Almost everyone who hears this argument immediately sees something wrong with it.
The problem is, everyone thinks it’s wrong for a different reason. And the more they study it, the more they tend to change their minds about what that reason is.
Since it was proposed in 1983, it’s been the subject of tons of papers refuting it, and tons of papers refuting those papers. There’s no consensus about the answer; it’s like the airplane on a treadmill problem, but worse.
But it's worth considering the same idea applied to other areas. Imagine I went around randomly picking objects, figuring out how old they are, and applying the Doomsday Argument to each. I pick a tree that's 10 years old, and estimate there is a 90% chance it will last between 6 more months and 190 more years. I pick a random bridge and estimate how long it will stand. On average, my 90% confidence intervals should be correct about 90% of the time.
In fact, this was famously done by J. Richard Gott, who used this method to make a pretty reasonable prediction of when the Berlin Wall would fall.
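The rule itself fits in a few lines. Here is a sketch of the delta-t reasoning as I understand it (my own code, not anything from Gott's paper); the 10-year-old tree from above falls out directly:

```python
def gott_interval(age, confidence=0.90):
    """Given how long something has already existed, return an interval for how
    much longer it will last, assuming we observe it at a uniformly random
    point in its total lifetime."""
    tail = (1 - confidence) / 2      # 5% in each tail for a 90% interval
    low = age * tail / (1 - tail)    # we might be near the very end: age / 19
    high = age * (1 - tail) / tail   # or near the very beginning: age * 19
    return low, high

low, high = gott_interval(10)        # the 10-year-old tree
print(f"90% chance it lasts between {low:.2f} and {high:.0f} more years")
# -> 90% chance it lasts between 0.53 and 190 more years
```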
The same math was also famously applied by the Allies in World War II, who used it to estimate how many tanks the Germans were producing from the serial numbers of captured tanks. In one well-known case the statistical estimate came within a single tank of the true figure. The Doomsday Argument is just a special case of the same math, where the only "serial number" you get to observe is your own birth rank.
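For the curious, the textbook estimator for the tank problem is also tiny. The serial numbers below are made up for illustration; the real wartime analysis was more involved:

```python
def estimate_total(serials):
    """Estimate the total number of tanks produced from observed serial numbers,
    using the standard frequentist estimator: N ~= m + m/k - 1, where m is the
    largest serial seen and k is the number of observations."""
    m, k = max(serials), len(serials)
    return m + m / k - 1

print(estimate_total([19, 40, 42, 60, 68]))   # ~80.6 tanks in total

# With a single observation the estimate collapses to 2*m - 1: "roughly twice my
# own number" -- essentially the Doomsday Argument's median guess, when the one
# serial number you observe is your own birth rank.
print(estimate_total([68]))                   # 135.0
```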
It occurs to me that the original formulation of the Doomsday Argument is wrong. We are not randomly distributed among all humans who will ever live. Most humans who have ever lived would never have even thought of the Doomsday Argument; it wasn't even invented until 1983.
Assume instead that we are randomly distributed amongst all people who will ever think about the Doomsday Argument. This is a much smaller group. Let's also assume that the rate at which people are exposed to the argument has been roughly constant since 1983, and will stay that way.
Now the numbers work out very differently. The 90% confidence interval for the date on which the last person will ever think about the Doomsday Argument runs from 2017 to 2643. There is a 50% chance it will happen before 2049, and a greater than 70% chance it will happen before the end of the century.
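Here is the back-of-the-envelope version of those numbers. Treating 2016 as "now" roughly reproduces the figures quoted; that year, and the constant-exposure assumption, are both mine:

```python
FIRST_YEAR, NOW = 1983, 2016          # assumed "present" year for this sketch
elapsed = NOW - FIRST_YEAR            # 33 years of people encountering the idea

def years_left(fraction_elapsed):
    """Years of thinking-about-the-argument left, if `fraction_elapsed` of all
    the people who will ever think about it have already done so (and the rate
    of exposure stays constant)."""
    return elapsed * (1 - fraction_elapsed) / fraction_elapsed

print(f"{NOW + years_left(0.95):.0f}")   # ~2018: low end of the 90% interval
print(f"{NOW + years_left(0.50):.0f}")   # 2049: the median
print(f"{NOW + years_left(0.05):.0f}")   # 2643: high end of the 90% interval

# Chance the last thought happens before 2100: years_left < 84 exactly when
# fraction_elapsed > elapsed / (elapsed + 84).
print(1 - elapsed / (elapsed + 84))      # ~0.72
```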
I don't really know what to make of this. Again, I like to view it as an uninformative prior: a starting point to do Bayesian updates to as you add more information. But when I do add more information, like my beliefs about existential risks such as AI, the result gets even worse.
Thinking about anthropics really weirds me out. It's not intuitive at all, and the results just feel wrong. But still, it does seem really unlikely that, out of all the trillions and trillions of humans that could someday exist, I happened to be born into the first 1%, or even 0.01%. The odds of that are, well, less than 1%.
But if humanity ends in the next century or so, well, being born now isn’t terribly unlikely at all. If that’s the case, then right now is a relatively unremarkable point in time to exist. Earth’s population is larger than ever and growing, and that timeline puts us nearish to the middle of all humans that will ever live. And especially nearish the middle of all humans who will think about this argument…