# The Stopped Clock Problem

[Unusually for me, I actually wrote this and published it on Less Wrong first. I’ve never reverse-cross-posted something to my blog before.]

When a low-probability, high-impact event occurs, and the world “got it wrong”, it is tempting to look for the people who did successfully predict it in advance in order to discover their secret, or at least see what else they’ve predicted. Unfortunately, as Wei Dai discovered recently, this tends to backfire.

It may feel a bit counterintuitive, but this is actually fairly predictable: the math backs it up under some reasonable assumptions. First, let’s assume that the topic required unusual clarity of thought not to be sucked into the prevailing (wrong) consensus: say a mere 0.001% of people accomplished this. These people are worth finding, and listening to.

But we must also note that a good chunk of the population are just pessimists. Let’s say, very conservatively, that 0.01% of people predicted the same disaster just because they always predict the most obvious possible disaster. Suddenly the odds are pretty good that anybody you find who successfully predicted the disaster is a crank. The mere fact that they correctly predicted the disaster is evidence only of extreme reasoning; it is insufficient to tell whether that reasoning was extremely good or extremely bad. And on balance, most of the time, it’s extremely bad.
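To make the effect concrete, here is a minimal sketch of the base-rate arithmetic, using the toy numbers above and the simplifying assumption that both groups predicted this disaster with certainty while everyone else missed it:

```python
# Toy base-rate calculation: among people who correctly predicted the
# disaster, what fraction are cranks rather than clear thinkers?

p_clear = 0.00001   # 0.001% of people reasoned clearly enough to call it
p_crank = 0.0001    # 0.01% predict every obvious disaster regardless

# Assumption: both groups predicted this disaster with certainty,
# and everybody else missed it entirely.
p_predicted = p_clear + p_crank

# Posterior probability that a randomly chosen successful predictor
# is a crank (Bayes' rule; both likelihoods are 1, so it reduces to
# a simple ratio of base rates):
p_crank_given_correct = p_crank / p_predicted

print(round(p_crank_given_correct, 3))  # ~0.909
```

So even with these conservative numbers, roughly nine out of ten successful predictors you stumble across are cranks.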

Unfortunately, the problem here is not just that the good predictors are buried in a mountain of random others; it’s that the good predictors are buried in a mountain of extremely poor predictors. The result is that the mean prediction of that group is going to be noticeably worse than the prevailing consensus on most questions, not better.
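One hypothetical way to see this is to score the mixed pool of successful past predictors against the consensus on a routine question, using the Brier score (squared error between forecast and outcome; lower is better). The specific probabilities here are illustrative assumptions of mine, not numbers from the post:

```python
# On a routine question, disaster is actually unlikely (say 5%) and
# doesn't happen. Compare the consensus forecast against the pool of
# people who "called" the last disaster: mostly cranks, a few clear
# thinkers (10:1, matching the order-of-magnitude gap assumed above).

def brier(prediction: float, outcome: int) -> float:
    """Squared-error score for a probability forecast; lower is better."""
    return (prediction - outcome) ** 2

outcome = 0            # no disaster this time
consensus = 0.05       # consensus tracks the true probability
clear_thinker = 0.05   # good reasoners also track it
crank = 0.95           # cranks predict disaster every time

pool_mean = (10 * crank + 1 * clear_thinker) / 11

print(round(brier(consensus, outcome), 4))  # 0.0025
print(round(brier(pool_mean, outcome), 4))  # ~0.7537
```

The pool’s mean forecast scores dramatically worse than the consensus on the routine question, exactly because the cranks dominate it.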

Obviously the 0.001% and 0.01% numbers above are made up; I spent some time looking for real statistics and couldn’t find anything useful. This article claims roughly 1% of Americans are “preppers”, which might be a good indication, except that it provides no source and could equally well just be the lizardman constant. Regardless, my point relies mainly on the second group being an order of magnitude or more larger than the first, which seems (to me) fairly intuitively likely to be true. If anybody has real statistics to prove or disprove this, I would much appreciate them.

# What is a “Good” Prediction?

Zvi’s post on Evaluating Predictions in Hindsight is a great walk through some practical, concrete methods of evaluating predictions. This post aims to be a somewhat more theoretical/philosophical take on the related idea of what makes a prediction “good”.

Intuitively, when we ask whether some past prediction was “good” or not, we tend to look at what actually happened. If I predicted that the sun would rise with very high probability, and the sun actually rose, that was a good prediction, right? There is an instrumental sense in which this is true, but also an epistemic sense in which it is not. If the sun was extremely unlikely to rise, then in a sense my prediction was wrong – I just got lucky. We can draw this distinction more formally as follows:

• Instrumentally, a prediction was good if believing it guided us to better behaviour. Usually this means it assigned a majority probability to the thing that actually happened, regardless of how likely it really was.
• Epistemically, a prediction was good only if it matched the underlying true probability of the event in question.
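A scoring rule makes the two senses concrete. Suppose (my illustrative numbers, not anything above) the “true probability” of rain is 0.2, and it happens to rain. A lucky forecaster who said 0.9 scores better on the realized outcome, but worse in expectation over the true distribution:

```python
def brier(q: float, outcome: int) -> float:
    """Realized squared-error score for forecast q; lower is better."""
    return (q - outcome) ** 2

def expected_brier(q: float, p: float) -> float:
    """Expected score of forecast q when the true probability is p."""
    return p * (q - 1) ** 2 + (1 - p) * q ** 2

p_true = 0.2
calibrated, lucky = 0.2, 0.9

# It rained this time: the lucky forecast looks instrumentally better...
print(round(brier(lucky, 1), 4))       # 0.01
print(round(brier(calibrated, 1), 4))  # 0.64

# ...but the calibrated forecast is epistemically better: it wins in
# expectation over the true distribution of outcomes.
print(round(expected_brier(lucky, p_true), 4))       # 0.65
print(round(expected_brier(calibrated, p_true), 4))  # 0.16
```

In fact, for any proper scoring rule, expected score is optimized exactly when the forecast matches the true probability – which is the sense in which the epistemically good prediction is the right one.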

But what do we mean by “true probability”? If you believe the universe has fundamental randomness in it then this idea of “true probability” is probably pretty intuitive. There is some probability of an event happening baked into the underlying reality, and like any knowledge, our prediction is good if it matches that underlying reality. If this feels weird because you have a more deterministic bent, then I would remind you that every system seems random from the inside.

For a more concrete example, consider betting on a sports match between two teams. From a theoretical, instrumental perspective there is one optimal bet: 100% on the team that actually wins. But in reality, it is impossible to perfectly predict who will win; either that information literally doesn’t exist, or it exists in a form we cannot access. So we have to treat reality itself as having a spread: there is some metaphysically real probability that team A will win, and some metaphysically real probability that team B will win. The bet with the best expected outcome is the one that matches those real probabilities.
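This last claim can be checked numerically. Under fair odds, splitting your stake in proportion to the true probabilities maximizes expected log-wealth (this is the Kelly criterion). A sketch with assumed numbers, sweeping over possible splits:

```python
import math

p = 0.7                               # "real" probability team A wins
odds_a, odds_b = 1 / p, 1 / (1 - p)   # fair odds: payout per unit staked

def expected_log_wealth(f: float) -> float:
    """Stake fraction f on A and 1-f on B; return E[log final wealth]."""
    return p * math.log(f * odds_a) + (1 - p) * math.log((1 - f) * odds_b)

# Sweep candidate splits; the best one matches the true probability.
candidates = [i / 100 for i in range(1, 100)]
best = max(candidates, key=expected_log_wealth)
print(best)  # 0.7
```

Any split other than the true probabilities does worse in expectation, which is exactly the sense in which the epistemically correct prediction is also the best bet.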

While this definition of an “epistemically good prediction” is the most theoretically pure, and is a good ideal to strive for, it is usually impractical for actually evaluating predictions (thus Zvi’s post). Even after the fact, we often don’t have a good idea what the underlying “true probability” was. This is important to note, because it’s an easy mistake to make: what actually happened does not tell us the true probability. It’s useful information in that direction, but cannot be conclusive and often isn’t even that significant. It only feels conclusive sometimes because we tend to default to thinking about the world deterministically.

Eliezer has an essay arguing that Probability is in the Mind. While in a literal sense I am contradicting that thesis, I don’t consider my argument here to be incompatible with what he’s written. Probability is in the mind, and that’s what is usually more useful to us. But unless you consider the world to be fully deterministic, probability must also be in the world – it’s just important to distinguish which one you’re talking about.