As politicians and pundits debate reopening America, some make the case that we can and should reopen soon because the dire predictions turned out to be wrong. On its face, this seems like good reasoning: things are not as bad as predicted, so we can start reopening sooner than predicted. To use an analogy, if a fire was predicted to destroy much of your house but only burned your garage to the ground, then it is time to start planning on rebuilding and moving back in. While this line of thought is appealing, it can also be a trap. Here is how the trap works.
Some politicians and pundits are pointing out that the dire predictions did not come true. For example, the governor of Florida recently noted that the state's hospitals were not overwhelmed as predicted, and he wants to allow them to return to money-making elective surgeries. Like some other Republican governors, he also wants to reopen very quickly. This reasoning does initially seem sensible: the pandemic was not as bad as predicted, so we can quickly reopen. There are also those who sneer at the dire predictions and are upset at what they see as excessive precautions. This can also seem sensible: the experts predicted a really terrible outcome for COVID-19, but they were wrong. We overreacted and should roll back the precautions. So, reopen America.
While it is reasonable to consider whether the precautions are excessive and to update our assessment of when to reopen, there is a tempting fallacy that needs to be avoided. This can be called “the prediction fallacy.” It occurs when someone uncritically rejects a prediction, and the responses made to it, because the predicted outcome did not occur. The error in the logic occurs because the person fails to consider what should be obvious: if an effective response is made to a prediction, then the prediction is going to be “wrong.” The form of the fallacy is this:
Premise 1: Prediction P predicted X (if we do not do R).
Premise 2: Response R was taken based on prediction P.
Premise 3: X did not happen, so prediction P is wrong.
Conclusion: We should not have taken Response R (or no longer need to take Response R).
To use a concrete example:
Premise 1: Experts predicted that the hospitals in Florida would be overwhelmed (if we did not respond with social distancing and other precautions).
Premise 2: People responded to this prediction with social distancing and other precautions.
Premise 3: The hospitals in Florida were not overwhelmed, so the prediction was wrong.
Conclusion: The response was excessive and we no longer need to practice all those precautions and can open Florida up.
While it is (obviously) true that a prediction that turns out to be wrong is wrong, the error is to uncritically conclude that this proves that the response based on the prediction need not have been taken (or that we no longer need to keep responding in this way). The prediction in question assumes that we do not respond (or do not respond in a certain way), and the response is intended to address the prediction. If the response was effective, then the predicted outcome would not occur; that is the point of responding. To reason that the “failure” of the prediction shows that the response was mistaken or no longer needed is thus a mistake in reasoning. You could be right, but you need to do more than point to the failed prediction.
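One way to see why the inference fails is to read the prediction as a simple conditional: "if no response, then the dire outcome." As a rough sketch (treating the prediction as a material conditional, which flattens its genuinely counterfactual reading), a few lines of Python show that observing the response together with the non-occurrence of the outcome is entirely consistent with the prediction being true:

```python
# Toy sketch: read the prediction as the conditional "if not R, then X",
# where R = "the response was taken" and X = "the dire outcome occurred".
# (A material conditional is a simplification of the counterfactual claim.)

def prediction_true(r: bool, x: bool) -> bool:
    """The prediction 'if not R then X' as a material conditional."""
    return x if not r else True  # vacuously true whenever R is taken

# What we actually observed: the response was taken, the outcome did not occur.
print(prediction_true(r=True, x=False))  # True: consistent with the prediction
```

The only observation that would directly falsify the conditional is taking no response and still seeing no dire outcome, which is not what happened.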
To use a silly but effective analogy, imagine that we are in a car driving toward a cliff. You predict that if we keep going, we will go off the cliff and die. So, I turn the wheel and avoid the cliff. If backseat Billy gets angry and says that there was no reason to turn the wheel, or that I should turn it back because we did not die in a fiery explosion, Billy is falling for this fallacy. After all, if we had not turned, then we would have died. And if we turn back too soon, then we will die.
The same applies to COVID-19: by responding effectively to dire predictions, we change the outcome and the predictions turn out to be wrong. But to infer that the responses were excessive or that we should stop now simply because the results were not as dire as predicted would be an error.
This is not to deny what is obviously true: it is possible to overreact to COVID-19, and there will, one hopes, come a time when we can end some of the responses. But making these decisions based on the prediction fallacy would be a bad idea. There is also another version of this fallacy.
A variation on this fallacy involves inferring that the prediction itself was a bad one because it turned out to be wrong:
Premise 1: Prediction P predicted X if we do not do R.
Premise 2: Response R was taken based on prediction P.
Premise 3: X did not happen.
Conclusion: The prediction was wrong about X occurring if we did not do R.
While the prediction turned out to be wrong in the sense that the predicted outcome did not occur, this does not disprove the claim that X would have occurred without the response. Going back to the car example, the prediction that we would go off the cliff and die if we did not turn is not disproven by the fact that we turned and did not die. In fact, that is the result we want.
Getting back to COVID-19, the predictions made about what could occur if we did nothing are not disproven by the fact that they did not come true when we did something. So, to infer that these predictions must have been wrong about what would occur if we did nothing would be an error. We do, of course, rationally assess predictions based on outcomes, but this assessment cannot ignore the effect of the response when judging the prediction. Sorting out such counterfactual predictions is hard; in complex cases we can probably never prove what would have happened, but good methods can guide us. This is why we need to go with science and math rather than hunches and feelings.
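To make the point concrete, here is a deliberately crude toy model (illustrative arithmetic only, not epidemiology; the starting count and growth rates are invented numbers). It compares unchecked compounding growth with growth slowed by a response. The "no response" projection never shows up in the observed data precisely because the response changed the growth rate the projection assumed:

```python
# Illustrative toy model only: all numbers below are invented, not real data.

def cases_after(days: int, daily_growth: float, start: float = 100.0) -> float:
    """Simple compounding growth in case counts."""
    cases = start
    for _ in range(days):
        cases *= daily_growth
    return cases

projected = cases_after(30, 1.20)  # dire prediction: no response, 20% daily growth
observed = cases_after(30, 1.02)   # with response: growth slowed to 2% daily

# The dire projection "fails" only because the response changed the conditions
# it assumed; the gap between the two runs is the effect of the response.
print(round(projected), round(observed))
```

Judging the original projection fairly means comparing it to the counterfactual "no response" run, not to the observed numbers produced under the response.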
This fallacy draws considerable force from psychological factors, especially in the case of COVID-19. The response to the virus comes with a high cost, and we want things to get back to normal. So, ironically, the success of the response makes us feel that we can stop now or that we did not really need such a response in the first place. As always, bad reasoning can lead to bad consequences, and in a pandemic it can hurt and even kill many people.
Stay safe and I will see you in the future.