In the previous essay in this series, I presented the argument by elimination and ended with a promise to address how to assess the competition between explanations. The overall method of elimination in this context can be presented in the following form:
Premise 1: There are X (some number) explanations for Y (some phenomenon).
Premise 2: E (an explanation) is the best of the X explanations.
Conclusion: E is (probably) correct.
Sorting out the second premise involves “scoring” each explanation and then comparing these scores to see which one does the best. As noted in the previous essay, to the degree that there are reasonable doubts that all plausible explanations have been considered, there are reasonable doubts that the correct explanation has been found. But the focus of this essay is on the competition.
While the scoring metaphor is useful, scoring explanations is not exact and admits of some subjectivity. As such, reasonable people can reasonably disagree about the relative ranking of explanations. That said, there are objective standards used in assessing explanations.
Conspiracy theorists often use this method and argue their theory best explains the facts. However, problems often arise when all the standards for assessing explanations are applied.
There are obvious defects that any explanation needs to avoid to be a good explanation. At a minimum, an explanation needs to avoid being too vague (lacking adequate precision), ambiguous (having two or more meanings when it is not clear which is intended), or circular (merely restating what is to be explained). These are minimal standards because an explanation that cannot meet them is not worth considering. For example, if an explanation is too vague, one does not even know what it is saying. There are other standards as well.
One standard is that an explanation needs to be consistent with established fact and theory. As would be imagined, conspiracy theories will almost by definition fail to meet this standard. For example, the conspiracy theory that NASA faked the moon landing goes against established fact. As another example, the view that the earth is flat goes against established fact and theory.
When faced with this standard, conspiracy theorists will often point out that some now established facts and theories were once inconsistent with the facts and theories of an earlier time. On the one hand, they are right to point out that old facts and theories have been overturned, and thus this standard is not decisive. On the other hand, the fact that this has occurred in other cases does not show that a specific conspiracy theory is correct. To use an analogy, while it is true that some criminal convictions have been overturned, this does not entail that a specific convicted person is innocent. Overturning established fact and theory requires showing that they have defects serious enough to warrant their overthrow; merely pointing out that this has happened before does not show that it will happen in any particular case. When explanations compete, the explanation that better matches established fact and theory is better, unless compelling reasons can be given to overturn them.
A second standard is that an explanation needs to keep it simple. This involves avoiding unnecessary assumptions and needless complexities. The famous Occam’s Razor falls under this standard, with its injunction not to multiply entities beyond necessity. For example, explaining the phenomenon of night terrors in physiological terms, as opposed to invoking demons or witches, is simpler and hence better. As another example, those who favor evolution over creation contend that the theory of evolution explains everything that the creation explanation explains but has the advantage of not postulating God. As a third example, faking the moon landing in the 1960s would have required far more advanced technology than was available at the time, as well as a global conspiracy between competing nations. The simpler explanation is that the landings took place. As the examples illustrate, explanations compete in terms of simplicity: all other things being equal, the simpler explanation is better.
Explanations can become more complicated as they deal with problems or objections. This need not be a fatal problem if the increased complication is warranted. In other cases, the increased complexity is ad hoc and serves primarily to save the explanation from criticism in an unprincipled way. This typically involves presenting more explanations to account for the problems that arise for the original explanation. For example, when experiments show that the earth is not flat, flat-earthers try to explain away these results by invoking some new factor(s), such as a previously unknown type of energy that affects gyroscopes. When challenged, they can say that this is an accepted method in science: almost all explanations are modified as complications arise. The challenge, then, is sorting out what is a legitimate modification in the face of a complication and what is an ad hoc attempt to save the explanation by bringing in new entities or complexities. This leads to what might be the most important standard, that of testability.
If an explanation gets it right, then it should yield predictions that turn out to be true. These predictions need to be testable; otherwise there is no way to know whether the explanation is correct. As such, if an explanation produces predictions that cannot be tested, then that is a problem for establishing its correctness: it might be correct, but we cannot know. If an explanation yields predictions that are tested and turn out to be false, then that is a problem for the theory, but it need not be fatal. As noted above, an explanation can be modified in the face of failure to account for that failure. This should yield a new prediction that can be tested. If the prediction turns out to be true, that is a plus for the explanation. As would be suspected, explanations compete in terms of explanatory power: all other things being equal, the explanation that yields better predictions is better. If the new prediction turns out to be false, then the explanation can be modified again to yield another prediction for testing. For example, if a new type of energy is postulated to explain how gyroscopes work, then predictions need to be made and tested for this energy. If it is claimed that the prediction is simply that gyroscopes work the way they do, and thus that the energy has been shown to be real, then this is reasoning in a circle. Unsurprisingly, this is where conspiracy theories often hit the rocks: they advance explanations that yield false predictions, modify the explanations, get more false predictions, modify again, and so on. The problem does not lie with the basic method: as noted above, modifying explanations in a principled way in the face of findings is legitimate. The problem is that the proponent of the explanation simply refuses to accept testability; nothing can refute their explanation because they will simply modify it to respond to every failure.
It might be objected that such persistence is a good thing: if the great thinkers of the past had given up at the first failure of their predictions, we would not be where we are today. While there is much to be said for persistence, there is a point at which the proponent is simply refusing to accept testability, so that nothing could ever refute their explanation. But this renders the theory meaningless: it becomes useless as an explanation. To use a silly analogy, consider the invisible unicorn.
When I was in grade school, a kid told us they had a unicorn. Being kids, we doubted this but really wanted to see it. The unicorn kid claimed that we could not see it because it was invisible. A smart kid pointed out that we should be able to hear it, but unicorn kid said their unicorn was silent. Then someone said that we should be able to touch it or see its prints on the ground, to which unicorn kid replied that it was too quick and was flying. And so on, for every test that would prove (or disprove) the unicorn. While we might not be able to draw an exact line at which an explanation starts becoming an invisible unicorn, once it reaches that zone the game is over. As would be guessed, conspiracy theories often end up in the land of invisible unicorns.