On the face of it, the notion of skill transference in education sounds reasonable: if a student learns one skill, such as Latin or geometry, that requires logical thinking, then this skill should transfer to other areas that involve logical thinking, such as categorical logic. But it turns out that these skills do not transfer. There have also been ill-fated attempts to find skills that would boost general intelligence—a good example being the view that learning to play an instrument or chess would have this effect. So far, this has not worked out as desired. While learning to play chess makes a person better at chess, it does not seem to boost general intelligence.
Because of its perceived value, there has been a concerted effort to teach students critical thinking. At my university this is one of the competencies we assess as part of our assessment of the General Education curriculum. There is, as would be imagined, an assumption that various and diverse general education classes can teach the general skill of critical thinking. Critical thinking is also a competency we assess in my Philosophy and Religion program, and there is, once again, an assumption that a general skill is being taught. Interestingly, both the national data and the data from my university show that students generally do not transfer critical thinking skills. What is extremely interesting is that these skills do not seem to transfer well even within a specific discipline. For example, one might think that taking Critical Inquiry (a critical thinking class) or Logic would confer general critical thinking skills that would be retained and applied in other philosophy classes. But this is generally not the case.
While it is not surprising that very specific skills would not transfer well (for example, learning about metaphysics might not help a student much in ethics), it does seem odd that supposedly general critical thinking skills do not transfer very well. Daniel Willingham provides an excellent analysis of this problem.
Willingham presents two excellent examples. One involves the difficulty people have with transferring an understanding of the law of large numbers from the context of randomness (such as dice) to cases such as judging academic performance. That is, a person who understands that rolling a set of dice twice will not tell you whether they are loaded might uncritically accept that a person who gets two bad math exam grades must be bad at math. Both involve the same sort of reasoning, inductive generalization, but the skill does not seem to transfer between the different applications. If it did, a person who understood the dice situation would also grasp that a sample of two math tests is too small to be an adequate sample.
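To put a rough number on why two observations are such weak evidence (a back-of-the-envelope sketch of my own, not a calculation Willingham gives): the law of large numbers says a sample average settles near the true average only as the sample grows, and for independent observations with spread $\sigma$ the standard error of the mean shrinks like $1/\sqrt{n}$:

\[
\operatorname{SE}(\bar{X}_n) = \frac{\sigma}{\sqrt{n}}, \qquad \operatorname{SE}(\bar{X}_2) = \frac{\sigma}{\sqrt{2}} \approx 0.71\,\sigma .
\]

With $n = 2$, whether the observations are exam grades or dice rolls, the estimate is nearly as noisy as a single observation, which is why neither pair tells you much about the student or the dice.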
His second example, a classic experiment, involved analogical reasoning. Subjects were asked how a tumor could be destroyed using a ray that, at the necessary strength, would also do a lot of collateral damage to the surrounding healthy tissue. Before being given this problem, the subjects read a story about rebels attacking a fortress that offered a strong analogy to the tumor situation. Despite having the solution right in front of them, most of the subjects could not solve the medical problem. The researchers found that telling the subjects that the story might help solve the problem resulted in almost all of them being able to apply the analogy. The researchers concluded that the problem was getting the subjects to use the analogy—the analogy itself was easy to use.
Willingham draws the conclusion that, “The problem is that previous critical thinking successes seem encapsulated in memory. We know that a student has understood an idea like the law of large numbers. But understanding it offers no guarantee that the student will recognize new situations in which that idea will be useful.” So how could this connect to the ability of people to hold to inconsistent beliefs?
As noted in my previous essays on inconsistent beliefs, people are good at believing claims that are inconsistent with each other. Two claims are inconsistent when they cannot both be true, but they could both be false. This is different from two claims being contradictory: if one claim contradicts another, one must be true and the other false.

As also noted in previous essays, my inspiration for this series of essays was seeing social media posts by Trump supporters presenting and professing belief in inconsistent (and sometimes contradictory) claims. To illustrate, Trump supporters tended to profess belief in Trump’s claims that the virus was no worse than the flu (and that it was a hoax). When Bob Woodward released tapes proving that Trump acknowledged the danger of the virus in February, many Trump supporters accepted Trump’s claim that he wanted to play down the virus to avoid a panic. His supporters began posting in his defense, asserting that great leaders lie in this way to keep morale up in the face of terrible danger (something Plato might accept, given his noble lie). They also claimed he was right to do this—to prevent panic in the face of a deadly virus. Laying aside all the moral issues here, there is an obvious logical problem: if Trump was right to lie to play down the virus because it is a terrible danger, then this is inconsistent with the claim that it is like the flu (or a hoax). So, if he had to lie because of the danger, then it is not like the flu (or a hoax). But if it is like the flu (or a hoax), then he did not need to lie about the danger. There is a certain unpleasant fun to be had in getting a Trump supporter to profess belief in these inconsistent claims in the space of a short Facebook interaction, but almost anyone can easily be caught holding inconsistent beliefs. The transference problem can help explain some of this.
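To make the distinction explicit, here is a minimal formalization in standard propositional terms (my gloss, not wording from the original essays). For claims $P$ and $Q$:

\[
\text{Contradictory: } P \equiv \neg Q \quad (\text{in every situation exactly one of them is true}),
\]
\[
\text{Inconsistent (contrary): } P \land Q \text{ is impossible, yet } \neg P \land \neg Q \text{ is possible}.
\]

For instance, let $P$ be "the virus is no worse than the flu" and $Q$ be "the virus is so dangerous that lying about it was justified." They cannot both be true, but both would be false if the virus were serious yet not serious enough to justify lying, so they are inconsistent rather than contradictory.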
As Willingham has shown, people are generally bad at transferring their critical thinking skills between different situations. Differences in content, as he noted, can prevent people from seeing what can be made obvious with the right context. Because of this, a person might be very good at discerning inconsistency in specific cases but fail completely in other cases. As an example, consider a Trump supporter who is very good at finding inconsistencies in claims made by liberals they disagree with. They are motivated to find such problems and continued practice can make them quite good at finding inconsistencies in very specific contexts. But if the context is switched to their own beliefs, the change can suffice to prevent the transference of this skill. That is, they can readily see the inconsistencies of a liberal in a specific context but are literally unable to see their own inconsistencies. This is analogous to the subjects in the analogy experiment: they had the answer right in front of them but were blind to it until it was pointed out to them.
Put in general terms, people with strong political views can get considerable practice attacking and criticizing views they disagree with—so they can develop critical thinking skills they can apply in very specific contexts. But people rarely subject their own beliefs to intense logical scrutiny. People almost never carefully compare their core beliefs to check for logical inconsistencies—so they have little practice or experience doing so. Hence, they will tend to be very bad at noticing what should be blindingly obvious inconsistencies. This, of course, assumes that people are being honest—they hold to the beliefs they are professing and are not lying as a strategy. It is to this that I will turn in my next essay.