The question of whether some philosophical ideas are too harmful to even be proposed was raised in a philosophy teaching group on Facebook. This essay is the result of that question, though this could easily be the subject of an entire book. As such, this is more of a quick ramble of thoughts about the matter than a complete theory of harmful ideas.
When addressing this question, a good starting place is sorting out the matter of harm: determining who would be harmed and the nature of the harm. From the perspective of those (who see themselves as) harmed, the answer is likely to be “yes.” But this leads to the matter of whether this (perceived) harm objectively warrants not proposing an idea—if such objectivity is possible.
The easy and obvious way to approach this morally is to use utilitarianism: if proposing a philosophical idea would generate more harm (negative value) than benefit (positive value) for the morally relevant beings, then it would be too harmful to propose. Ideas would need to be assessed on a case-by-case basis using a plausible account of value, a plausible system of weighing these values, and a plausible account of who is morally relevant. As would be expected, different people can rationally and reasonably come up with vastly different assessments—not to mention all the irrational and unreasonable assessments about dangerous ideas. The obvious counters to this utilitarian approach would be arguments in favor of other moral systems, such as a deontological theory of ethics.
The discussion of harmful ideas would also require setting some guidelines about the sort of ideas and harms that should be given serious consideration. For example, it is easy to make up a horror story case of a philosophical idea such that understanding the idea would lead to madness, catatonia or even death. Fortunately, these fictional cases are easy to address: these ideas would obviously be too harmful to propose. The moral justification would be analogous to arguments one might use to show that handing out poisoned food to people would be wrong. While such ideas might be possible, this seems to be an utterly theoretical concern: fun for horror stories but as worrisome as the possibility of being mauled by werewolves. That said, there is the science-fiction case of Roko’s basilisk that some might use as an example of an idea that is too harmful to propose. But there seems to be no evidence of any meaningful harm caused by this idea. As such, it would seem wise to focus on ideas that could cause actual harm.
Real philosophical ideas do cause harm. Obvious examples range from ideas that create mild discomfort in students to philosophical ideas that have been used to justify brutally oppressive governments. These ideas are already out in the wild but can be used as the starting point for discussions about general categories of new ideas. We should, of course, not make public an example of an idea we suspect might be too harmful to propose—at least if we have a working conscience.
One general category of ideas would be those that would cause psychological distress in those who hear them, perhaps because these ideas are about those people. To use a real example, a philosophical idea that gender is set by Platonic universals and is thus an objective feature of reality could cause dismay and distress to some people—especially people for whom a choice of gender is critically important. To use another real example, the philosophical ideas used to argue for atheism (such as the problem of evil) can be very distressing to people of faith. This sort of matter would fall under existing concerns about ideas that cause similar distress—especially in the classroom. That is, these new ideas could be assessed in the context of how we already handle existing ideas.
Another general category of ideas would be those that could cause social, economic, or political harm if they were proposed and acted upon. As a real example, the philosophical underpinnings of fascism and racism (such as they are) have a role in the terrible harms done by these views. As another real example, those who possess great wealth and power would contend they have been harmed by the philosophical ideas underlying socialism, social justice, anarchism, and other views inimical to such concentrated wealth and power.
These ideas should be assessed in a manner similar to how one would assess a new technology: what harms could it generate directly and what are the likely scenarios in which it could be misused? While people often overestimate or underestimate the harms and benefits, engaging in an assessment is still preferable to simply letting an idea loose in the wild and hoping for the best. We would also need to keep in mind the obvious: what is harmful to some can be beneficial to others. To illustrate with a sci-fi example, if a philosopher has an idea that would effectively undermine capitalism and create a Star Trek style world, then this would be perceived as extremely harmful to the ruling classes and their supporters, yet would seem to be objectively beneficial to humanity as a whole. The rulers would, one assumes, see this idea as too harmful to propose.
In closing, there can be ideas too harmful to propose—but we lack a well-developed account of such ideas. This would make an excellent subject for a book.
This essay is good, but doesn’t address the question of who gets to decide which ideas are harmful. What do we do, for example, when two groups of people have opposing views on God, gender or whatever, and both consider the opposite view to be harmful? How does society decide?
Most people agree that the best societies are democracies, where everyone gets a say and an informed choice in the laws and rules under which they live. But how can they have their say, and how can their choice be informed, if they cannot hear both sides of every argument?
Further, how should society react when one self-selected group declares itself to be the arbiter of truth and decides that only its views on a given subject may be spoken? And what if that group gains power, and decides that any opposing views should be punished?
The question is not theoretical. The scenario is called fascism. We saw this in the Soviet Union and in East Germany under the Stasi, and we see it today in many authoritarian regimes, including China and North Korea, where dissenters are imprisoned, tortured, or, if they’re lucky, re-educated.
And, of course, there was Nazi Germany, driven by an anti-Semitism we can all agree was harmful, but enabled by a culture that ensured all dissenting views were punished. History shows that every evil regime does this, outlawing ideas it considers harmful to its cause.
Talk about ‘which philosophical ideas are too harmful to propose’ may seem harmless enough, but the authoritarian societies it leads to are far from it. Perhaps then, it is itself an idea that should be banned? Or is freedom to discuss all ideas a good thing?
I myself do not wish to abandon the hard-won freedoms of the Enlightenment and of liberal democratic society. Abandoning them may well provide short-term gains, in terms of silencing political views that we personally oppose, but in doing so we risk introducing an authoritarianism that brings individual harm and social injustice on a mass scale.
We need to cherish and protect our open society, our freedom of speech, our ability to say what we believe to be true, without fear of punishment. These are not freedoms to be carelessly given away undemocratically by a new authoritarian academia.
A truth that is harmful to others may be helpful to me. And when a society uses lies to “keep the peace”, the peace will inevitably fail as the truth always leaks out, and those truths will be interpreted as “injustice” or “hate” or “oppression”. Better to build your society on truth. Otherwise you get 1984-style totalitarianism. Which, it seems, may not be that far away.
Well, truth is a sword and cares not whom it cuts.