The question of whether some philosophical ideas are too harmful to even be proposed was raised in a philosophy teaching group on Facebook. This essay is the result of that question, though this could easily be the subject of an entire book. As such, this is more of a quick ramble of thoughts about the matter than a complete theory of harmful ideas.
When addressing this question, a good starting place is sorting out the matter of harm: determining who would be harmed and the nature of the harm. From the perspective of those (who see themselves as) harmed, the answer is likely to be “yes.” But this leads to the matter of whether this (perceived) harm objectively warrants not proposing an idea—if such objectivity is possible.
The easy and obvious way to approach this morally is to use utilitarianism: if proposing a philosophical idea would generate more harm (negative value) than benefit (positive value) for the morally relevant beings, then it would be too harmful to propose. Ideas would need to be assessed on a case-by-case basis using a plausible account of value, a plausible system of weighing these values, and a plausible account of who is morally relevant. As would be expected, different people can rationally and reasonably come up with vastly different assessments—not to mention all the irrational and unreasonable assessments about dangerous ideas. The obvious counters to this utilitarian approach would be arguments in favor of other moral systems, such as a deontological theory of ethics.
The discussion of harmful ideas would also require setting some guidelines about the sort of ideas and harms that should be given serious consideration. For example, it is easy to make up a horror story case of a philosophical idea such that understanding the idea would lead to madness, catatonia or even death. Fortunately, these fictional cases are easy to address: these ideas would obviously be too harmful to propose. The moral justification would be analogous to arguments one might use to show that handing out poisoned food to people would be wrong. While such ideas might be possible, this seems to be an utterly theoretical concern: fun for horror stories but as worrisome as the possibility of being mauled by werewolves. That said, there is the science-fiction case of Roko’s basilisk that some might use as an example of an idea that is too harmful to propose. But there seems to be no evidence of any meaningful harm caused by this idea. As such, it would seem wise to focus on ideas that could cause actual harm.
Real philosophical ideas do cause harm. Obvious examples range from ideas that create mild discomfort in students to philosophical ideas that have been used to justify brutally oppressive governments. These ideas are already out in the wild but can be used as the starting point for discussions about general categories of new ideas. We should, of course, not make public an example of an idea we suspect might be too harmful to propose—at least if we have a working conscience.
One general category of ideas would be those that would cause psychological distress in those who hear them, perhaps because these ideas are about those people. To use a real example, a philosophical idea that gender is set by Platonic universals and is thus an objective feature of reality could cause some dismay and distress to some people—especially people for whom a choice of gender is critically important. To use another real example, the philosophical ideas used to argue for atheism (such as the problem of evil) can be very distressing to people of faith. This sort of matter would fall under existing concerns about ideas that cause similar distress—especially in the classroom. That is, these new ideas could be assessed in the context of how we already handle existing ideas.
Another general category of ideas would be those that could cause social, economic, or political harm if they were proposed and acted upon. As a real example, the philosophical underpinnings of fascism and racism (such as they are) have a role in the terrible harms done by these views. As another real example, those who possess great wealth and power would contend they have been harmed by the philosophical ideas underlying socialism, social justice, anarchism, and other views inimical to such concentrated wealth and power.
These ideas should be assessed in a manner analogous to how one would assess a new technology: what harms could it generate directly and what are likely scenarios in which it could be misused? While people often overestimate and underestimate the harms and benefits, engaging in an assessment is still preferable to simply letting an idea loose in the wild and hoping for the best. We would also need to keep in mind the obvious: what is harmful to some can be beneficial to others. To illustrate with a sci-fi example, if a philosopher has an idea that would effectively undermine capitalism and create a Star Trek style world, then this would be perceived as extremely harmful to the ruling classes and their supporters, yet would seem to be objectively beneficial to humanity as a whole. The rulers would, one assumes, see this idea as too harmful to propose.
In closing, there can be ideas too harmful to propose—but we lack a well-developed account of such ideas. This would make an excellent subject for a book.