In 1985, Officer Julius Schulte responded to a missing child report made by the then girlfriend of Vernon Madison. Madison snuck up on the officer and murdered him, shooting him in the back of the head. Madison was found guilty and sentenced to death.

As the wheels of justice slowly turned, Madison aged and developed dementia. He was scheduled to be executed in January 2018, but the execution was delayed and the Supreme Court heard his case. The defense argued that Madison’s dementia prevented him from remembering the crime and that his execution would thus violate the constitutional ban on cruel and unusual punishment. The prosecution seemed to agree that Madison could not recall the crime but argued he should be executed because he could understand that he would be put to death for being convicted of murder. In a 5-3 opinion, the Court held that the Eighth Amendment may permit executing a prisoner even if they cannot remember committing their crime, but it may prohibit executing someone suffering from dementia or another disorder rather than psychotic delusions. The Court also held that if a prisoner is unable to rationally understand the reasons for their sentence, the Eighth Amendment forbids their execution. While the legal issue has been settled (for now), philosophical questions remain.

While metaphysics might seem far removed from the courts, as John Locke noted, “in this personal identity is founded all the right and justice of reward and punishment…” The reason for this is obvious: it is only just to punish (or reward) the person who committed the misdeed (or laudable deed). Locke is talking about metaphysical personal identity: what it is to be a person and what it is to be the same person across time. As such, he is using the term technically and not in the casual sense in which terms like “person” and “man” or “woman” are used interchangeably.

In the normal pursuit of legal justice, the practical goal is to find the right person and there are no worries about the metaphysics of personal identity. But in unusual circumstances, the question can arise as to whether what seems to be the same person really is the same person. For example, one might wonder whether a person with severe dementia is the same metaphysical person who committed a crime long ago. Appropriately enough, John Locke addressed this problem in considerable detail.

In discussing personal identity, Locke notes that being the same man (or woman) is not identical with being the same person. For him, being the same man is a matter of biological identity: it is the same life of the body through which flows a river of matter over the years. Being the same person is having the same consciousness. Locke seems to take consciousness to be awareness and memory. In any case, he hinges identity on memory such that if memory is irretrievably lost, then the identity is broken. For example, if I lose the memory of running a 5K back in 1985, then I would not be the same person as the person who ran that 5K. I am certainly a slower person, even if I am the same person. If a loss of memory does entail a loss of personal identity, then perhaps a “memory defense” could be used: a person who cannot remember a crime is not the person who committed the crime.

Locke does consider the use of the memory defense in court and addresses this challenge with practical epistemology. If the court can establish that the defendant is the same man (biological identity), but the defendant cannot establish that they have permanently lost the memory of the misdeed, then the matter will be “proved against them” and they should be found guilty. Locke does remark that in the afterlife, God will know the fact of the matter and punish (or reward) appropriately. However, if it can be established that the person does not remember what the man (or woman) did, then they would not be the same person as that man (or woman). For Locke, punishing a different person for what the same man did would be unjust.

While there is the practical matter of knowing whether a person has forgotten, this seems to have been established in the Madison case. While people can lie about their memory, dementia is difficult to fake, as there are objective medical tests for the condition. As such, concerns about deception can be set aside and the question remains as to whether the person who committed the crime is still present to be executed. On Locke’s theory, he is not—the memories that would forge the chain of identity have been devoured by the demons of dementia.

There are, of course, many other theories of personal identity to choose from. For example, one could go with the view that the same soul makes the same person. One must simply find a way to identify souls to make this work. There are other options to pull from the long history of philosophy. It is also worth considering various justifications for punishment in this context.

Punishment is typically justified in terms of rehabilitation, retribution, and deterrence. While rehabilitation might be possible in the afterlife, execution cannot rehabilitate a person for the obvious reason that it kills them. While execution has obviously failed to deter the person being executed, it could be argued that it will deter others—a matter of ongoing debate. It could even be contended that executing a person with dementia would have extra deterrent value: showing that the state is willing to kill even people with dementia would make the state more terrifying. For the deterrence justification, the metaphysical identity of the person does not seem to matter. What matters is that the punishment would deter others, which is essentially a utilitarian argument.

The retribution justification takes us back to personal identity: retribution is only just if it is retribution against the person who committed the crime. It could be argued that retribution only requires retribution against the same man (or woman) because matters of metaphysics are too fuzzy for such important matters. One could also use the retribution justification by advancing another theory of personal identity. For example, at one point David Hume argues that a person is a bundle of perceptions united by a causal chain (rather like how a nation has its identity). On his view, memory discovers identity but (unlike for Locke) it is not the basis of identity. Hume explicitly makes the point that a person can forget and still be the same person; so, Madison could still be the same person who committed the crime on Hume’s account. However, Hume closes his discussion on personal identity in frustration: he notes that the connections can become so tenuous and frayed that one cannot really say if it is the same person or not. This would seem to apply in cases of dementia and hence Madison might not be the same person, even in Hume’s view.

This view could be countered by arguing that it is the same person regardless of the deterioration of mental states. One approach, as noted above, is to go with the soul as the basis of personal identity or make an intuition argument by asking “who else could it be but him?” One could, of course, also take the pragmatic approach and set aside worries of identity and just embrace what the court decided. Vernon Madison was not executed but died on February 22, 2020.

The Declaration of Independence asserts a variation of Locke’s political philosophy, claiming that all men are created equal and have the natural rights to life, liberty and the pursuit of happiness. Locke said there is a right to property rather than a right to the pursuit of happiness. As one of my political science professors noted, the founders had most of the property and did not want other people to get ideas.

If this document is taken seriously as a statement of American political philosophy and values, it commits all Americans to the equality of people and to the three basic rights of life, liberty and the pursuit of happiness. While the notion of equality and the specifics of these rights are subject to debate and disagreement, their interpretation cannot stray too far, or they become meaningless or absurd. For example, when South Carolina seceded from the Union, the authors of its declaration of secession appealed to the principle of liberty as a justification for maintaining slavery. Asserting that the natural right of liberty justifies rebellion to maintain the violation of the natural right of liberty is clearly an absurd position, but no more absurd than positions taken on rights today.

While slavery is currently illegal (with a few exceptions), there are still violations of the principle of equality and these natural rights. As might be suspected, minorities are often the targets of such violations. Skeptics often say they see no evidence of systematic violations in their own experiences and then claim there is no such thing. If examples are offered, the response is usually that these examples are anecdotal evidence or that the alleged violations are not real violations but consequences brought about by the individuals in question. That is, the victims must have done something wrong that justified what was done to them.

These replies do have some appeal. After all, an appeal to anecdotal evidence to establish a general claim would be a fallacy. There can also be cases in which apparent violations are instead self-inflicted harm. Responding to the charge of anecdotal evidence requires presenting statistical data in support of the claim that such violations exist. Responding to the assertion that the apparent violations are the fault of the alleged victims requires showing that the harms are inflicted rather than self-inflicted.

The statistical evidence for inequality is overwhelming, with blacks and Hispanics in the United States consistently being worse off than white Americans. The disparity begins at birth, as infant mortality for blacks is more than double that of whites. It ends, one assumes, at death. While the life expectancy of Americans has been declining, black Americans have a lower average life expectancy than white Americans. It should be noted that “deaths of despair” have increased among middle-aged whites as they have been facing conditions routinely endured by blacks and Hispanics (notably a shortage of steady, well-paying jobs). While this might be seen as evidence against the existence of racism (that social ills are increasingly killing whites, too), it serves more to highlight the impact of economic disparity that has always been present. That class disparity is “equalizing” the harms of racism is obviously not a good thing.

Between birth and death, blacks and Hispanics are far more likely to grow up in poverty, less likely to graduate from high school, less likely to be enrolled in college, more likely to earn less money, more likely to lack insurance, and far less likely to own rather than rent. This is not to deny that there are whites who are in dire straits nor is it to ignore anecdotes about the misfortunes of whites. However, this is a matter of statistics and in general blacks and Hispanics are worse-off than whites. While this establishes the statistical evidence, there remains the question of causation.

The racist explanation is that whites are generally superior to blacks and Hispanics and hence do better at life. This view of racial superiority and inferiority is, by definition, racist. However, being morally repugnant does not make something false; being untrue does.

If there were different races with different abilities, this would show up in genetic testing. However, the scientific evidence is that there is no biological foundation to the categories of race. It could be argued that the differences are undetectable by current science or, perhaps, that they are metaphysical in nature. The obvious problem with such claims is that they are based on a fallacious appeal to ignorance, and the burden of proof rests on those who claim to know there is a difference. As such, the biological superiority argument fails.

Another stock explanation is cultural: white culture is superior to black and Hispanic culture, so whites do better. This avoids the appeal to biological race and instead attributes negative traits (like laziness or criminality) to the cultures. One point of concern with this approach is defining the cultures in question, since Americans share a broad culture. Another is that anyone can adopt the allegedly successful culture (or appropriate it) and thus succeed; if it were that simple, inequality would presumably have ended long ago. Even if the cultural hypothesis is accepted, there remains the question of why such cultures exist and have the alleged traits.

Given the historical facts of slavery and racism, the most plausible explanation is that blacks and Hispanics inherit many of the residual harms of the past centuries while the white population, in general, inherits the benefits. While there are some remarkable rags-to-riches stories, the United States has low economic mobility, and even this has been declining. As such, it is no wonder that people whose ancestors were slaves in the United States would still be doing worse than those whose ancestors owned slaves. After all, wealth provides an enduring advantage, and poverty provides an enduring disadvantage.

Some make the argument that since slavery ended over a century ago, its effects cannot possibly be felt. While this is an absurd claim (think of the old money families who owe their wealth to things that happened long ago), one need not rely on an appeal to the impact of the past. One can simply run through examples of and data about contemporary racism.

Those who disagree with this claim will, of course, endeavor to claim that the examples are isolated incidents and that the statistics are either in error or lies. The challenge for them is to respond to the data with opposing data of equal or greater credibility. The other main alternative, as noted above, is to persist in arguing that while the harms are real, they are self-inflicted. While people are sometimes their own worst enemy, the evidence is solid that many of the harms of inequality are inflicted rather than self-inflicted. These harms, in turn, impact the liberty and life of those affected—which runs against the spirit of the Declaration of Independence. But such fact-based arguments are generally ineffective, as these beliefs are based on values rather than logic. That is, racists are not racists because they have false factual beliefs about statistical data; they are racists because of their values.

It is common practice to sequence infants to test for various conditions. From a moral standpoint, it seems obvious that these tests should be applied and expanded as rapidly as cost and technology permit (if the tests are useful, of course). The main argument is utilitarian: these tests can find dangerous, even lethal conditions that might not be otherwise noticed until it is too late. Even when such conditions cannot be cured, they can often be mitigated. As such, there would seem to be no room for debate on this matter. But, of course, there is.

One concern is the limited availability of medical services. Once an infant is sequenced, parents will need experts to interpret the results. If sequencing is expanded, this will involve dividing limited resources, which will create the usual problems. While the obvious solution is to train more people to interpret results, this faces the usual problems of expanding the number of available medical experts. Another resource problem will arise when problems are found. Parents who have the means will want to address the issues the tests expose, but not everyone has the resources. Also of concern is the fact that conditions that can be found by sequencing can manifest at different times: some will become problems early in life, others manifest later. This raises the problem of distributing access to the limited number of specialists so that infants with immediate needs get priority access.

One obvious reply to the concerns about access is that this is not a special problem for infant sequencing; it runs broadly across health care. And, of course, there is already a “solution”: the rich and connected get priority access to care. The same “solution” will presumably also be applied in the case of sequencing infants.

Another sensible reply to these concerns is that these are not problems with sequencing but problems with the medical system: shortages of medical experts and difficulty in accessing the system based on need. Sequencing infants will put more burden on the system, and this does raise the moral question of whether the burden will be worth the return. On the face of it, improving medical care for infants would seem to be worth it.

A second concern about sequencing is that, like other medical tests, it might end up doing more harm than good. On the face of it, this might seem an absurd thing to claim: how could a medical test do more harm than good? After all, knowing about potential health threats ahead of time is analogous to soldiers knowing of an upcoming ambush, or a community knowing about an incoming storm before it arrives. In all these cases, foreknowledge is good because it allows people to prepare and makes it more likely that they will succeed. As such, sequencing is the right thing to do.

While this view of foreknowledge is plausible, medical tests are not an unmitigated good. After all, medical tests can create anxiety and distress that outweigh the good they do. There is also an established history of medical tests that are wasteful and, worse, ones that end up causing significant medical harm. Because of the potential for such harms, it would be unethical to simply rush to expand sequencing. Instead, the accuracy and usefulness of the tests need to be determined first.

It might be countered, with great emotion, that if even a single child is saved by rapidly expanding sequencing, then it would be worth it. The rational reply is, of course, that it would not be worth it if expanding the sequencing too quickly ended up hurting many children. As such, the right thing to do is to address the possible risks rationally and avoid getting led astray by fear and hope.

 

David Hume is credited with raising what is now known as the problem of induction. As Hume noted, the contrary of any matter of fact is logically possible. To illustrate, it is not a contradiction to claim that although the earth is now revolving around the sun, this will not hold true tomorrow. This is in contrast with what he called relations of ideas (truths of reason), as it is a contradiction to deny them. For example, to deny that a triangle has three sides is to assert that a three-sided figure does not have three sides.

In considering our reasoning about matters of fact, Hume notes that we try to justify our beliefs by appealing to other beliefs about causal laws. That is, people tend to think that there is a causal order set in the laws of nature that ensures a consistent universe. For Hume, an empiricist, this process is based on experience. As he sees it, people observe similarities between events and then form the expectation that the same things will occur in unobserved cases (such as those occurring in the future). While most of us have faith in causality based on our experience, Hume contends that the reasoning from the observed cases to the unobserved cases is unwarranted. The gist of his argument focuses on the idea that the future will be like the past, which is essential to engaging in inductive reasoning about the future. This sort of reasoning takes the form of inferring that because X happened in situation Y in the past, X will happen in situation Y in the future. For example, people think the earth will still be revolving around the sun tomorrow because it has done so in the past. The challenge is showing that this reasoning is warranted. Hume claims this cannot be done.

As Hume argued, the argument that because X has happened in the past, X will happen in the future is not a sound deductive argument. This is because it could be true that X has happened in the past, while the conclusion could still be false. A sound deductive argument must, of course, be valid (such that if all the premises are true, then the conclusion must be true) and have all true premises. This is by definition.

If one attempts to justify inductive logic by using an inductive argument, this will beg the question. To justify induction by induction, inductive logic would already need to be justified. As such, neither a deductive nor inductive argument can justify induction and so we get the problem of induction. In practical terms, the problem is that since an inductive argument always involves a leap from what has been observed to what has not been observed, even if all the premises are true and the reasoning is strong, the conclusion could still be false.

Like many other philosophical problems, the problem of induction initially seems silly and trivial. It seems silly because, as Hume noted, only a fool or a mad person would deny faith in induction. For example, someone who insisted that while fire is hot today it might be cold tomorrow would be regarded as deranged. It seems trivial because, like the problem of the external world, it seems to have no real-world implications. However, it is neither silly nor trivial.

The easy way to argue for this is to point out that the problem of induction has serious practical consequences. Inductive reasoning is used in all aspects of life, and the consequences of not keeping this problem in mind range from the embarrassing to the disastrous. For example, most of the inductive generalizations (surveys and polls) predicted that Clinton would win in 2016. While many were shocked when these polls “got it wrong”, this was one more example of the problem of induction: no matter how carefully the evidence is gathered and how skillfully the argument is crafted, the conclusion can always be false. As another example, a person might be confident that they will safely arrive at their destination and end up dying in a plane crash. After all, that inference is also inductive. More broadly, the problem infects all inductive reasoning, from simple analogies to large-scale scientific experiments. As such, only fools and lunatics fail to worry about the problem of induction and to consider that, no matter how careful they are in their reasoning, they could still get things wrong.

At this point, it might be claimed that although this practical aspect of the problem of induction is a meaningful problem, the philosophical variation is still trivial and silly. To be more specific, the notion that our faith in basic aspects of reality is unfounded is a silly idea. For example, to say that while gravity, fire and electromagnetism work in certain ways now, they might not work the same tomorrow would be absurd. Gravity will always work as it does, fire will always burn and so on. Even those who accept that inductive arguments can always fail tend to have faith in a consistent and reliable reality. However, as Hume argued, this faith is unwarranted.

As noted above, the idea that induction can fail in everyday cases is reasonable. For example, it is clearly not absurd to consider that while someone loves you today, they might stop loving you someday. As another example, it is not silly to think that while you have never been allergic to bee stings in the past, you might become allergic to them. In such cases, our faith is not absolute, and we accept the possibility of error. But in the case of things like fire and gravity, our faith tends to be absolute. A seemingly faithful spouse might betray their partner, but fire will always burn. Of course, our faith reflects our feelings and not reality: we feel strongly, but do not know, that fire will always burn, and so on for the other matters of our faith in the workings of the world. If we set aside our faith and consider the matter in terms of inductive reasoning, then we would realize that our confidence that the future will be like the past is not well founded. We could be wrong, though we certainly feel otherwise. After all, the same inductive logic used for brand loyalty (“my previous Asics shoes were good, so the next pair I buy will also be good”) is also used for predicting that future fire will be like past fire. The main difference thus cannot be in the logic; it lies in how we feel. Because of this, what is needed is not another logical argument about the problem, but a way to sway intuitions. This is a common approach with big and weird philosophical problems, such as the problem of the external world.

The problem of the external world, most famously developed by Descartes in his Meditations, is the problem of proving that the world I think I am experiencing is really real. Like most philosophy professors, for years I found it challenging to motivate students to see the problem as a real problem. After all, thinking that the world is not real seems like insanity. Then The Matrix came out and getting people to accept the problem became easy. Fortunately, shows like Black Mirror provide fresher examples. Unfortunately, there has yet to be a big movie or show with the problem of induction as a central theme. However, I can use video games to illustrate this problem.

Imagine, if you will, that you are a character in a video game like Destiny 2, World of Warcraft or Warframe. From your perspective, the world has rules, and things work in the same way. At least until they do not. After all, a game world is under the control of the programmers, and they can change its reality at will. Think of what the inhabitants of such game worlds would think if they were aware and could remember what had come before. For example, the developers of Destiny 2 accidentally released a bugged weapon, the Prometheus Lens, into the game. Because of the bug, the weapon could kill a character in player versus player battles almost instantly, making it insanely overpowered and broken. Bungie then patched the weapon (“nerfing” it, in gamer slang) so that it would perform properly. From the standpoint of the game world inhabitants, the weapon suddenly and inexplicably went from a fiery engine of instant death to an average gun. Game worlds can also experience far more radical alterations: entire sections of mechanics can change with a patch or update. Players, of course, know that the changes are made in the code by programmers. But, from the perspective of the hypothetical game world inhabitants, reality suddenly changes without any warning or explanation.

Now imagine that we live in a world subject to the alterations of a creator—we could suddenly find that our game has been patched or updated and that there are radical differences between yesterday and today. To say that we have not seen such changes in the past would miss the point—after all, the last patch or update could have been long before our time or perhaps this will be the first update or patch. We have no way of knowing whether this is impossible or not—which is, of course, the problem of induction.
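For readers who think in code, the patch analogy can be put as a short sketch (the names and numbers are invented for illustration, not taken from any real game): an in-game observer inductively generalizes from a long run of identical observations, and then a “patch” applied from outside the observer’s world makes the very next observation falsify that generalization.

```python
# A toy illustration of induction failing under a "patch".
# All names and values here are hypothetical.

# The rule that generates the observer's world, pre-patch.
weapon_damage = {"prometheus_lens": 100}

# The observer records a long, perfectly uniform run of observations.
observations = [weapon_damage["prometheus_lens"] for _ in range(1000)]
assert all(d == 100 for d in observations)

# Inductive inference: every observed shot did 100 damage,
# so the observer predicts the next shot will too.
prediction = observations[-1]

# The "patch": reality is rewritten from outside the observer's world.
weapon_damage["prometheus_lens"] = 20  # nerfed

next_shot = weapon_damage["prometheus_lens"]
print(prediction == next_shot)  # prints False
```

The point of the sketch is that nothing in the observer’s evidence distinguishes a world with a stable rule from a world whose rule is about to be patched: the inductive reasoning was as strong as it could be, and the conclusion was still false.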

 


In 2013 Defense Distributed created a working pistol using an $8,000 3D printer. This raised the specter of people printing guns and created quite a stir. The company made the news again in 2018 when Cody Wilson, an anarchist and owner of the company, was the subject of a lawsuit aimed at banning him from selling files for printing guns. As expected, this re-ignited the moral panic of 2013. Most recently, it is alleged that UnitedHealthcare CEO Brian Thompson was killed with a printed pistol and silencer.

While the idea of criminals, terrorists and others printing their own guns might seem alarming, it is important to consider the facts. As has often been pointed out, a 3D printer capable of making a functioning gun costs about $5,000 on the low end. By comparison, an AR-15 costs between $800 and $1,200, while decent 9mm pistols are in the $400-$700 range. As such, 3D printing a gun does not make much financial sense unless a person is making guns in bulk. If a person wants a gun, they can easily buy several good guns for less than the cost of the printer.

A second important point is that the most basic printed gun is not much of a gun: it is a single shot, low caliber weapon. While it could hurt or kill a person, it would be almost useless for someone intending to engage in a mass shooting and probably not very useful in most criminal endeavors. A criminal or terrorist would be foolish to choose such a weapon over a normal gun. While better guns can be printed, as the shooting of Thompson seems to illustrate, they are not as good as a manufactured firearm.

One reasonable reply to this view is to note that there are people who cannot legally own guns but who can own a 3D printer. These people, the argument goes, could print guns to commit their misdeeds. The easy and obvious reply is that a person willing to break the law to illegally possess a printed gun (and use it in crimes) can easily acquire a manufactured gun for less than the cost of the printer.

It can be countered that there are, for whatever reason, people who want an illegal gun but are unable or unwilling to buy a manufactured gun illegally. For them, the printed gun would be their only option. But guns can be made using legal hardware readily available at a hardware store. This sort of improvised gun (often called a “zip gun”) is easy to make. Directions for these weapons are readily available on the internet and the parts are cheap. For those who cannot acquire bullets, there are even plans to make pneumatic weapons. Printing a gun just automates the process of making a homemade gun at a relatively high cost. So, the moral panic over the printed gun is fundamentally misguided: it is just a technological variant of the worry that bad people will make guns at home. And the reality is that the more sensible worry is that bad people will just buy or steal manufactured guns.

While people do make their own guns, people prefer manufactured guns when engaging in crimes and terrorist attacks for obvious reasons. Thus, being worried about the threat posed by 3D printers and gun plans is like being worried about hardware stores and plans for zip guns. While people can use them to make weapons, people are more likely to use them for legitimate purposes and get their weapons some other way, such as buying or stealing them.

One could persist in arguing that the 3D printed gun could still be the only option for some terrorists. But I suspect they would forgo making homemade guns in favor of homemade bombs. After all, a homemade bomb is far more effective than a homemade gun for terrorism. As such, there seems to be little reason to be worried about people printing guns to commit crimes or carry out terrorist attacks. Manufactured guns and more destructive weapons are readily available to everyone in the United States, so bans on printing guns or their plans would not make us any safer in terms of crime and terrorism. That said, a concern does remain.

While printing a gun to bypass the law makes little sense, there is the reasonable concern that people will print guns to bypass metal detectors. While the stock printed gun uses a metal firing pin, it would be easy enough to get this through security. The rounds would, of course, pose a bit of a challenge—although non-metallic casings and bullets can be made. With such a gun, a would-be assassin could get into a government building, or a would-be terrorist could get onto a plane. Or so one might think.

While this is a matter of concern, there are two points worth noting. First, as mentioned above, the stock printed gun is a single-shot, low-caliber weapon, which limits the damage a person can do with it. Second, while the gun is plastic, it is not invisible. It can be found by inspection and would show up on an X-ray or body scan. As such, the threat posed by such guns is low. There is also the fact that one does not need a 3D printer to make a gun that can get past a metal detector.

While the printers available to most people cannot create high-quality weapons, there is the concern that advances will allow the affordable production of effective firearms. For example, a low-cost home 3D printer that could produce a fully functional assault rifle or submachine gun would be a problem. Of course, the printer would still need to be a cheaper and easier option than just buying or stealing guns, which is incredibly easy in the United States.

As a final point of concern, there is also the matter of the ban on gun plans. Some have argued that making the distribution of these plans illegal violates the First Amendment, which protects the legal right to free expression. There is also the moral right of free expression. In this case, as in other cases, it is a matter of weighing the harms of the expression against the harm inflicted by restricting it. Given the above arguments, the threat presented by printable guns does not warrant the restriction of the freedom of expression. As such, outlawing such plans would be immoral. To use an analogy, it would be like banning recipes for unhealthy foods and guides on how to make cigarettes when both are readily available for purchase everywhere in the United States.


Plato argued philosophers should be kings, based on the idea that ruling was best done by those with knowledge. While having a philosophy professor running the show might not be the best idea, it makes sense to think intelligence is an important trait for good leaders. After all, good leadership requires making good decisions and intelligence can help here.

As might be expected, there is evidence for this view: there is a strong correlation between perceived leadership effectiveness and intelligence. Interestingly, the correlation is positive only up to an IQ of about 120; above that, the leader is perceived as less effective. There are, of course, questions about IQ as a measure of intelligence, but let us set those aside for this discussion.

It is tempting to embrace the stereotype of the bumbling and ineffective intellectual and think that these higher IQ leaders are bad at leading because of their intelligence. To use a fictional example, consider the Star Trek episode “The Galileo Seven.” In this episode, Spock and several crewmembers from the Enterprise crash on a planet and are beset by hostile natives. In the course of the episode, Spock uses his logic and intelligence to make decisions—but fails as a leader until he takes a desperate gamble to save everyone. The same, one might argue, can happen in the real world: a leader’s intelligence can lead them astray when they try to lead. To use a real-world example, Jimmy Carter was often characterized in this manner. He was an intelligent (and compassionate) person, but many claimed he was a poor leader because he overthought things.

While this explanation has some appeal, especially in a political and social climate that is savagely anti-intellectual and anti-expert, it does not hold up to scrutiny. While there are intelligent leaders who are bad at leading, high IQ leaders are generally perceived as performing worse than they actually do. As such, the problem is more one of the perception of leadership than of leadership itself.

It could be objected that this perception problem is a problem of leadership because a good leader would ensure that their leadership was properly perceived. On the one hand, this objection does have appeal because a key part of leadership is getting people to follow and shaping perceptions is important. On the other hand, it could be argued that the fault lies in the followers and the responsibility of learning how to perceive reality accurately lies with them.

In many ways, this challenge is like that faced by educators. A very intelligent teacher presenting difficult material to students who do not understand might be perceived as a bad teacher because of the students’ own ignorance. In contrast, a less intelligent teacher presenting simple material might be seen as a very good teacher (especially if the students get good grades). In the education scenario, one could blame the students as they should put in more effort to understand and in doing so would realize that the teacher knows their stuff. Of course, one could also blame the teacher: their job is not to show off their intelligence before uncaring students, but to teach them. As such, a good teacher must develop the skills needed to win the attention of students and the ability to guide them from ignorance to knowledge. In the history of education, the pendulum of perceived responsibility tends to swing between these two points depending on the dominant educational theory (and politics) of the day.

One approach is to take the middle-ground and argue that both intelligent leaders and their followers need to improve. That is, the followers should learn to assess leadership better and the high IQ leaders need to develop ways to connect to their followers and present themselves in a way that is not perceived as ineffective. This might, perhaps, involve dumbing things down. Or, more charitably, improving their explanations.

Another approach is to put more of the burden on leaders or on the followers, which harkens back to the education analogy—the tendency is towards the extremes rather than the middle ground. This leads to interesting questions about the responsibilities of leaders and followers. Since the leader is in the position of authority and more should be expected of them, it could be argued that the leader is responsible for ensuring that the followers perceive their leadership effectiveness accurately. But, going back to the teaching analogy, it is unfair to put all the burden on a teacher for making students learn, and likewise for leaders. As such, the middle-ground approach is perhaps the fairest: high IQ leaders, like high IQ teachers, need to ensure that they are understood. But followers, like students, must also assume responsibility for trying to understand.

 

The received wisdom is that when Americans buy vehicles, they consider gas mileage when gas prices are high and mostly ignore it when gas prices are low. As this is being written, gas prices are relatively low and gas mileage concerns are probably low on the list for most buyers. As such, it is not surprising that the Trump administration has decided to lower the fuel efficiency standards set by the Biden administration. This is consistent with the Trump administration’s approach of trying to undo what Biden did, primarily because it was done by Biden. Trump took a similar approach to the Obama administration in his first term.

When the Trump administration did the same thing in its first term, it said the standards were “wrong” and were set as a matter of politics. One plausible economic reason to oppose fuel efficiency standards for cars and light trucks is that more efficient vehicles also cost more. This economic argument can be retooled into a moral argument: saving consumers money is the right thing to do. But there is also an economic argument in favor of greater fuel efficiency.

While gas prices can vary greatly, increased fuel efficiency will offset increased vehicle costs and result in the consumer saving money. As such, the long-term economic argument favors fuel efficiency. As before, this can be retooled into a moral argument that saving Americans money is a good thing. But consumers saving money would seem to mean lower profits for the fossil fuel industry.

If, for example, an efficient vehicle saves me $4,000 in fuel costs over its life, then that is $4,000 less for the fossil fuel industry. While few would shed tears over lost profits for the industry executives, the broader impact must also be considered. While the executives reap the most benefits, the fossil fuel industry also includes the people working at gas stations and in the production and distribution of the fuel. If the harm done to these people outweighs the good done for the consumers, then increased fuel efficiency would, on utilitarian grounds, seem to be wrong. But it seems unlikely that the savings to consumers would cause more harm than good. After all, if we compare the benefit of the average American saving money to the harm of a microscopic loss of profit for fossil fuel CEOs, then efficiency seems to be the right choice. In addition to the economic concerns and the associated ethical worries, there are also concerns about health.

While the Trump administration does not seem to care about the harms of pollution, about 50,000 deaths each year result from the air pollution caused by traffic. There are also many non-lethal health impacts of this pollution, such as asthma. Increased fuel efficiency means vehicles burn less fuel, thus reducing the air pollution they produce per mile. Because of this, increasing fuel efficiency will reduce deaths and illnesses caused by air pollution. This health argument can be retooled easily into a moral argument: increasing fuel efficiency reduces pollution deaths and illness, and, on utilitarian grounds, this would be morally good. But this argument only works with those who care about the lives and health of others. That is, it should work with people who profess to be pro-life. But it will not, for the usual and obvious reasons.

It is reasonable to ask about how significant the reduction in deaths and illness might be. Arguments can also be made to try to show that the reduction in pollution would not be significant enough to justify increasing fuel efficiency on these grounds. It also should be noted that we, as a people, tolerate roughly 40,000 vehicle deaths per year. As such, continuing to tolerate deaths from air pollution is also an option. Tolerating deaths and illness for convenience and economic reasons is as American as apple pie.

For those not swayed by health concerns, there are national security and economic arguments that have been advanced for increasing fuel efficiency and they can still be applied today. One argument is that increased fuel efficiency will reduce our dependence on foreign oil and make us safer. This security argument can also be presented as a moral argument based on the good consequences of increased security.

Another argument is based on the claim that buying foreign oil increases our trade deficit and this is economically harmful to the United States. Because of the negative consequences, this argument can also be refit as a moral argument in favor of increasing fuel efficiency. Given the Trump administration’s professed obsession with national security and trade deficits, these arguments should be appealing to them. But they are not.

Given the above arguments, there are excellent reasons to maintain the goal of increasing the fuel efficiency of cars and light trucks. While there are some reasons not to do so, such as helping the fuel industry increase profits, this would be the wrong choice.

 

I was asked to share a link to a post by another philosopher:

 

Written by Tracy Llanera, Associate Professor of Philosophy at the University of Connecticut.

Everything doesn’t happen for a reason

https://iai.tv/articles/everything-doesnt-happen-for-a-reason-auid-3073?_auid=2020

“Everything happens for a reason,” “it’s meant to be,” “it is what it is.” These clichés express an increasingly popular form of Stoic fatalism. The underlying idea is that “Reality” just is a certain way, determined by God or physics. This superficially tough realism comforts us by absolving us of responsibility: whatever happens was bound to happen. But this makes it dangerous, argues Tracy Llanera. It leads to resigned inaction in the face of geopolitical strife, injustice, and troubles in our personal lives. Instead, we must recognize that there is no higher being responsible for us: we must take responsibility for each other and the world we live in.


The United States has been waging a war on drugs and the drugs are winning: in 2016, 63,000 people died from drug overdoses. The path from prescription opioids to heroin has resulted in over 15,000 deaths from heroin overdoses. The addition of fentanyl made things even worse.

Because of slowly shifting attitudes and the fact that the opioid epidemic hit white Americans and cut across our economic classes, there was increased interest in treating addiction as a medical issue. This change is long overdue and could help provide some solutions to drug abuse.

One approach to reducing deaths has been safe injection facilities. A safe injection facility, as the name states, is a place where people can safely inject drugs under the supervision of people trained and equipped to deal with overdoses. These sites also provide clean needles, reducing the risk of infection and disease. Looked at from a legal viewpoint, these sites are problematic: they enable illegal activity, although the intention is to mitigate rather than contribute to the harms of drug abuse. While the legality of such facilities is a matter for law makers and judges, they also raise an ethical issue.

As with most large-scale social ills, a good starting point in the moral discussion is utilitarianism. This is the view that the morality of an action is determined by weighing the positive value it generates against the negative value for the morally relevant beings. An action that creates more positive value than negative value would be good; one that did the opposite would be evil. Bentham and Mill are two famous examples of utilitarians.

There are numerous positive benefits to injection facilities. Because trained people are present to deal with overdoses, these facilities reduce overdose deaths. For example, there were 35% fewer fatal overdoses in the area around a Canadian injection facility after it opened. In contrast, other methods of addressing overdose saw only a 9.3% reduction in overdose deaths. While more statistical data is needed, this does point towards the effectiveness of the facilities. For folks who profess to be pro-life, supporting such facilities should be an easy and obvious choice.

Because the facilities provide clean needles, they reduce the occurrence of infections, and this saves the community money as sick drug addicts often end up being treated at public expense. Clean needles are much cheaper than emergency room visits.

If all the facilities did was provide needles and try to keep addicts from dying, then it would be reasonable to argue that they are just bailing out a sinking boat rather than plugging the holes. Fortunately, such facilities also refer their visitors to addiction treatment and some people manage to beat their addiction.

While significant statistical data is still needed, an analysis indicates that each dollar spent on injection facilities would save $2.33 in medical, law enforcement and other costs. From an economic and health standpoint, these are significant positive factors and help make a strong moral case for injection facilities. However, proper assessment of the matter requires considering the negative aspects as well.

One point of concern is that the money spent on injection facilities could be better spent in other ways directly aimed at reducing drug use. Or perhaps on other things, such as education or community infrastructure. This is a reasonable point, and a utilitarian must be open to the possibility that these alternatives would create more positive value. While this is mainly a matter of an assessment of worth, there are also empirical factors that can be objectively assessed, such as the financial return on the investment. Given the above, injection facilities do seem to be worth the cost, though future evidence could show otherwise.

Another point of concern is that although injection facilities refer people to treatment programs, they enable people to use drugs. It could be argued that this helps perpetuate their addiction. The easy and obvious reply is that people would still use drugs without such facilities; they would just be more likely to die, more likely to get sick, and less likely to enroll in addiction treatment programs. So, those who care about other people should support these facilities. Those who favor human suffering should oppose them.

A third matter of moral concern is that, as noted above, injection facilities enable illegal activity. It could be argued that this might damage the rule of law and have negative consequences that arise when laws are ignored. The easy and obvious counter is that the laws should be changed as treating drug use as a crime rather than a health issue has proven to be a costly disaster. Even if the laws were not changed, it can be argued that morality trumps the law. After all, if people should obey the law because it is the right thing to do, then unjust or immoral laws would be self-undercutting. The cynical might also note that the rule of law has been openly shown to be a lie and to allow people to suffer for this delusion would be a grave moral mistake.

A final point is that the utilitarian approach could be rejected in favor of another moral theory. There are many other approaches to ethics. For example, under some moral theories actions are inherently good or bad. On such a view, enabling drug use could be regarded as wrong, even if the consequences were positive. While this sort of view can provide the satisfaction of being among the righteous, it can impose a high cost on others, such as those dying from overdoses. But to be fair to these moral theories, they also provide the foundation for moral arguments against views that terrible means can be justified by the ends.

Considering the above arguments, while there are some concerns about the ethics of aiding people in using drugs, the strongest moral case favors injection facilities. As such, the laws should be changed to allow them to operate legally.

 

While most of the earth’s surface is covered in water, there are ever increasing water shortages. One cause is obvious: the human population is increasing and the same amount of water is being spread among an increasing number of people. So, there is less water per person as our population increases.

Water is also being used in more ways than before, and industrializing countries have increased their water use. To illustrate, AI, manufacturing and agriculture use massive amounts of water, often in places that are ill-suited to such activity. In some cases, water can be reclaimed and re-used, but not always.  

It is not just the amount of water that matters, but what it can be safely used for. As we contaminate water, we decrease the usable water supply. In some cases, we transform it from a resource to a waste that must be sealed away. Industrial chemicals, fertilizers, and even radioactivity are examples of water contaminants. Fracking, for example, contaminates water—even when it is done properly. While contaminated water can sometimes be re-used, it is usually unfit for human consumption. While it can be argued that contamination is limited and the amount of water vast (“the solution to pollution is dilution”), the earth’s water is obviously still finite. That means that as water is contaminated, the amount of usable water is reduced. If this goes on long enough and the water is not decontaminated, the effects will be significant. While worldwide contamination is of concern, what matters to most people is not the total available water, but what is available to them. In addition to contamination, there is also the impact of climate.

While some deny climate change or the role of humans in the process, it is well-established historical fact that the climate does change and the ruins of ancient cities attest to this. In these cases, it is the location of water that matters and shifts in climate (whatever the cause) can create zones of shortage. This is happening today, just as it happened in the past. While the total water on the earth is not really impacted by climate change, the location and quality of the water is affected. For example, while drought in one area does not mean that the earth has less overall water, it does mean that the people living there have less water. Climate change can also cause contamination. For example, my adopted state of Florida is plagued by blooms of toxic algae which might be impacted by the changing climate. While some might taunt those concerned with this for being lake huggers, these outbreaks impact what matters most to the “practical folk”, namely money. Florida, after all, generates revenue from tourism and few want to travel to look at green slime. There is also the concern with the water supply as green slime is not safe to drink. While it is possible to continue the litany of water worries, the above should suffice to show that water shortages are a concern. This raises the question of how to deal with the problem.

Environmentalists have been arguing for years that the solution is to reduce pollution and address climate change. While a reduction of pollution has been a general trend in the United States (thanks in part to Richard Nixon creating the EPA), the current political environment favors an increase in pollution and a decrease in regulation. The moral value behind this view is that environmental costs should be shifted from those who profit from causing damage to those impacted by the damage. For example, rolling back regulations on what companies can dump into the water reduces their costs, but imposes health costs on those who drink contaminated water. The principle of fairness would seem to require that those who make the profit also pay the cost, but politicians are very selective in their concerns about fairness. Because of the current political climate, we should expect an increase in water contamination.

One controversial solution is to recycle waste water, especially sewage, so that it can be used as potable water. While recycling always involves some loss, this would allow cities to address water shortages by reusing their water. It would also have environmental benefits, if the waste was dealt with properly (and, interestingly, sewage can provide valuable raw materials).

One major obstacle is the cost, as recycling water for human consumption requires infrastructure. However, this cost can be worth it in the face of water shortages. It is, after all, probably cheaper and more convenient to recycle water than to transport water (and that water must obviously come from somewhere).

Another major obstacle is psychological. Many people find the idea of drinking water that was recycled from sewage distasteful, even if the recycled water is cleaner than the water they currently consume. To be fair, there are real concerns about ensuring the water is properly treated, and improperly recycled sewage could contain harmful microbes or chemicals. But these are problems that can be (and have been) addressed so that recycled sewage is no riskier than a conventional water supply (and perhaps less so in many places).

Even when people accept treated water as safe, the distaste problem remains because some think that drinking water that was recently sewage is gross, even though the water is pure and safe to drink. As such, simply proving it is safe will not solve this psychological problem.

This is analogous to proposals to use processed insects as a food source. Even if the food is indistinguishable from “normal” food, clean, healthy and nutritious, many people think this is gross. This includes people who regularly devour parts of animal corpses (better known as “meat”).

Since this is a problem of feeling rather than reason, the solution would need to focus on changing how people feel about recycled water so they can reason about it. One possible approach is to tell the story of water in general. With a little reflection, people understand that tap water has been recycled countless times. Any water you recently drank was most likely filtered through the kidneys of many creatures over the millennia and probably passed through many humans. It might have even passed through you at one point. As such, all the water we consume is recycled already and was almost certainly disgusting (vulture vomit, for example) at one point. However, the process of cleaning it works: the water is then fine to drink. As such, if a person is willing to drink any water, then they should also be willing to drink properly recycled water. Water that was just recycled properly from sewage is no more disgusting than water that was once part of vulture vomit and is now in your coffee or bottled water.

People can still say that it is proximity that matters. Recycled water was just recently sewage, but their bottled water or coffee has (probably) not been vulture vomit for a long time. From a rational standpoint this difference should be irrelevant: clean water is clean water, regardless of how long it has been clean. Unless one believes in some sort of mystical or metaphysical contamination that is undetectable by empirical means, then the rejection of safe recycled water would be unfounded. However, unfounded and irrational beliefs drive much of politics and human decision making in general, so the practical challenge is to influence people to not be disgusted by recycled water. Some might be won over by other feelings, such as positive feelings about the environment or the survival instinct (recycled water is preferable to no water). Hard core campers and hikers, who have sucked up bog water through a filtration straw, might be the easiest people to win over. But such psychological manipulation goes beyond the scope of philosophy, so I will leave this matter to the experts in that field.