The United States, like all societies, suffers from many ills, including mental illness, homelessness, and drug addiction. There are many ways these problems could be addressed. Unfortunately, the usual approach has been to try to “solve” them through law enforcement and criminalization. I will briefly consider the failures of this approach in each case.

In the 1980s there was a major shift in America’s approach to mental illness: in the name of fiscal savings, the mentally ill were released from the hospitals into the community. One major impact of this was an increase in the number of homeless people. 20-25% of the homeless suffer from severe mental illness, compared to 6% of the entire population. The mentally ill who are homeless, as one might suspect, are usually not treated. People with untreated severe mental illnesses often behave in ways that the public finds problematic, which often leads to their being arrested and imprisoned. Prisons are ill-equipped to deal with the mentally ill and mainly serve to warehouse such people until they serve their sentences. Having a criminal record makes matters worse and makes it more likely they will simply be returned to prison and remain untreated, thus creating a cruel cycle which offers little chance for escape.

The criminalization of mental illness has not solved the problem; it has made it worse. As such, it is a failure if the goal was to help people. It has not helped treat people, and the cost of operating mental health institutions has simply been replaced with the cost of maintaining prisons. There are those who profit from this system, but it costs society dearly.

It is also a moral failure. On utilitarian grounds, it is morally wrong because it has increased rather than decreased unhappiness. For moral systems that focus on obligations to the wellbeing of others (such as the version of Christianity that embraces the parable of the good Samaritan), this approach is also a moral failure. As such, criminalizing mental illness has proven a resounding failure.

While mental illness leads many to the streets, America’s economic system also makes people homeless. Many unhoused people end up that way due to being bankrupted by medical expenses. Since the homeless have no homes, they tend to live and sleep in public areas. As would be expected, the presence of the homeless in such areas is seen as a problem and some cities try to “solve” the problem by criminalizing such things as lying down or camping in public areas. The ordinances typically impose fines, but since the homeless generally cannot afford to pay fines, they usually end up in the criminal justice system, which is often a pathway to prison. A criminal record only makes matters worse for the homeless and increases the chance they will remain homeless. This means that they are likely to be arrested again for breaking the ordinances that target the homeless, thus creating another cruel cycle.

As might be suspected, this approach to homelessness comes with a significant monetary cost. For example, Denver spent over $750,000 enforcing its homeless targeting ordinances. Other cities pay comparable costs, making the criminalization of homelessness costly to everyone. There have been some efforts to address homelessness through other means, such as providing affordable housing, but dealing with the underlying causes is challenging given the values of those making the decisions.

Once again, trying to solve a problem through criminalization proves to be a terrible approach if the goal is to solve a social problem. Even on the heartless grounds of saving money, it fails, as the cost of policing the homeless seems to consume whatever savings might be accrued from pushing people into the streets. This, of course, could be countered. One might be able to show that the monetary cost of strategies aimed at getting the homeless into homes would exceed the cost of policing the homeless on the streets. After all, the politicians could lower the cost significantly simply by not policing the unhoused who do not commit serious crimes, such as robbery. This, however, does still leave the homeless without homes, and this can impose other economic costs, such as medical expenses paid for by the public. This could be countered by arguing that the homeless should be completely abandoned as this would yield financial savings.

Such abandonment does, however, run into a moral challenge. The harm suffered by the homeless (and society) would seem to make a compelling utilitarian moral argument in favor of approaches that aim at getting the homeless back into society. Moral views that accept that people should love one another also enjoin us to not abandon our fellows. In any case, criminalizing homelessness is no solution, financial or moral.

Drug addiction is another problem that has largely been addressed by criminalization. About half of the people in federal prisons and 16% of those in state prison are there for drug offenses. This is the result of the war on drugs, which endeavored to solve the drug problem by arresting our way out of it. Since the negative consequences of this approach fell mainly on minorities and the poor, there was little interest among politicians in taking a different approach. However, as prison populations swelled and public attitudes towards drug use changed, there was some talk of reconsidering this war. The biggest change in the public discussion arose from the opioid epidemic, a drug epidemic that goes beyond ravaging the poor and minorities to impacting the white middle class.  This has resulted in some changes in the approach to the problem, such as the police offering free treatment for drug users rather than arresting them. It does remain to be seen if these changes will be lasting and widespread. However, this is certainly a positive change to a failed approach to the health issue of drugs.

While some for-profit prisons have done well for their shareholders in the war on drugs, the financial cost to society has been substantial. Criminalization of addiction has also failed to reduce addiction. As such, this approach has proved to be a practical failure, unless you are a shareholder in a private prison or otherwise profit from this situation.

As above, there are also moral concerns about this approach in terms of the harms being inflicted on individuals and society as a whole. Fortunately, there is a chance that America will rethink the war on drugs (in which we are the enemy) and recast it as a health issue. This not only has the potential to be far more of a practical success; it also would seem to be the right thing to do morally. Transforming people in need into criminals cannot solve the ills of society, addressing those needs can.

Back in 2016 Martin Shkreli became the villain of drug pricing when he increased the price of a $13.50 pill to $750. While buying up smaller drug companies and increasing the prices of their products is a standard profit-making venture, the scale of the increase and Shkreli’s attitude drew attention to this incident. Unfortunately, while the Shkreli episode briefly caught the public’s attention, drug pricing is an ongoing problem.

For consumers, the main problem is that drugs are priced extremely high, sometimes high enough to bankrupt patients. In the face of public criticism, drug companies attempt to justify the high prices. One reason they give is that they need to charge these prices to pay the R&D costs of the drugs. While a company does have the right to pass on the cost of drug development, the facts tell another story about the pricing of drugs.

First, about 38% of the basic scientific research was funded by taxpayer money. Thus, the public pays twice: once in taxes and again for the drugs. This, of course, leaves a significant legitimate area of expenses for companies, but hardly enough to warrant absurdly high prices. As the federal budget for this research is cut, companies will be able to make a better argument based on the cost of research, as they will need to spend more of their profits on research.

Second, most large drug companies spend almost twice as much on promotion and marketing as they do on R&D. While these are legitimate business expenses, this undercuts using R&D expenses to justify excessive drug prices. Saying that pills are expensive because of the cost of marketing pills would not be a very effective strategy. There is also the issue of the ethics of advertising drugs, which is another matter entirely.

Third, many “new” drugs are just slightly modified old drugs. Common examples include combining two older drugs to create a “new” drug, changing the delivery method (from an injectable to a pill, for example) or altering the release time. In many cases, the government will grant a new patent for these minor tweaks, and this will grant the company up to a 20-year monopoly on the product, preventing competition. This practice, though obviously legal, is sketchy. To use an analogy, imagine a company holding the patents on a wheel and on an axle. Then, when those patents expired, they patented wheel + axle as a “new” invention. That would be absurd.

Companies also try other approaches to justify the high cost, such as arguing that the drugs treat serious conditions or can save money by avoiding a more expensive treatment. While these arguments do have some appeal, it is morally problematic to argue that the price of a drug should be based on the seriousness of the condition it treats. This seems like a protection scheme or coercion amounting to “pay what we want, or you die.” The money-saving argument is less odious but is still problematic. By this logic, car companies should be able to charge much more for safety features since they protect people from expensive injuries. It is, of course, reasonable to make a profit on products that provide significant benefits, but there need to be moral limits to the profits.

The obvious counter to my approach is to argue that drug prices should be set by the free market: if people are willing to pay large sums for drugs, then the drug companies should be free to charge those prices. After all, companies like Apple and Porsche sell expensive products without (generally) being demonized for making profits.

The easy response is that luxury cars and MacBooks are optional luxuries that a person can easily do without, and there are many cheaper (and better) alternatives. However, drug companies sell drugs that are necessary for a person’s health and even survival. They are usually not optional products. There is also the fact that drug companies enjoy patent protection that precludes effective competition. While Apple does hold patents on its devices, there are many competitors. For example, if you don’t want to pay a premium for an Apple computer, you have your pick of thousands of options. But, if you need certain medications, your options can be much more limited.

While defenders of drug prices laud the free market and decry “government interference”, their ability to charge high prices depends on the “interference” of the state. As noted above, the United States and other governments issue patents to drug companies that grant them exclusive ownership. Without this protection, a company that wanted to charge $750 for a $13.50 pill would find competitors rushing to sell the pill for far less. After all, it would be easy enough for a competitor to analyze a drug and produce it. By accepting the patent system, the drug companies accept that the state has a right to engage in legal regulation of the drug industry, to replace the invisible hand with the very visible hand of the state. Once this is accepted, the door is opened to additional regulation on the grounds that the state will protect the company’s property using taxpayer money in return for the company agreeing not to engage in harmful pricing of drugs. Roughly put, if the drug companies expect people to obey the social contract with the state, they also need to operate within the social contract. Companies could, of course, push for a truly free market: they would be free to charge whatever they want for drugs without state interference, but there would also be no state interference with the free-market activities of their competitors when they duplicate the high-priced drugs and start undercutting the prices. But, as always, companies want a free market when freedom benefits them and a nanny state when protection benefits them.

In closing, if the drug companies want to keep the patent protection they need for high drug prices, they must be willing to operate within the social contract. After all, citizens should not be imposed upon to fund the protection of the people who are, some might claim, robbing them.

 

Modern agriculture deserves praise for the good it does. Food is plentiful, relatively cheap and easy to acquire. Instead of having to struggle with raising crops and livestock or hunting and gathering, many Americans can go to the grocery store and get the food they need to stay alive. However, as with all things, there is a price.

The modern agricultural complex is highly centralized and industrialized, which has advantages and disadvantages. There are also the harms of practices aimed at maximizing profits. While there are many ways to maximize profits, two common ones are to pay the lowest wages possible and to shift costs to others. I will look, briefly, at one area of cost shifting: the widespread use of antibiotics in meat production.

While most think of antibiotics as a means of treating diseases, healthy food animals are routinely given antibiotics. One reason is to prevent infections: factory farming techniques, as might be imagined, vastly increase the chances of a disease spreading. Antibiotics, it is claimed, can help reduce the risk of bacterial infections (antibiotics are useless against viruses). A second reason is that antibiotics increase the growth rate of healthy animals, allowing them to pack on more meat in less time and time is money. These uses allow the industry to continue factory farming and maintain high productivity, which initially seems laudable. The problem is, however, the use of antibiotics comes with a high price that is paid for by everyone else.

Eric Schlosser wrote “A Safer Food Future, Now”, which appeared in the May 2016 issue of Consumer Reports. In this article, he noted that this practice has contributed significantly to the rise of antibiotic resistant bacteria. Each year, about two million Americans are infected with resistant strains and about 20,000 people die. The healthcare cost is about $20 billion. To be fair, the agricultural industry is not the only contributor to this problem: improper use of antibiotics in humans has also added to this problem. That said, the agricultural use of antibiotics accounts for about 75% of all antibiotic usage in the United States, thus converting the factory farms into farms for resistant bacteria.

The harmful consequences of this antibiotic use have been known for years and there have been attempts to address this through legislation. It is no surprise that our elected leaders have failed to act. One likely explanation is the lobbying power of corporations. In the United States, both parties prioritize profits over the people. But it could be contended that lawmakers are ignorant of the harms, doubt there are harms from antibiotics or honestly believe that the harms arising are outweighed by the benefits. That is, the lawmakers have credible reasons other than the money they are paid to do the will of the wealthy. This is a factual matter, but no professional politician who has been swayed by lobbying will attribute her decision to anything other than good intentions.

This matter is one of ethical concern and, like most large-scale ethical matters involving competing interests, is one best approached by utilitarian considerations. On the side of using antibiotics, there is the increased productivity (and profits) of the factory farming system. This allows more and cheaper food to be provided for the population, which can be regarded as pluses. The main reasons to not use the antibiotics, as noted above, are that they contribute to the creation of antibiotic-resistant strains that sicken and kill people. This imposes costs on those who are sickened and killed as well as those who care about them. There are also the monetary costs in the health care system (although the increased revenue can be tagged as a plus for health care providers). In addition to these costs, there are also other social and economic costs, such as lost hours of work. As this indicates, the cost (illness, death, etc.) of the use of the antibiotics is shifted: the industry does not pay these costs; they are paid by everyone else, including other industries.

Using a utilitarian calculation requires weighing the cost to the general population against the profits of the industry and the claimed benefits to the general population. Put roughly, the moral question is whether the improved profits and greater food production outweigh the illness, deaths and costs suffered by the public. Most politicians seem to believe that the answer is “yes.”

If the United States were in a food crisis in which the absence of the increased productivity afforded by antibiotics would cause more suffering and death than their presence, then their use would be morally acceptable. However, this does not seem to be the case. While banning this sort of antibiotic use would decrease productivity (and impact profits), the harm of doing this would seem to be vastly exceeded by the reduction in illness, deaths and health care costs. However, if an objective assessment of the matter showed that the ban on antibiotics would not create more benefits than harm, then it would be reasonable and morally acceptable to continue to use them. This is partially a matter of value (in terms of how the harms and benefits are weighted) and partially an objective matter (in terms of monetary and health costs). I am inclined to agree that the general harm of using the antibiotics exceeds the general benefits, but I could be convinced otherwise by objective data.

 

All professions have their problem members, and the field of medicine is no exception. Fortunately, the percentage of bad doctors is low—but this small percentage can do considerable harm. After all, when your professor is incompetent, you might not learn as much as you should. If your doctor is incompetent, they could kill you.

Back in 2016 Consumer Reports published an article by Rachel Rabkin Peachman covering bad doctors and the difficulty patients face in learning whether a physician is a good doctor or a disaster. Unfortunately, not much has changed since then.

There are three main problems. The first is that there are bad doctors. The article presented numerous examples to add color to the dry statistics, and this includes such tales of terror as doctors molesting patients, doctors removing healthy body parts, and patient deaths due to negligence, impairment or incompetence. These are all obvious moral and professional failings on the part of the doctors, and they should clearly not be engaged in such misdeeds. For more recent examples, John Oliver provides disturbing coverage of the dangers presented by med spas.

The second is that, according to Peachman, the disciplinary actions tend to be rather less than ideal. While doctors should enjoy the protection of due process, the hurdles are, perhaps, too high. There is also the problem that the responses are often very mild. For example, a doctor whose negligence has resulted in the death of patients can be allowed to keep practicing with minor limitations. As another example, a doctor who has engaged in sexual misconduct might continue practicing after a class on ethics and with the requirement that someone else be present when he is seeing patients. In addition to the practical concerns about this, there is also the moral concern that the disciplinary boards are failing to protect patients.

One possible argument against harsher punishments is that there is always a shortage of doctors and taking a doctor out of practice would have worse consequences than allowing a bad doctor to keep practicing. This would be the basis for a utilitarian argument for continuing mild punishments. Crudely put, it is better to have a doctor who might kill a patient or two than no doctor at all because that would result in many more deaths.

This argument does have some appeal. However, there is the factual question of whether the mild punishments do more good than harm. If they do, then one would need to accept that this approach is morally tolerable. If not, then the argument would fail. There is also the response that consequences are not what matters and people should be reprimanded based on their misdeeds and not based on some calculation of utility. This also has some intuitive appeal.

It could also be argued that it should be left to patients to judge if they want to take the risk. If a doctor is known for sexual misdeeds with female patients but is fine with male patients, then a man who has few or no other options might decide that the doctor is his best choice. This leads to the third problem.

The third problem is that it is very difficult for patients to learn about bad doctors. While there is a National Practitioner Data Bank (NPDB), it is off limits to patients and is limited to law enforcement, hospital administration, insurance and a few other groups.

The main argument against allowing public access to the NPDB is based on the premise that it contains inaccurate information which could be harmful to innocent doctors. This makes it similar to the credit report data which is notorious for containing harmful inaccuracies that can plague people.

While the possibility of incorrect data is a matter of concern, that premise best supports the conclusion that the NPDB should be reviewed regularly to ensure that the information is accurate. While perfect accuracy is not possible, surely the information can meet a reasonable standard of accuracy. This could be aided by providing robust tools for doctors to inform those running the NPDB of errors and to inform doctors about the content of their files. As such, the error argument is easily defeated.

Patients do have some access to data about doctors, but there are many barriers in place. In some cases, there is a financial cost to access data. In almost all cases, the patient will need to grind through lengthy documents and penetrate the code of legal language. There is also the fact that this data is often incomplete and inaccurate.  While it could be argued that a responsible patient would expend the resources needed to research a doctor, this is an unreasonable request, and a patient should not need to do all this just to know that the doctor is competent. One reason for this is that someone seeking a doctor is likely to be sick or injured and expecting them to add on the burden of a research project is unreasonable. Also, a legitimate role of the state is to protect citizens from harm and having a clear means of identifying bad doctors would seem to fall within this.

Given the above, it seems reasonable to accept that a patient has the right to know about her doctor’s competence and should have an easy means of acquiring accurate information. This enables a patient to make an informed choice about her physician without facing an undue burden. This will also help the profession as good doctors will attract more patients and bad doctors will have a greater incentive to improve their practice.

As mentioned in my previous essay, Isis (my Siberian husky) fell victim to the ravages of time. Once a sprinting blur of fur, she was reduced to sauntering. Still, lesser beasts feared her (and to a husky, all creatures are lesser beasts) and the sun was warm in the backyard, so her life was good even at the end.  

Faced with the challenge of keeping her healthy and happy, I relied a great deal on what I learned as a philosopher. As noted in the preceding essay, my philosophical skills kept me from falling victim to the post hoc fallacy and the fallacy of anecdotal evidence. In this essay I will focus on two basic, but extremely useful methods of causal reasoning.

One of the most useful tools for causal reasoning is the method of difference. This method was famously developed by the philosopher John Stuart Mill and has been a staple in critical thinking classes since before my time. The purpose of the method is figuring out the cause of an effect, such as a husky suffering from a knuckling paw (a paw that folds over, so the dog is walking on the top of the foot rather than the bottom). The method can also be used to try to sort out the effect of a suspected cause, such as the efficacy of an herbal supplement in treating canine arthritis.

Fortunately, the method is simple. To use it, you need at least two cases: one in which the effect has occurred and one in which it has not. In terms of working out the cause, more cases are better, although more cases of something bad (like arthritis pain) would be undesirable from other standpoints. The two cases can involve the same individual at different times; they need not be different individuals (though the method works in those cases as well). For example, when sorting out Isis’ knuckling problem, the case in which the effect occurred was when Isis was suffering from knuckling and the case in which it did not was when Isis was not suffering from this problem. I also investigated other cases in which dogs suffered from knuckling issues and when they did not.

The cases in which the effect is present and those in which it is absent are then compared to determine the difference between the cases. The goal is to sort out which factor or factors made the difference. When doing this, it is important to keep in mind that it is easy to fall victim to the post hoc fallacy and conclude without adequate evidence that a difference is a cause because the effect occurred after that difference. Avoiding this mistake requires considering that the “connection” between the suspected cause and the effect might be a coincidence. For example, Isis ate some peanut butter the day she started knuckling, but it is unlikely that had any effect, especially since she had eaten peanut butter regularly ever since we became a pack. It is also important to consider that an alleged cause might be an effect caused by a factor that is also producing the effect one is concerned about. For example, a person might think that a dog’s limping is causing knuckling, but they might both be effects of a third factor, such as arthritis or nerve damage.

You must also keep in mind the possibility of reversed causation, which is when the alleged cause is the effect. For example, a person might think that limping is causing knuckling, but it might turn out that the knuckling is the cause of the limping.

In some cases, sorting out the cause can be easy. For example, if a dog slips and falls, then has trouble walking, the most likely cause is the fall. But it could still be something else. In other cases, sorting out the cause can be difficult. It might be because there are many possible causal factors. For example, knuckling can be caused by many things (even Lyme disease). It might also be because there are no clear differences (such as when a dog starts limping with no clear preceding event). One useful approach is to do research using reliable sources. Another, which is a good idea with pet problems, is to refer to an expert, such as a vet. Medical tests, for example, are useful for sorting out the differences and finding a likely cause.
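The comparison at the heart of the method of difference can be sketched in a few lines of code. This is an illustrative toy, not anything from the essay: each case is represented as a set of present factors, and the factor names below are invented examples.

```python
def method_of_difference(effect_case, control_case):
    """Return the factors present when the effect occurred but absent
    when it did not. These are only candidate causes: a difference may
    be a coincidence, a joint effect of a hidden factor, or a case of
    reversed causation."""
    return effect_case - control_case

# Hypothetical cases: the same dog on two different days.
knuckling_day = {"peanut butter", "long walk", "slipped on stairs"}
normal_day = {"peanut butter", "long walk"}

candidates = method_of_difference(knuckling_day, normal_day)
print(candidates)  # {'slipped on stairs'}
```

The point of the sketch is that the mechanical step, finding the differing factors, is trivial; the hard work lies in judging which candidate is a genuine cause rather than a coincidence.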

The same basic method can also be used in reverse, such as determining the effectiveness of a dietary supplement for treating canine arthritis. For example, when Isis started slowing down and showing signs of soreness, I started giving her senior dog food, glucosamine and extra protein. What followed was an improvement in her mobility and the absence of soreness. While the change might have been a mere coincidence, it is reasonable to consider that one or more of these factors helped her. After all, there is some scientific evidence that diet can have an influence on these things. From a practical standpoint, I decided to keep to this plan since the cost of the extras is low, they have no harmful side effects, and there is some indication that they work. I did consider that I could be wrong. Fortunately, I did have good evidence that the steroids Isis was prescribed worked as she made a remarkable improvement after starting them and there is solid scientific evidence that they are effective at treating pain and inflammation. As such, it is rational to accept that the steroids were the cause of her improvement, though this could also be a coincidence.

The second method is the method of agreement. Like difference, this requires at least two cases. Unlike difference, the effect is present in all cases. In this method, the cases exhibiting the effect (such as knuckling) are considered to find a common thread in all the cases. For example, each incident of knuckling would be examined to determine what they all have in common. The common factor (or factors) that is the most plausible cause of the effect is what should be taken as the likely cause. As with the method of difference, it is important to consider such factors as coincidence to avoid falling into a post hoc fallacy.

The method of agreement is most often used to form a hypothesis about a likely cause. The next step is, if possible, to apply the method of difference by comparing similar cases in which the effect did not occur. Roughly put, the approach would be to ask what all the cases have in common, then determine if that common factor is absent in cases in which the effect is also absent. For example, a person investigating knuckling might begin by considering what all the knuckling cases have in common and then see if that common factor is absent in cases in which knuckling did not occur.
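The joint use of agreement and difference described above can also be sketched in code. Again, this is an invented illustration: the cases exhibiting the effect are intersected to find their common factors, and any common factor that also appears in an effect-free case is then dropped.

```python
from functools import reduce

def method_of_agreement(effect_cases):
    """Return the factors common to every case in which the effect occurred."""
    return reduce(set.intersection, effect_cases)

def joint_method(effect_cases, non_effect_cases):
    """Agreement first, then difference: keep the common factors that are
    absent from every case in which the effect did not occur."""
    common = method_of_agreement(effect_cases)
    seen_without_effect = set().union(*non_effect_cases)
    return common - seen_without_effect

# Hypothetical factor sets for days with and without knuckling.
knuckling_cases = [
    {"old age", "hard run", "nerve issue"},
    {"nerve issue", "slippery floor"},
    {"old age", "nerve issue"},
]
healthy_cases = [{"old age", "hard run"}, {"slippery floor"}]

print(joint_method(knuckling_cases, healthy_cases))  # {'nerve issue'}
```

As with the method of difference, the surviving factor is only a hypothesis; coincidence, hidden common causes, and small sample sizes all remain live concerns.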

One of the main weaknesses of these methods is that they tend to have very small sample sizes, sometimes just one individual, such as my husky. While these methods are quite useful, they can be supplemented by general causal reasoning in the form of experiments and studies, which is the subject of the next essay in this series.

My Siberian husky, Isis, joined the pack in 2004 at the age of one. It took her a little while to realize that my house was now her house. She set out to chew all that could be chewed, presumably as part of some sort of imperative of destruction. Eventually, she came to realize that she was chewing her stuff. More likely, joining me on 16-mile runs wore the chew out of her.

As the years went by, we both slowed down. Eventually, she could no longer run with me (despite my slower pace) and we went on slower adventures. One does not walk a husky; one adventures with a husky. Despite her advanced age, she remained active. After one adventure, she seemed slow and sore. She cried once in pain but then seemed to recover. Then she got worse, requiring a trip to the emergency veterinarian.

The x-rays showed no serious damage, just an indication of the wear and tear of age. She also had some unusual test results, perhaps indicating cancer. Because of her age, the main concern was with her mobility and pain. If she could get about and be happy, then that was what mattered. She was prescribed medications, and a follow-up appointment was scheduled with the regular vet. By then, she had gotten worse in some ways, and her right foot was “knuckling” over, making walking difficult. This is often a sign of nerve issues. She was prescribed steroids and had to go through a washout period before starting the new medicine. As might be imagined, neither of us got much sleep during this time.

For a while the steroids worked and she could go on slow adventures and enjoy basking in the sun while watching the birds and squirrels, willing the squirrels to fall from the tree and into her mouth.

While philosophy is often derided as useless, it was very helpful to me during this time and I decided to write about this usefulness as both a defense of philosophy and, perhaps, as something useful for others who face similar circumstances with an aging canine.

Isis’ emergency visit was focused on pain management and one drug she was prescribed was Carprofen (more infamously known by the name Rimadyl). Carprofen is an NSAID that is supposed to be safer for canines than those designed for humans (like aspirin) and is commonly used to manage arthritis in elderly dogs. Being curious and cautious, I researched all the medications. I ran across forums which included people’s sad and often angry stories about how Carprofen killed their pets. The typical story involved what one would expect: a dog was prescribed Carprofen and then died or was found to have cancer shortly thereafter. I found such stories worrisome and was concerned as I did not want my dog to be killed by her medicine. But I also knew that without medication, she would be in terrible pain and unable to move. I wanted to make the right choice for her and knew this would require making a rational decision.

My regular vet decided to go with the steroid option, one that also has the potential for side effects, and the web offered its own horror stories about it. Once again, it was a matter of choosing between the risks of medication and the consequences of doing without. In addition to my research into medication, I also investigated various other options for treating arthritis and pain in older dogs. She was already on glucosamine (which might not be beneficial, but seems to have no serious side effects), but the web poured forth an abundance of options ranging from acupuncture to herbal remedies. I even ran across the claim that copper bracelets could help pain in dogs. They cannot.

While some alternatives had been subject to scientific investigation, most discussions involved a mix of miracles and horror stories. One person might write glowingly about how an herbal product brought his dog back from death’s door while another might claim that the same product killed his dog. Sorting through all these claims, anecdotes and studies turned out to be a lot of work. Fortunately, I had numerous philosophical tools that helped, specifically for assessing claims of the sort “I gave my dog X, then he got better (or died), so X was the cause.” Knowing about two common fallacies is very useful in these cases.

The first is what is known as Post Hoc Ergo Propter Hoc (“after this, therefore because of this”).  This fallacy has the following form:

 

Premise: A occurs before B.

Conclusion: Therefore, A is the cause of B.

 

This fallacy is committed when it is concluded that one event causes another just because the alleged cause occurred before the alleged effect. More formally, the fallacy involves concluding that A causes or caused B because A occurs before B and there is not sufficient evidence to warrant such a claim.

While cause does precede effect (at least in the normal flow of time), proper causal reasoning involves sorting out whether A occurring before B is just a matter of coincidence or not. In the case of medication involving an old dog, it could be a coincidence that the dog died or was diagnosed with cancer after the medicine was administered. That is, the dog might have died anyway or might have already had cancer. Without a proper investigation, simply assuming that the medication was the cause would be an error. The same holds true for beneficial effects. For example, a dog might go lame after a walk and then recover after being given an herbal supplement. While it would be tempting to attribute the recovery to the herbs, they might have had no effect at all. After all, lameness often goes away on its own or some other factor might have been the cause.

This is not to say that such stories should be rejected out of hand, but they should be approached with due consideration that the reasoning involved is post hoc. In concrete terms, if you are afraid to give your dog medicine she was prescribed because you heard of cases in which a dog had the medicine and then died, you should investigate more (such as talking to your vet) about whether there is a risk of death. As another example, if someone praises an herbal supplement because her dog perked up after taking it, then you should see if there is evidence for this claim beyond the post hoc situation.
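The point that “A before B” can be pure coincidence can be made vivid with a small simulation. The numbers here are invented for illustration (the baseline mortality rate is an assumption, not a veterinary statistic): even if a medicine is causally inert, in a large group of elderly dogs some deaths will follow the prescription simply because elderly dogs die at some baseline rate.

```python
# A toy simulation (invented numbers) of how post hoc stories arise by
# coincidence: the medicine in this model does nothing, yet many deaths
# still occur shortly after it is prescribed.

import random

random.seed(1)

n_dogs = 10_000
baseline_monthly_death_rate = 0.02  # assumed baseline risk for elderly dogs

# Each dog starts the (causally inert) medicine; deaths are baseline only.
deaths_after_medicine = sum(
    random.random() < baseline_monthly_death_rate for _ in range(n_dogs)
)

print(f"{deaths_after_medicine} of {n_dogs} dogs died within a month of "
      "starting the medicine, though it caused none of the deaths")
```

Every one of those owners could truthfully tell a “my dog took the medicine and then died” story; sorting coincidence from causation requires comparing this rate against the rate among similar dogs who did not take the medicine.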

Fortunately, there has been considerable research into medications and treatments that provide a basis for making a rational choice. When considering such data, it is important not to be lured into rejecting data by the seductive power of the Fallacy of Anecdotal Evidence.

This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. The fallacy is considered by some to be a variation on hasty generalization.  It has the following forms:

 

Form One

Premise: Anecdote A is told about a member (or small number of members) of Population P.

Conclusion: Claim C is made about Population P based on Anecdote A.

 

For example, a person might hear anecdotes about dogs that died after taking a prescribed medication and infer that the medicine is likely to kill dogs.

 

Form Two

Premise 1: Reasonable statistical evidence S exists for general claim C.

Premise 2:  Anecdote A is presented that is an exception to or goes against general claim C.

Conclusion: General claim C is rejected.

 

For example, the statistical evidence that glucosamine-chondroitin can treat arthritis is, at best, weak. But a person might tell a story about how their aging husky “was like a new dog” after she started taking the supplement. To accept this as proof that the data is wrong would be to fall for this fallacy. That said, I did give my husky glucosamine-chondroitin because it is affordable, has no serious side effects and might have some benefit. I am fully aware of the data and do not reject it; I gambled that it might have done her some good.

The way to avoid becoming a victim of anecdotal evidence is to seek reliable, objective statistical data about the matter in question (a credible vet would be a good source). This can be a challenge when it comes to treatments for pets. In many cases, there are no adequate studies or trials that provide statistical data and only anecdotal evidence is available. One option is, of course, to investigate the anecdotes and try to do your own statistics. So, if most anecdotes indicate something harmful (or something beneficial) then this would be weak evidence for the claim. In any case, it is wise to approach anecdotes with due care, as a story is not proof.
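Why do forums contain both miracle stories and horror stories about the same product? A short simulation shows how tiny anecdotal samples routinely produce both extremes even when the underlying population rate is fixed. The rates and sample size here are invented for illustration.

```python
# A sketch (invented rates) of why single anecdotes mislead: even if a
# treatment genuinely helps 60% of dogs, small clusters of stories will
# often report "it helped every dog" or "it helped no dog at all."

import random

random.seed(0)

true_help_rate = 0.6   # assumed population rate, for illustration only
sample_size = 3        # a handful of forum stories
trials = 10_000        # how many such story-clusters we simulate

miracles = disasters = 0
for _ in range(trials):
    improved = sum(random.random() < true_help_rate for _ in range(sample_size))
    if improved == sample_size:
        miracles += 1
    elif improved == 0:
        disasters += 1

print(f"'miracle' clusters (all improved): {miracles / trials:.1%}")   # ~0.6^3
print(f"'useless' clusters (none improved): {disasters / trials:.1%}") # ~0.4^3
```

Roughly a fifth of three-story clusters look like miracles and several percent look like total failures, all drawn from the same 60% reality; this is why a large, properly gathered sample beats any collection of vivid stories.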

The American right is partially defined by its embrace of debunked conspiracy theories, such as the big lie about the 2020 election and those involving all things COVID. While some conspiracy theories are intentionally manufactured by those who know they are untrue (such as the 2020 election conspiracy theories), other theories might start with people simply misreading things. For example, consider the claim that there were microchips in the COVID vaccines because of Bill Gates.

The Verge does a step-by-step analysis of how this conspiracy theory evolved, which is an excellent example of how conspiracy claims arise, mutate, and propagate. The simple version is this: in a chat on Reddit, Gates predicted that people would have a digital “passport” of their health records. Some Americans who attended K-12 public schools have already used a paper version of this.  I have my ancient elementary school health records, which I recently consulted to confirm I had received my measles booster as a kid. As this is being written, measles has returned to my adopted state of Florida. The idea of using tattoos to mark people when they are vaccinated has also been suggested as a solution to the problem of medical records in places where record keeping is spotty or non-existent.

Bill Gates’ prediction was picked up by a Swedish website focused on biohacking, which proposed using an implanted chip to store this information. This is not a new idea for biohackers or science fiction, but it was not Gates’ idea. However, the site used the untrue headline, “Bill Gates will use microchip implants to fight coronavirus.” As should surprise no one, the family tree of the conspiracy leads next to my adopted state of Florida.

Pastor Adam Fannin of Jacksonville read the post and uploaded a video to YouTube. The title is “Bill Gates – Microchip Vaccine Implants to fight Coronavirus,” which is an additional untruth on top of the untrue headline from the Swedish site. This idea spread quickly until it reached Roger Stone. The New York Post ran the headline “Roger Stone: Bill Gates may have created coronavirus to microchip people.”

Those familiar with the game of telephone might see this as a dangerous version of it, as each person changes the claim until it bears almost no resemblance to the original. Just as with games of telephone, it is worth considering that people intentionally made changes. In a game of telephone, the intent is to make the final version funny. In the case of conspiracy theories, the goal is to distort the original into the desired straw man. In the case of Bill Gates, it started with the innocuous idea that people would have a digital copy of their health records and ended with the claim that Bill Gates might have created the virus to put chips in people. In addition to showing how conspiracy claims can devolve from innocuous claims, this also provides an excellent example of how conspiracy theories sometimes get it right that we should be angry at someone or something but get the reasons for that anger wrong.

While there is no good evidence for the conspiracy theories about Gates and microchips, it is true that we should be angry over Bill Gates’ COVID wrongdoings. Specifically, Gates used his foundation to impede access to COVID vaccines. This was not a crazy supervillain plan; it was “monopoly medicine.” As such, you should certainly loathe Bill Gates for his immoral actions, but not because of the false conspiracy theories. As an aside, it is absurd that when there are so many real problems and real misdeeds to confront, conspiracy theorists spend so much energy generating and propagating imaginary problems and misdeeds. Obviously, these often serve some people very well by distracting attention from those real problems. But back to the origin of conspiracy theories.

While, as noted above, people do intentionally make false claims to give birth to conspiracy theories, it also makes sense that unintentional misreading can be a factor. Having been a professor for decades, I know that people often unintentionally misread or misinterpret content.

For the most part, when professors are teaching basic and noncontroversial content, they endeavor to provide the students with a clear and correct reading or interpretation. Naturally, there can be competing interpretations and murky content in academics, but I am focusing on the clear, simple stuff where there is general agreement and little or no opposition. And, of course, no one with anything to gain from advancing another interpretation. Even in such cases, students can badly misinterpret things. To illustrate, consider this passage from the Apology:

 

Socrates: And now, Meletus, I will ask you another question—by Zeus I will:  Which is better, to live among bad citizens, or among good ones?  Answer, friend, I say; the question is one which may be easily answered.  Do not the good do their neighbors good, and the bad do them evil?

 

Meletus: Certainly.

 

Socrates: And is there anyone who would rather be injured than benefited by those who live with him?  Answer, my good friend, the law requires you to answer— does any one like to be injured?

 

Meletus: Certainly not.

 

Socrates: And when you accuse me of corrupting and deteriorating the youth, do you allege that I corrupt them intentionally or unintentionally?

 

Meletus: Intentionally, I say.

 

Socrates: But you have just admitted that the good do their neighbors good, and the evil do them evil.  Now, is that a truth which your superior wisdom has recognized thus early in life, and am I, at my age, in such darkness and ignorance as not to know that if a man with whom I have to live is corrupted by me, I am very likely to be harmed by him; and yet I corrupt him, and intentionally, too—so you say, although neither I nor any other human being is ever likely to be convinced by you.  But either I do not corrupt them, or I corrupt them unintentionally; and on either view of the case you lie.  If my offence is unintentional, the law has no cognizance of unintentional offences: you ought to have taken me privately, and warned and admonished me; for if I had been better advised, I should have left off doing what I only did unintentionally—no doubt I should; but you would have nothing to say to me and refused to teach me.  And now you bring me up in this court, which is a place not of instruction, but of punishment.

 

Socrates’ argument is quite clear and, of course, I go through it carefully because this argument is part of the paper for my Introduction to Philosophy class. Despite this, every class has a few students who read Socrates’ argument as him asserting that he did not corrupt the youth intentionally because they did not harm him. But Socrates does not make that claim; central to his argument is the claim that if he corrupted them, then they would probably harm him. Since he does not want to be harmed, then he either did not corrupt them or did so unintentionally. This is, of course, an easy misinterpretation to make by reading into the argument something that is not there but seems like it perhaps should or at least could be. Students are even more inclined to read Socrates as claiming that the youth will certainly harm him if he corrupts them and then build an argument around this erroneous reading. Socrates claims that the youth would be very likely to harm him if he corrupted them and so he was aware that he might not be harmed.

My point is that even when the text is clear, even when someone is actively providing the facts, even when there is no controversy, and even when there is nothing to gain by misinterpreting the text, misinterpretation still occurs. And if this can occur in ideal conditions (a clear, uncontroversial text in a class), then it should be clear how easy it is for misinterpretations to arise in “the wild.” As such, a person can easily misinterpret text or content and sincerely believe they have it right, thus leading to a false claim that can give rise to a conspiracy theory. Things are much worse when a person intends to deceive. Fortunately, there is an easy defense against such mistakes: read more carefully and take the time to confirm that your interpretation is the most plausible. Unfortunately, this requires some effort and the willingness to consider that one might be wrong, which is why misinterpretations occur so easily. It is much easier to go with the first reading (or skimming) and more pleasant to simply assume one is right.

During the last pandemic, Americans who chose to forgo vaccination were hard hit by COVID. In response, some self-medicated with ivermectin. While this drug is best known as a horse de-wormer, it is also used to treat humans for a variety of conditions, and many medications are used to treat conditions they were not originally intended to treat. Viagra is a famous example of this. As such, the idea of re-purposing a medication is not itself foolish. But there are obvious problems with taking ivermectin to treat COVID. The most obvious one is that there is not a good reason to believe that the drug is effective; people would be better off seeking established treatment. Another problem is the matter of dosing, as the drug can have serious side effects even at the correct dosage. Since I am not a medical doctor, my main concern is not with the medical aspects of the drug, but with epistemology. That is, I am interested in why people believed they should take the drug when there is no credible evidence it would work. Though the analysis will focus on ivermectin, the same mechanisms work broadly in belief formation.

Those who were most likely to use the drug were people in areas hit hard by COVID and subject to anti-vaccine and anti-mask messages from politicians and pundits. These two factors are related: when people do not get vaccinated and do not take precautions against infection, then they are more likely to get infected. This is why there was such a clear correlation between COVID infection rates and the level of Trump support in an area. Republican political thought embraces authoritarianism and the rejection of expertise. Conservatives also want to “own the libs” by rejecting their beliefs and making liberals mad. Many liberals wanted people to get vaccinated and wear masks, so “owning the libs” put a person at greater risk for COVID. Once a person got infected, they needed treatment. But why did they choose ivermectin over proven methods? This seems to be the result of how the right’s base forms their beliefs.

The right’s base seems especially vulnerable to grifters and thus inclined to believe what grifters tell them. This is not because they are less intelligent or less capable than liberals; rather it seems to result from two main factors. The first is that the American right tends to be more authoritarian and thus more inclined to believe when an authority figure tells them to believe. The second is that the American right has long waged war on critical thinking and expertise. Hence people on the right are less inclined to use critical thinking tools effectively in certain contexts and are likely to dismiss experts who they do not regard as trusted authority figures.

While ivermectin was studied scientifically, there is currently no evidence that it can effectively treat COVID. But a small and growing industry arose for providing people with unproven or discredited treatments for COVID. While some might be well-intentioned, much of it is grifting at the expense of those who have been systematically misled. As such, people believe ivermectin can help them because authority figures have told them they should believe it. But, of course, there is the question of why ivermectin was chosen.

One likely reason is that ivermectin has been shown to impede the replication of the virus. Someone who is misled by wishful thinking would probably not consider the matter further; but it is important to note that this test was conducted in the laboratory using high concentrations of the drug that probably exceeded what a human could safely use. To use an analogy, this is like saying that fire is effective in killing the virus. While this is true, it does not make it an effective treatment in humans. As such, there is a bit of truth to the claim that ivermectin can affect the virus. For some reason, certain people seem to consistently reason poorly in such contexts; I am inclined to chalk this up to wishful thinking.

There is also the fact that a single, unpublished paper influenced some countries to include the drug in their treatment guidelines. However, this paper was never published because the method used to gather the data is both irregular and unreliable. The company that gathered the data, Surgisphere, is already notorious for its role in scandals involving hydroxychloroquine studies. People tend to believe the first thing they hear about something, especially if they want it to be true; hence this discredited paper held considerable influence. This is like the case in which those who think vaccines are linked to autism still believe in a long-discredited study by a discredited doctor.

One might attempt to respond to this by arguing that there are other papers showing the effectiveness of ivermectin. While this would be a reasonable response if these papers were based on good data, they are not. As has been shown, they suffer from serious errors. But, once again, this does not seem to matter. People such as Preston Smiles, Sidney Powell and Joe Rogan promoted the drug and, of course, Fox News personalities praised it. It was hydroxychloroquine 2.0. This takes us back to the appeal to authoritarianism fallacy: people believed because authority figures told them to believe. There is also a fallacious appeal to authority in effect. For example, Joe Rogan is a talk show host and not a doctor; yet people believe him because he is a celebrity.

People might also be motivated to accept the “evidence” of bad data and poor methods because doing so can feel rebellious. By rejecting the methodology of the experts, they can see themselves as making up their own minds…by accepting what politicians and celebrities tell them. There might also have been a conspiracy theory element at work: the idea that “they” do not want people to know about ivermectin (or whatever), and hence people want to believe it works.

Ivermectin became another front in the culture war. It must be said that the left contributed to the fight by mocking those who used the drug. But when it became a political battle, the base doubled down and defended it, despite a lack of evidence. That is, they professed to believe because doing so is the stance of their tribe.

There were efforts to conduct clinical trials of the drug, but these were bizarrely met with hostility and threats from ivermectin proponents. On the positive side, there will be some data available from the people self-medicating. Unfortunately, it will not be very good data because it will mostly be a collection of self-reported anecdotes. Once again, the culture war of the right hurt people. Although, as always, some profited.

From the standpoint of reliably forming true beliefs, this approach is the opposite of the one a person should take. Believing medical claims based on political authorities, grifters and celebrities is not a reliable way to have true beliefs. Accepting flawed studies as evidence is, by definition, a bad idea from the standpoint of believing true things. But these belief-forming mechanisms do have advantages.

Politicians, celebrities, and grifters obviously benefit from their base forming beliefs this way. Those who form the beliefs also get something out of it; they can feel the pleasure of expressing their loyalty, the reassurance of wishful thinking, the warm glow of unity with their tribe, and the hot fire of angering the other tribe. And in the end, isn’t that all that really matters to some people?

In a clever bit of rhetoric, people who opposed mask and vaccine mandates during the last pandemic used pro-choice terms. For example, a person opposed to getting vaccinated might say “my body, my choice.” This phrase is, of course, a standard part of pro-choice language. While some who did this were no doubt engaged in bad faith rhetoric or trolling, the analogy between abortion rights and the right to refuse vaccination is worth considering.

An argument by analogy will typically have two premises and a conclusion. The first premise establishes the analogy by showing that the things (X and Y) in question are similar in certain respects (properties P, Q, R, etc.).  The second premise establishes that X has an additional quality, Z. The conclusion asserts that Y has property or feature Z as well. The form of the argument looks like this:

 

           Premise 1: X and Y have properties P, Q, R.

           Premise 2: X has property Z.

           Conclusion: Y has property Z.

 

X and Y are variables that stand for whatever is being compared, such as chimpanzees and humans or apples and oranges. P, Q, and R are also variables, but they stand for properties or features that X and Y are known to possess, such as having a heart. Z is also a variable, and it stands for the property or feature that X is known to possess. The use of P, Q, and R is just for the sake of illustration; the things being compared might have more properties in common.

One simplified way to present the anti-vaccine (or pro-vaccine choice) analogy is as follows:

 

Premise 1: The right to choose an abortion is analogous to the right to choose to not be vaccinated.

           Premise 2: The right to choose an abortion is supported by the left.

           Conclusion: The right to choose to not be vaccinated should also be supported by the left.

 

While this analogy seems appealing to many anti-vaccine mandate folks, a key issue is whether it is a strong argument. The strength of an analogical argument depends on three factors; to the degree that an analogical argument meets these standards, it is a strong argument.

First, the more properties X and Y have in common, the better the argument. This standard is based on the notion that the more two things are alike in other ways, the more likely it is that they will be alike in some other way. Second, the more relevant the shared properties are to property Z, the stronger the argument. A specific property, for example P, is relevant to property Z if the presence or absence of P affects the likelihood that Z will be present. Third, it must be determined whether X and Y have relevant dissimilarities as well as similarities. The more dissimilarities and the more relevant they are, the weaker the argument. So, is the analogy between the right to choose an abortion and the right to refuse vaccination strong? To avoid begging the question by making a straw man, I will endeavor to make the best analogy I can, within the limits of truth.

The right to choose an abortion is often based on a principle of bodily autonomy; often expressed as “my body, my choice.” For the pro-choice, this principle warrants a person’s choice to have an abortion: it is their body, so it is their choice. While there is debate over the moral status of the aborted entity, an entity which might (or might not) be a person is killed by abortion. As such, the principle of bodily autonomy allows a person to kill another entity.

The right to forgo vaccination on the principle of bodily autonomy would seem to work in a similar manner. For those who are pro-choice about vaccines, this principle warrants a person’s choice to forgo vaccination: it is their body, so it is their choice. So far, so good. But, as with abortion, the choice does not just affect the person making the choice.

A person who forgoes vaccination willingly puts themselves and others at avoidable risk of infection and death. But, if a person can justly abort another entity as a matter of their choice, then one could infer that a person could likewise put others at risk of illness and death as a matter of choice. But does the comparison hold here? I contend that because of critical differences, it does not.

First, while an abortion kills an entity there is good faith moral debate about whether the entity is a person. In contrast, a person who did not get vaccinated during the pandemic put those who are indisputably people at risk and, in many cases, without their choice or consent. One can, of course, argue that the aborted entity is a person and start up the anti-abortion debate. But this would have an interesting consequence.

If it is argued that the aborted entity is a person (or otherwise has sufficient moral status) and thus its right to life overrides the person’s right to bodily autonomy, then the same reasoning would apply to the pro-vaccine choice argument. Their bodily autonomy does not give them the right to put others at risk. As such, a person who argues in good faith that being pro-choice about abortions is like being pro-choice about vaccines must be for both or opposed to both. So, anti-abortion folks can only use the pro-choice bodily autonomy argument for vaccine choice in bad faith (or from confusion). In contrast, a pro-choice person need not be pro-vaccine choice. They can accept that the aborted entity is not a person or has a lower moral status than the person while accepting the obvious fact that the people who were harmed by the unvaccinated are people.

Second, an abortion kills a single entity while forgoing vaccination during a pandemic puts everyone the person contacts at risk of illness and even death. Since those at risk are indisputably people, forgoing vaccination in a pandemic is far worse than an abortion. One can, of course, get into a debate about assessing harm in terms of probabilities and other considerations. For example, a person who forgoes vaccination might not infect anyone; and if they do, no one they infect might get ill; and if they do get ill, then they might not die. In contrast, an abortion always kills the aborted entity. This becomes a debate about the right to harm other entities and assessing harm. But, if someone argues that a person does not have the right to harm another entity based on bodily autonomy, then this would apply to both abortion and vaccination: there should be no choice in either case.

Third, there is a difference in the cost of not being able to make the choice. If a person cannot choose an abortion, they can face great economic and social hardships. Our society is unkind to women, and it is especially unkind to mothers who lack support and resources. In contrast, the COVID vaccines are incredibly safe, much safer than giving birth in the United States. Once again, if someone accepts the pro-vaccine choice reasoning, then they would also need to accept the pro-choice reasoning in the context of abortion.

As such, the attempt to use pro-choice language and draw an analogy between reproductive rights and anti-vaccine rights fails logically. However, some might see it as having rhetorical value or as a bit of fun in trolling the libs with their own slogans.

 

During the last pandemic, some organizations mandated vaccination against COVID-19. As another pandemic is inevitable, it is worth revisiting the moral issue of mandatory vaccination in response to a pandemic.

Schools have a well-established precedent for requiring students to be vaccinated, although there have been ways to opt out.  The moral justification is usually a utilitarian one: while there is a cost and possible harm arising from mandatory school vaccinations, this is outweighed by the harm these vaccinations prevent. Students are in close contact in closed spaces for long periods of time, putting them at risk. As such, allowing students (or, rather, their parents) to opt out of vaccines would put themselves and others at greater risk. Exemptions can, and should, be granted in cases where a person would be medically harmed by vaccination; but these are extremely rare cases. During a pandemic, the moral argument is even stronger as the risk and harm would be greater than in normal circumstances.

One moral objection to mandatory vaccinations at schools during a pandemic is that the long-term effects of a new vaccine on children and teens would not be known. As such, one could claim that possible harmful effects of the vaccine might outweigh the harms of being unvaccinated. While this is a legitimate concern, it is not unique: all past vaccines have raised the same concern. So far, the benefits have consistently outweighed the harms of vaccination. So unless there is evidence that a new vaccine presents a special problem, then it is as morally acceptable to require it during a pandemic as it was, for example, to require the polio vaccine when it was developed. This is not to deny that things can go wrong, but to note that we must always make such decisions without certainty.

Employers requiring vaccination is more controversial. While some professions, such as healthcare workers and military personnel, are usually required to get vaccinated, these are exceptions rather than the rule. Most professions, even those that involve working closely with other people, do not require vaccinations, even during a pandemic. There are also moral questions about what employers can compel their employees to do.

In general, the American right supports granting considerable power to employers over their employees. One example is at-will employment, which allows employers to fire employees for almost any reason. For example, an employee who refused to stop smoking (outside of the workplace) could be fired. As another example, an employee who expresses political views on their own time that their boss dislikes can be fired. Given that the right generally supports employers having great power over their employees, one might think they would accept that employers could mandate vaccination on pain of being fired. While workers would be free to refuse, few can afford to quit their jobs, so companies have great coercive power.

But the right has made vaccines part of their political war. While they would normally favor letting employers impose what they wish on their employees, the anti-vaxxers on the right have opposed this mandate. They have shown that when corporations do not side with them in their manufactured culture-war battles, they will turn against those businesses. This is presumably because they believe the political points they gain will outweigh a conflict with the corporations who help fund their re-elections.

While the right professes to be anti-vax out of their love of freedom, this is a bad-faith claim. The right has been busy passing restrictive laws to “solve” problems that do not exist. For example, they have been limiting access to voting based on their “big lie” about the 2020 election. If they cared about freedom, they would not be doing this. They have also passed laws aimed at trans people, claiming that strict restrictions must be in place to protect people from (imaginary) dangers. Again, if they believed freedom is so important, they would not pass such laws. And if they really believed in protecting people from real harm, they would not be anti-vax.

The left generally favors workers’ rights and often seeks to at least slightly reduce the power disparity between employers and employees. As such, it would make sense for the left to generally hold that workers could refuse to be vaccinated without being fired. That said, the left also has concerns beyond the workplace, so some leftists might favor mandatory vaccination imposed by the state. This would typically be morally justified on utilitarian grounds: the state is supposed to use its coercive powers to protect citizens, and this could include requiring vaccination during the next pandemic.

My own view is, to state the obvious, that this issue is complicated. On the one hand, people have the moral right to control their bodies. This provides a moral foundation for arguing against vaccine mandates. On the other hand, all rights should be morally limited by the harm that might be done to others in exercising them. To use a silly example, I have the right to run as fast as I wish, but I do not have the right to charge into other people, since doing so could hurt them. As another example, while I do have the right to remove the brakes from my truck, I do not have the right to then drive it on public roads, since I would endanger other people. In the case of the next pandemic, the harms would likely warrant mandatory vaccination, just as people are required to have working brakes on their vehicles and forbidden from charging into others like deranged bulls.