As a runner, I have been accused of being a masochist, or at least of possessing masochistic tendencies. Since I routinely subject myself to pain, and since my previous essay about running and freedom focused on pain, this is hardly surprising. Other runners, especially ultra-marathoners, face the same accusation.

In some cases, the accusation is not serious. Usually, people just observe that runners do things that both hurt and make little sense to nonrunners. However, some see runners as masochists in a strict sense. Being a runner and a philosopher, I find this interesting, especially when I am the one accused of being a masochist.

Some level the accusation with a degree of seriousness. While some call runners masochists in jest, or with some respect for runners' toughness, the label is sometimes meant as a genuine charge: that there is something wrong with runners and that running is deviant behavior. While runners do like to joke about being odd and different, we probably prefer not to be seen as mentally ill deviants. After all, that would indicate that we are doing something wrong, which I believe is (usually) not the case. Based on my experience and having met thousands of runners, I think that runners are generally not masochists.

Given that runners engage in painful activities (such as speed work and racing marathons) and that they often run despite injuries, it is tempting to believe they are masochists and that I am in denial about our collective deviance.

While this does have some appeal, it rests on a confusion about masochism in terms of means and ends. For the masochist, pain is a means to the end of pleasure. The masochist does not seek pain for the sake of pain, but seeks pain to achieve pleasure. However, there is a special connection between the means of pain and the end of pleasure: for the masochist, the pleasure they desire is that which is generated specifically by pain. While a masochist can get pleasure by other means (such as drugs, cake or drug cakes), it is the desire for pleasure caused by pain that defines the masochist. So, the pain is not optional—mere pleasure is not the end, but pleasure caused by pain.

This is different from those who endure pain as part of achieving an end, be that end pleasure or some other end. For those who endure pain to achieve an end, the pain can be part of the means or, more accurately, an effect of the means. It is valuing the end that leads the person to endure the pain required to achieve it; the pain is not sought out as being the “proper cause” of the end. In the case of the masochist, the pain is not endured to achieve an end: it is the “proper cause” of the end, which is pleasure.

In the case of running, runners usually see pain as something to be endured as part of the process of achieving their desired ends, such as fitness or victory. However, runners usually prefer to avoid pain when they can. For example, while I endure pain to run a race, I prefer running with as little pain as possible. This is like a person putting up with the unpleasant aspects of a job to make money, while preferring as little unpleasantness as possible. After all, they are in it for the money, not the unpleasant aspects of work. Likewise, a runner is typically running for some end (or ends) other than hurting herself. It just so happens that achieving that end (or ends) requires doing things that cause pain.

In my essay on running and freedom, I described how I endured pain while running the Tallahassee Half Marathon. If I were a masochist, experiencing pleasure by means of that pain would have been my primary end. However, my primary end was to run the half marathon well and the pain was an obstacle to that end. As such, I would have been glad to have had a painless start and I was pleased when the pain diminished. I enjoy the running and I do enjoy overcoming pain, but I do not enjoy the pain itself—hence the aspirin in my medicine cabinet.

While I cannot speak for all runners, my experience is that runners do not run for pain; they run despite the pain. Thus, we are not masochists. We might, however, show some poor judgment when it comes to pain and injury, but that is another matter. Still, I would suggest to any masochists that they take up running, as running is really good for a person.

One reason sometimes given for expanding health care coverage is that people with health insurance are less likely to use the emergency room for treatment. One explanation is that someone with health insurance will be more likely to use primary care and thus less likely to need emergency room treatment. It also makes sense that a person with insurance would get more preventative care and so be less likely to need a trip to the emergency room.

On the face of it, reducing emergency room visits would be good. One reason is that emergency room care is expensive and reducing it would save money. Another reason is that the emergency room should be for emergencies—reducing the number of visits can help free up resources and reduce waiting times.

If extending insurance coverage would reduce emergency room visits, this would be good. However, extending insurance might instead increase emergency room visits: in one seemingly credible study, expanded insurance coverage resulted in more emergency room visits.

One obvious explanation is that the insured would be more likely to use medical services for the same reason that insured motorists are more likely to use the service of mechanics: they are more likely to be able to afford to pay the bills.

On the face of it, this does not seem bad. After all, if people can afford to go to the emergency room because they have insurance, that is better than having people suffer because they lack the means to pay. However, what is most interesting about the study is that the expansion of Medicaid coverage increased emergency room visits for treatments more suitable for a primary care environment. The increase in emergency use was substantial, about 40%, and the study was large enough for this to be statistically significant.
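The claim that a roughly 40% increase was "statistically significant" can be made concrete with a quick sketch. The figures below are invented for illustration (they are not the study's actual data); the code shows how a standard two-proportion z-test would assess whether a difference in emergency room visit rates between two groups is likely to be real rather than chance.

```python
# Hypothetical sketch: checking whether a "40% relative increase" in ER
# visit rates is statistically significant via a two-proportion z-test.
# All numbers below are invented for illustration; they are NOT the
# actual figures from the Medicaid study discussed above.
from math import sqrt, erf

def two_proportion_z(visits_a, n_a, visits_b, n_b):
    """Return (z, two-sided p-value) for the difference in visit rates."""
    p_a = visits_a / n_a
    p_b = visits_b / n_b
    p_pool = (visits_a + visits_b) / (n_a + n_b)          # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF, built from erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented example: 1,000 uninsured vs. 1,000 newly insured people, with
# ER visit rates of 25% vs. 35% (a 40% relative increase).
z, p = two_proportion_z(250, 1000, 350, 1000)
print(f"z = {z:.2f}, p = {p:.6f}")
```

With made-up samples of this size, the test yields a very small p-value, which is what "statistically significant" means in this context: the observed difference would be very unlikely if insurance status made no difference to visit rates.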

Because of this, it is worth considering the impact of expanding coverage on emergency rooms, especially when it is argued that expanding coverage would reduce costs by reducing emergency room visits.

One possibility is that the results from the Medicaid study would hold true in general, so that expansions of health care coverage would result in more emergency room visits. If an expansion of coverage results in a significant increase in emergency room visits, it would not help reduce health care costs, since people would be going to the more expensive emergency room rather than seeking primary care.

But an insurance expansion might not cause significantly more unnecessary emergency room visits. One reason is that private insurance companies seem to try to deter emergency room visits by imposing higher payments on patients. In contrast, Medicaid did not impose this higher cost. Thus, those with private insurance would tend to have a financial incentive to avoid the emergency room, while those on Medicaid would not, unless there was an increased cost for the Medicaid patient. While it does seem wrong to impose a penalty for going to the emergency room, one method of channeling patients towards non-emergency treatment is to impose a financial penalty for emergency room visits for services that can be provided by primary care facilities. One moral concern with imposing such penalties is that some forms of care are only available through emergency rooms. For example, when I had to get rabies shots in 2023, the only option was the emergency room. But it could be replied that such treatments are unusual, hence the penalty would not affect many people.

People might use emergency rooms instead of primary care because they do not know their options. If so, then if more people were better educated about medical options, they would be more likely to choose options other than the emergency room when they did not need emergency services. Given that the emergency room is stressful and involves a long wait (especially for non-emergencies), people would probably elect primary care when they know they have that option. This is not to say education will be a cure-all, but it is likely to help reduce unnecessary emergency room visits, which is certainly a worthwhile objective.

There are many self-help books, but they all suffer from one fatal flaw: they assume the solution to your problems lies in changing yourself. This is a clearly misguided approach for many reasons.

The first is the most obvious. As Aristotle’s principle of identity states, A=A. Or, put in words, “each thing is the same with itself and different from another.” As such, changing yourself is impossible: to change yourself, you would cease to be you. The new person might be better. And, let’s face it, probably would be. But it would not be you. As such, changing yourself would be ontological suicide and you do not want any part of that.

The second is less obvious but is totally historical. Parmenides of Elea, a very dead ancient Greek philosopher, showed that change is impossible. I know that “Parmenides” sounds like cheese, perhaps one that would be good on spaghetti. But, trust me, he was a philosopher and would make a poor pasta topping. Best of all, he laid out his view in poetic form, the most truthful of truth conveying word wording:


How could what is perish? How could it have come to be? For if it came into being, it is not; nor is it if ever it is going to be. Thus coming into being is extinguished, and destruction unknown.


Nor was [it] once, nor will [it] be, since [it] is, now, all together, / One, continuous; for what coming-to-be of it will you seek? / In what way, whence, did [it] grow? Neither from what-is-not shall I allow / You to say or think; for it is not to be said or thought / That [it] is not. And what need could have impelled it to grow / Later or sooner, if it began from nothing? Thus [it] must either be completely or not at all.


[What exists] is now, all at once, one and continuous… Nor is it divisible, since it is all alike; nor is there any more or less of it in one place which might prevent it from holding together, but all is full of what is.


And it is all one to me / Where I am to begin; for I shall return there again.


That, I think we can all agree, is completely obvious and utterly decisive. Since you cannot change, you cannot self-help yourself by changing. That is just good logic. I would say more, but I do not get paid by the word to write this stuff. I do not get paid at all.

But, obviously enough, you want to help yourself to a better life. Since you cannot change and it should be assumed with 100% confidence that you are not the problem, an alternative explanation for your woes is needed. Fortunately, the problem is obvious: other people. The solution is equally obvious: you need to get new people. Confucius said, “Refuse the friendship of all who are not like you.” This was close to the solution, but if you are annoying or a jerk, being friends with annoying jerks is not going to help you. A better solution is to tweak Confucius just a bit: “Refuse the friendship of all who do not like you.” This is a good start, but more is needed. After all, it is obvious that you should just be around people who like you. But that will not be totally validating.

The goal is, of course, to achieve a Total Validation Experience (TVE). A TVE is an experience that fully affirms and validates whatever you feel needs to be validated at the time. It might be your opinion about Mexicans or your belief that your beauty rivals that of Adonis and Helen. Or it might be that your character build in World of Warcraft is fully and truly optimized.

By following this simple dictate “Refuse the friendship of all who do not totally validate you”, you will achieve the goal that you will never achieve with any self-help book: a vast ego, a completely unshakeable belief that you are right about everything, and all that is good in life. You will never be challenged and never feel doubt. It will truly be the best of all possible worlds. So, get to work on surrounding yourself with Validators.  What could go wrong? Nothing. Nothing at all.

A Philosopher’s Blog 2025 brings together a year of sharp, accessible, and often provocative reflections on the moral, political, cultural, and technological challenges of contemporary life. Written by philosopher Michael LaBossiere, these essays move fluidly from the ethics of AI to the culture wars, from conspiracy theories to Dungeons & Dragons, from public policy to personal agency — always with clarity, humor, and a commitment to critical thinking.

Across hundreds of entries, LaBossiere examines the issues shaping our world:

  • AI, technology, and the future of humanity — from mind‑uploading to exoskeletons, deepfakes, and the fate of higher education
  • Politics, power, and public life — including voting rights, inequality, propaganda, and the shifting landscape of American democracy
  • Ethics in everyday life — guns, healthcare, charity, masculinity, inheritance, and the moral puzzles hidden in ordinary choices
  • Culture, identity, and conflict — racism, gender, religion, free speech, and the strange logic of modern outrage
  • Philosophy in unexpected places — video games, D&D, superheroes, time travel, and the metaphysics of fictional worlds

Whether he is dissecting the rhetoric of conspiracy theories, exploring the ethics of space mining, or reflecting on the death of a beloved dog, LaBossiere invites readers into a conversation that is rigorous without being rigid, principled without being preachy, and always grounded in the belief that philosophy is for everyone.

This collection is for readers who want more than hot takes — who want to understand how arguments work, why beliefs matter, and how to think more clearly in a world that rewards confusion.

Thoughtful, wide‑ranging, and often darkly funny, A Philosopher’s Blog 2025 is a companion for anyone trying to make sense of the twenty‑first century.


Available for $2.99 on Amazon


It is estimated that almost 30% of humans are overweight or obese, and this number is likely to increase. Given this large number of large people, it is not surprising that moral and legal issues have arisen regarding their accommodation. It is also not surprising that people arguing in favor of accommodation contend that obesity is a disability. The legal issues are, of course, a matter of law and are settled by lawsuits. Since I am not a lawyer, I will focus on ethics and will address two main issues. The first is whether obesity is a disability. The second is whether obesity is a disability that morally justifies making accommodations.

On the face of it, obesity is disabling. That is, a person who is obese will have reduced capabilities relative to a person who is not obese. An obese person will tend to have lower endurance than a non-obese person, less speed, less mobility, less flexibility and so on. An obese person will also tend to suffer from more health issues and be at greater risk for some illnesses. Because of this, an obese person might find it difficult or impossible to perform certain job tasks, such as those involving strenuous physical activity or walking relatively long distances.

The larger size and weight of obese individuals also present challenges when they deal with chairs, doors, equipment, clothing and vehicles. For example, an obese person might be unable to operate a forklift with the standard seating and safety belt. As another example, an obese person might not be able to fit in one airline seat and instead require two (or more). As a third example, an obese student might not be able to fit into a standard classroom desk. As such, obesity could make it difficult or impossible for a person to work or to make use of certain goods and services.

Obviously enough, obese people are not the only ones who have disabilities. There are people with short-term disabilities due to illness or injury. I experienced this myself when I had a complete quadriceps tendon tear. My left leg was locked in an immobilizer for weeks, then all but useless for months. With this injury, I was much slower, had difficulty with stairs, could not carry heavy loads, and could not drive. This experience opened my eyes to the challenges of navigating a world not designed to accommodate people with disabilities.

There are also people who have long term or permanent disabilities, such as people who are paralyzed, blind, or are missing limbs due to accidents or war. These people can face great challenges in performing tasks at work and in life. For example, a person who is permanently confined to a wheelchair due to a spinal injury will find navigating stairs or working in the woods rather challenging.

In general, it seems morally right to require employers, businesses, schools and so on to make reasonable accommodations for people who have disabilities. The principle that justifies this is equal treatment: people should be afforded equal access, even when doing so requires some additional accommodation. As such, while having ramps in addition to stairs costs more, it is a reasonable requirement given that some people cannot fully use their legs. Given that the obese are disabled, it is reasonable to conclude they should be accommodated just as the blind and paralyzed are accommodated.

Naturally, it could be argued that there is no moral obligation to provide accommodation for anyone. If this is the case, then there would be no obligation to accommodate the obese. However, it would seem to be rather difficult to prove, for example, that veterans with disabilities returning to school should just have to fight their way up the steps in their wheelchairs. For the sake of the discussion to follow I will assume that there is a moral obligation to accommodate the disabled. However, there is still the question of whether this should apply to the obese.

One obvious way to argue against accommodations for the obese is to argue that there is a morally relevant difference between those disabled by obesity and those disabled by injury, birth defects, etc. One difference that people often point out is that obesity is claimed to be a choice and other disabilities are not. That is, a person’s decisions result in their being fat and hence they are responsible in a way a person who is disabled by an accident or war is not.

It could be pointed out that some people who are disabled by injury were disabled as the result of their own decisions. For example, a person might have driven while drunk and ended up paralyzed. But, of course, the person would not be denied access to handicapped parking or the use of automatic doors because their disability was self-inflicted. The same reasoning could be used for the obese: even if their disability is self-inflicted, it is still a disability and thus should be accommodated.

A reply to this is that there is still a relevant difference. While a person who loses the ability to use their legs in a self-inflicted drunken crash caused their own disability, there is little they can do about that disability. They can change their diet and exercise, but this will not restore functionality to their legs. That is, they are permanently stuck with the results of that decision. In contrast, an obese person must maintain their obesity. While some people are genetically predisposed to being obese, how much a person eats and exercises is a matter of choice. Since they could reduce their weight, the rest of us are under no obligation to provide special accommodation for them. This is because they could take reasonable steps to remove the need for such accommodation. To use an analogy, imagine someone who insisted on being provided with a Seeing Eye dog because they want to wear opaque glasses all the time. These glasses would render them disabled, since they would be effectively blind. However, since they can just remove the glasses, there is no obligation to provide them with the dog. In contrast, a person who is blind cannot just get new eyes, and hence it is reasonable for society to accommodate them.

It can be argued that obesity is not a matter of choice. One approach would be to argue for metaphysical determinism: the obese are obese by necessity and could not be otherwise. The easy reply here would be to say that we are, sadly enough, metaphysically determined not to provide accommodation.

A more sensible approach would be to argue that obesity is, in some cases, a medical condition that is beyond the ability of a person to control. The most likely avenue of support for this claim would come from neuroscience. If it can be shown that some people are incapable of controlling their weight, then obesity would be a true disability, on par with having one’s arm blasted off by an IED or being born with a degenerative neural disorder.

It could also be argued that a person does have some choice, but that acting on the choice would be so difficult that it is more reasonable for society to accommodate the individual than it is for the individual to struggle not to be obese. To use an analogy, a person with a disability might be able to regain enough functionality to operate in a “mostly normal” way, but doing so might require agonizing effort beyond what could be expected of a person. In such a case, one would surely not begrudge the person the accommodations. So, it could be argued that since it is easier for society to accommodate the obese than it is for the obese to not be obese, society should do so.

There is, however, a legitimate concern here. If the principle is adopted that society must accommodate the obese because they have a disability and cannot help their obesity, then others could appeal to the same sort of principle and perhaps over-extend the realm of disabilities that must be accommodated. For example, people who are addicted to drugs could make a similar argument: they are disabled, yet their addiction is not a matter of choice. As another example, people who are irresponsible could claim they are disabled as well and should be accommodated on the grounds that they cannot be other than they are. But it is likely that boundaries can be drawn in a principled way, so that the obese have a disability but the irresponsible do not.

The United States, like all societies, suffers from many ills. This includes such things as mental illness, homelessness and drug addiction. There are many ways that these problems could be addressed. Unfortunately, the usual approach has been to try to “solve” them by law enforcement and criminalization. I will briefly consider the failures of this approach in these cases.

In the 1980s there was a major shift in America’s approach to mental illness: in the name of fiscal savings, the mentally ill were released from hospitals into the community. One major impact of this was an increase in the number of homeless people. An estimated 20-25% of the homeless suffer from severe mental illness, compared to 6% of the general population. The mentally ill who are homeless, as one might suspect, usually go untreated. People with untreated severe mental illnesses often behave in ways that the public finds problematic, which often leads to their being arrested and imprisoned. Prisons are ill-equipped to deal with the mentally ill and mainly serve to warehouse such people until they serve their sentences. Having a criminal record makes matters worse and makes it more likely they will simply be returned to prison and remain untreated, thus creating a cruel cycle which offers little chance of escape.

The criminalization of mental illness has not solved the problem and has made it worse. As such, it is a failure if the goal was to help people. It has not helped treat people, and the cost of operating mental health institutions has simply been replaced with the cost of maintaining prisons. There are those who profit from this system, but it costs society dearly.

It is also a moral failure. On utilitarian grounds, it is morally wrong because it has increased rather than decreased unhappiness. For moral systems that focus on obligations to the wellbeing of others (such as the version of Christianity that embraces the parable of the good Samaritan), this approach is also a moral failure. As such, criminalizing mental illness has proven a resounding failure.

While mental illness leads many to the streets, America’s economic system also makes people homeless. Many unhoused people end up that way due to being bankrupted by medical expenses. Since the homeless have no homes, they tend to live and sleep in public areas. As would be expected, the presence of the homeless in such areas is seen as a problem and some cities try to “solve” the problem by criminalizing such things as lying down or camping in public areas. The ordinances typically impose fines, but since the homeless generally cannot afford to pay fines they usually end up in the criminal justice system which is often a pathway to prison. A criminal record only makes matters worse for the homeless and increases the chance they will remain homeless. This means that they are likely to be arrested again for breaking the ordinances that target the homeless, thus creating another cruel circle.

As might be suspected, this approach to homelessness comes with a significant monetary cost. For example, Denver spent over $750,000 enforcing its homeless-targeting ordinances. Other cities pay comparable costs, making the criminalization of homelessness costly to everyone. There have been some efforts to address homelessness through other means, such as providing affordable housing, but dealing with the underlying causes is challenging given the values of those making the decisions.

Once again, trying to solve a problem through criminalization proves to be a terrible approach if the goal is to solve a social problem. Even on the heartless grounds of saving money, it fails, as the cost of policing the homeless seems to consume whatever savings might be accrued from pushing people into the streets. This, of course, could be countered. One might be able to show that the monetary cost of strategies aimed at getting the homeless into homes would exceed the cost of policing the homeless on the streets. After all, politicians could lower the cost significantly simply by not policing the unhoused who do not commit serious crimes, such as robbery. This, however, still leaves the homeless without homes, and this can impose other economic costs, such as medical expenses paid for by the public. That could be countered by arguing that the homeless should be completely abandoned, as this would yield financial savings.

Such abandonment does, however, run into a moral challenge. The harm suffered by the homeless (and society) would seem to make a compelling utilitarian moral argument in favor of approaches that aim at getting the homeless back into society. Moral views that accept that people should love one another also enjoin us to not abandon our fellows. In any case, criminalizing homelessness is no solution, financial or moral.

Drug addiction is another problem that has largely been addressed by criminalization. About half of the people in federal prisons and 16% of those in state prisons are there for drug offenses. This is the result of the war on drugs, which endeavored to solve the drug problem by arresting our way out of it. Since the negative consequences of this approach fell mainly on minorities and the poor, there was little interest among politicians in taking a different approach. However, as prison populations swelled and public attitudes towards drug use changed, there was some talk of reconsidering this war. The biggest change in the public discussion arose from the opioid epidemic, a drug epidemic that goes beyond ravaging the poor and minorities to impacting the white middle class. This has resulted in some changes in the approach to the problem, such as the police offering free treatment for drug users rather than arresting them. It remains to be seen whether these changes will be lasting and widespread. However, this is certainly a positive change to a failed approach to the health issue of drugs.

While some for-profit prisons have done well for their shareholders in the war on drugs, the financial cost to society has been substantial. Criminalization of addiction has also failed to reduce addiction. As such, this approach has proved to be a practical failure, unless you are a shareholder in a private prison or otherwise profit from this situation.

As above, there are also moral concerns about this approach in terms of the harms being inflicted on individuals and society as a whole. Fortunately, there is a chance that America will rethink the war on drugs (in which we are the enemy) and recast it as a health issue. This not only has the potential to be far more of a practical success; it also would seem to be the right thing to do morally. Transforming people in need into criminals cannot solve the ills of society; addressing those needs can.

Back in 2015 Martin Shkreli became the villain of drug pricing when he increased the price of a $13.50 pill to $750. While buying up smaller drug companies and increasing the prices of their products is a standard profit-making venture, the scale of the increase and Shkreli’s attitude drew attention to this incident. Unfortunately, while the Shkreli episode briefly caught the public’s attention, drug pricing is an ongoing problem.

For consumers, the main problem is that drugs are priced extremely high, sometimes high enough to bankrupt patients. In the face of public criticism, drug companies attempt to justify the high prices. One reason they give is that they need to charge these prices to cover the R&D costs of the drugs. While a company does have the right to pass on the cost of drug development, the facts tell another story about the pricing of drugs.

First, about 38% of the basic scientific research was funded by taxpayer money. Thus, the public was paying twice: once in taxes and again for the drugs. This, of course, leaves a significant legitimate area of expenses for companies, but hardly enough to warrant absurdly high prices. As the federal budget for this research is cut, companies will be able to make a better argument based on the cost of research, as they will need to spend more of their profits on research.

Second, most large drug companies spend almost twice as much on promotion and marketing as they do on R&D. While these are legitimate business expenses, this undercuts using R&D expenses to justify excessive drug prices. Saying that pills are expensive because of the cost of marketing pills would not be a very effective strategy. There is also the issue of the ethics of advertising drugs, but that is another matter entirely.

Third, many “new” drugs are just slightly modified old drugs. Common examples include combining two older drugs to create a “new” drug, changing the delivery method (from an injectable to a pill, for example) or altering the release time. In many cases, the government will grant a new patent for these minor tweaks, and this grants the company up to a 20-year monopoly on the product, preventing competition. This practice, though legal, is sketchy. To use an analogy, imagine a company holding the patents on a wheel and on an axle. Then, when those patents expired, it patented wheel + axle as a “new” invention. That would be absurd.

Companies also try other approaches to justify the high cost, such as arguing that the drugs treat serious conditions or can save money by avoiding a more expensive treatment. While these arguments do have some appeal, it is morally problematic to argue that the price of a drug should be based on the seriousness of the condition it treats. This seems like a protection scheme or coercion, amounting to “pay what we want, or you die.” The money-saving argument is less odious but is still problematic. By this logic, car companies should be able to charge much more for safety features, since they protect people from expensive injuries. It is, of course, reasonable to make a profit on products that provide significant benefits, but there need to be moral limits on those profits.

The obvious counter to my approach is to argue that drug prices should be set by the free market: if people are willing to pay large sums for drugs, then the drug companies should be free to charge those prices. After all, companies like Apple and Porsche sell expensive products without (generally) being demonized for making profits.

The easy response is that luxury cars and MacBooks are optional luxuries that a person can easily do without, and there are many cheaper (and better) alternatives. However, drug companies sell drugs that are necessary for a person’s health and even survival. They are usually not optional products. There is also the fact that drug companies enjoy patent protection that precludes effective competition. While Apple does hold patents on its devices, there are many competitors. For example, if you don’t want to pay a premium for an Apple computer, you have your pick of thousands of options. But if you need certain medications, your options can be much more limited.

While defenders of drug prices laud the free market and decry “government interference”, their ability to charge high prices depends on the “interference” of the state. As noted above, the United States and other governments issue patents to drug companies that grant them exclusive ownership. Without this protection, a company that wanted to charge $750 for a $13.50 pill would find competitors rushing to sell the pill for far less. After all, it would be easy enough for a competitor to analyze a drug and produce it. By accepting the patent system, the drug companies accept that the state has a right to engage in legal regulation in the drug industry, to replace the invisible hand with a very visible hand of the state. Once this is accepted, the door is opened to allowing additional regulation on the grounds that the state will provide protection for the company’s property using taxpayer money in return for the company agreeing not to engage in harmful pricing of drugs. Roughly put, if the drug companies expect people to obey the social contract with the state, they also need to operate within the social contract. Companies could, of course, push for a truly free market: they would be free to charge whatever they want for drugs without state interference, but there would be no state interference into the free market activities of their competitors when they duplicate the high price drugs and start undercutting the prices. But, as always, companies want a free market when freedom benefits them and a nanny state when it benefits them.

In closing, if the drug companies want to keep the patent protection they need for high drug prices, they must be willing to operate within the social contract. After all, citizens should not be imposed upon to fund the protection of the people who are, some might claim, robbing them.

 

Modern agriculture deserves praise for the good it does. Food is plentiful, relatively cheap and easy to acquire. Instead of having to struggle with raising crops and livestock or hunting and gathering, many Americans can go to the grocery store and get the food they need to stay alive. However, as with all things, there is a price.

The modern agricultural complex is highly centralized and industrialized, which has advantages and disadvantages. There are also the harms of practices aimed at maximizing profits. While there are many ways to maximize profits, two common ones are to pay the lowest wages possible and to shift costs to others. I will look, briefly, at one area of cost shifting: the widespread use of antibiotics in meat production.

While most think of antibiotics as a means of treating diseases, healthy food animals are routinely given antibiotics. One reason is to prevent infections: factory farming techniques, as might be imagined, vastly increase the chances of a disease spreading. Antibiotics, it is claimed, can help reduce the risk of bacterial infections (antibiotics are useless against viruses). A second reason is that antibiotics increase the growth rate of healthy animals, allowing them to pack on more meat in less time, and time is money. These uses allow the industry to continue factory farming and maintain high productivity, which initially seems laudable. The problem is, however, that the use of antibiotics comes with a high price that is paid by everyone else.

Eric Schlosser wrote “A Safer Food Future, Now”, which appeared in the May 2016 issue of Consumer Reports. In this article, he noted that this practice has contributed significantly to the rise of antibiotic resistant bacteria. Each year, about two million Americans are infected with resistant strains and about 20,000 people die. The healthcare cost is about $20 billion. To be fair, the agricultural industry is not the only contributor to this problem: improper use of antibiotics in humans has also added to this problem. That said, the agricultural use of antibiotics accounts for about 75% of all antibiotic usage in the United States, thus converting the factory farms into farms for resistant bacteria.

The harmful consequences of this antibiotic use have been known for years and there have been attempts to address this through legislation. It is no surprise that our elected leaders have failed to act. One likely explanation is the lobbying power of corporations. In the United States, both parties prioritize profits over the people. But it could be contended that lawmakers are ignorant of the harms, doubt there are harms from antibiotics or honestly believe that the harms arising are outweighed by the benefits. That is, the lawmakers have credible reasons other than the money they are paid to do the will of the wealthy. This is a factual matter, but no professional politician who has been swayed by lobbying will attribute her decision to anything other than good intentions.

This matter is one of ethical concern and, like most large-scale ethical matters involving competing interests, is one best approached by utilitarian considerations. On the side of using antibiotics, there is the increased productivity (and profits) of the factory farming system. This allows more and cheaper food to be provided for the population, which can be regarded as pluses. The main reasons to not use the antibiotics, as noted above, are that they contribute to the creation of antibiotic-resistant strains that sicken and kill people. This imposes costs on those who are sickened and killed as well as those who care about them. There are also the monetary costs in the health care system (although the increased revenue can be tagged as a plus for health care providers). In addition to these costs, there are also other social and economic costs, such as lost hours of work. As this indicates, the cost (illness, death, etc.) of the use of the antibiotics is shifted: the industry does not pay these costs; they are paid by everyone else, including other industries.

Using a utilitarian calculation requires weighing the cost to the general population against the profits of the industry and the claimed benefits to the general population. Put roughly, the moral question is whether the improved profits and greater food production outweigh the illness, deaths and costs suffered by the public. Most politicians seem to believe that the answer is “yes.”

If the United States were in a food crisis in which the absence of the increased productivity afforded by antibiotics would cause more suffering and death than their presence, then their use would be morally acceptable. However, this does not seem to be the case. While banning this sort of antibiotic use would decrease productivity (and impact profits), the harm of doing this would seem to be vastly exceeded by the reduction in illness, deaths and health care costs. However, if an objective assessment of the matter showed that the ban on antibiotics would not create more benefits than harm, then it would be reasonable and morally acceptable to continue to use them. This is partially a matter of value (in terms of how the harms and benefits are weighted) and partially an objective matter (in terms of monetary and health costs). I am inclined to agree that the general harm of using the antibiotics exceeds the general benefits, but I could be convinced otherwise by objective data.
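The point that the verdict is partly a matter of value (how harms are weighted) and partly an objective matter can be made concrete with a toy calculation. The sketch below uses invented, purely illustrative figures (none of these numbers come from the essay or any data source) to show how the utilitarian verdict flips as the weight placed on harms changes:

```python
# Illustrative utilitarian tally. All figures are invented for
# illustration only; they are not empirical estimates.
def net_utility(benefits, harms, harm_weight=1.0):
    """Sum of benefits minus weighted sum of harms; the sign gives the verdict."""
    return sum(benefits.values()) - harm_weight * sum(harms.values())

benefits = {"industry profit": 5, "cheaper food": 3}          # hypothetical units
harms = {"illness and death": 6, "health care costs": 3}      # hypothetical units

# A low harm weight favors continued antibiotic use...
print(net_utility(benefits, harms, harm_weight=0.5) > 0)   # True
# ...while a higher harm weight favors a ban.
print(net_utility(benefits, harms, harm_weight=1.5) < 0)   # True
```

The objective work lies in estimating the magnitudes; the evaluative work lies in choosing the weight, which is why two people can agree on the data and still disagree on the conclusion.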

 

All professions have their problem members, and the field of medicine is no exception. Fortunately, the percentage of bad doctors is low—but this small percentage can do considerable harm. After all, when your professor is incompetent, you might not learn as much as you should. If your doctor is incompetent, they could kill you.

Back in 2016 Consumer Reports published an article by Rachel Rabkin Peachman covering bad doctors and the difficulty patients face in learning whether a physician is a good doctor or a disaster. Unfortunately, not much has changed since then.

There are three main problems. The first is that there are bad doctors. The article presented numerous examples to add color to the dry statistics, including such tales of terror as doctors molesting patients, doctors removing healthy body parts, and patient deaths due to negligence, impairment or incompetence. These are all obvious moral and professional failings on the part of the doctors, and they should clearly not be engaged in such misdeeds. For more recent examples, John Oliver provides disturbing coverage of the dangers presented by med spas.

The second is that, according to Peachman, disciplinary actions tend to be rather less than ideal. While doctors should enjoy the protection of due process, the hurdles are, perhaps, too high. There is also the problem that the responses are often very mild. For example, a doctor whose negligence has resulted in the death of patients can be allowed to keep practicing with minor limitations. As another example, a doctor who has engaged in sexual misconduct might continue practicing after a class on ethics and with the requirement that someone else be present when he is seeing patients. In addition to the practical concerns about this, there is also the moral concern that the disciplinary boards are failing to protect patients.

One possible argument against harsher punishments is that there is always a shortage of doctors and taking a doctor out of practice would have worse consequences than allowing a bad doctor to keep practicing. This would be the basis for a utilitarian argument for continuing mild punishments. Crudely put, it is better to have a doctor who might kill a patient or two than no doctor at all because that would result in many more deaths.

This argument does have some appeal. However, there is the factual question of whether the mild punishments do more good than harm. If they do, then one would need to accept that this approach is morally tolerable. If not, then the argument would fail. There is also the response that consequences are not what matters and people should be reprimanded based on their misdeeds and not based on some calculation of utility. This also has some intuitive appeal.

It could also be argued that it should be left to patients to judge if they want to take the risk. If a doctor is known for sexual misdeeds with female patients but is fine with male patients, then a man who has few or no other options might decide that the doctor is his best choice. This leads to the third problem.

The third problem is that it is very difficult for patients to learn about bad doctors. While there is a National Practitioner Data Bank (NPDB), it is off limits to patients and is limited to law enforcement, hospital administration, insurance and a few other groups.

The main argument against allowing public access to the NPDB is based on the premise that it contains inaccurate information which could be harmful to innocent doctors. This makes it similar to credit report data, which is notorious for containing harmful inaccuracies that can plague people.

While the possibility of incorrect data is a matter of concern, that premise best supports the conclusion that the NPDB should be reviewed regularly to ensure that the information is accurate. While perfect accuracy is not possible, surely the information can meet a reasonable standard of accuracy. This could be aided by providing robust tools for doctors to inform those running the NPDB of errors and to inform doctors about the content of their files. As such, the error argument is easily defeated.

Patients do have some access to data about doctors, but there are many barriers in place. In some cases, there is a financial cost to access data. In almost all cases, the patient will need to grind through lengthy documents and penetrate the code of legal language. There is also the fact that this data is often incomplete and inaccurate.  While it could be argued that a responsible patient would expend the resources needed to research a doctor, this is an unreasonable request, and a patient should not need to do all this just to know that the doctor is competent. One reason for this is that someone seeking a doctor is likely to be sick or injured and expecting them to add on the burden of a research project is unreasonable. Also, a legitimate role of the state is to protect citizens from harm and having a clear means of identifying bad doctors would seem to fall within this.

Given the above, it seems reasonable to accept that a patient has the right to know about her doctor’s competence and should have an easy means of acquiring accurate information. This enables a patient to make an informed choice about her physician without facing an undue burden. This will also help the profession as good doctors will attract more patients and bad doctors will have a greater incentive to improve their practice.

As mentioned in my previous essay, Isis (my Siberian husky) fell victim to the ravages of time. Once a sprinting blur of fur, she was reduced to sauntering. Still, lesser beasts feared her (and to a husky, all creatures are lesser beasts) and the sun was warm in the backyard, so her life was good even at the end.  

Faced with the challenge of keeping her healthy and happy, I relied a great deal on what I learned as a philosopher. As noted in the preceding essay, my philosophical skills kept me from falling victim to the post hoc fallacy and the fallacy of anecdotal evidence. In this essay I will focus on two basic, but extremely useful methods of causal reasoning.

One of the most useful tools for causal reasoning is the method of difference. This method was famously developed by the philosopher John Stuart Mill and has been a staple in critical thinking classes since before my time. The purpose of the method is figuring out the cause of an effect, such as a husky suffering from a knuckling paw (a paw that folds over, so the dog is walking on the top of the foot rather than the bottom). The method can also be used to try to sort out the effect of a suspected cause, such as the efficacy of an herbal supplement in treating canine arthritis.

Fortunately, the method is simple. To use it, you need at least two cases: one in which the effect has occurred and one in which it has not. In terms of working out the cause, more cases are better, although more cases of something bad (like arthritis pain) would be undesirable from other standpoints. The two cases can involve the same individual at different times; the method does not require different individuals (though it works in those cases as well). For example, when sorting out Isis’ knuckling problem, the case in which the effect occurred was when Isis was suffering from knuckling and the case in which it did not was when Isis was not suffering from this problem. I also investigated other cases in which dogs suffered from knuckling issues and when they did not.

The cases in which the effect is present and those in which it is absent are then compared to determine the difference between the cases. The goal is to sort out which factor or factors made the difference. When doing this, it is important to keep in mind that it is easy to fall victim to the post hoc fallacy and conclude without adequate evidence that a difference is a cause because the effect occurred after that difference. Avoiding this mistake requires considering that the “connection” between the suspected cause and the effect might be a coincidence. For example, Isis ate some peanut butter the day she started knuckling, but it is unlikely that had any effect, especially since she had been eating peanut butter since we became a pack. It is also important to consider that an alleged cause might be an effect caused by a factor that is also producing the effect one is concerned about. For example, a person might think that a dog’s limping is causing knuckling, but they might both be effects of a third factor, such as arthritis or nerve damage.

You must also keep in mind the possibility of reversed causation, which is when the alleged cause is the effect. For example, a person might think that limping is causing knuckling, but it might turn out that the knuckling is the cause of the limping.

In some cases, sorting out the cause can be easy. For example, if a dog slips and falls, then has trouble walking, the most likely cause is the fall. But it could still be something else. In other cases, sorting out the cause can be difficult. It might be because there are many possible causal factors. For example, knuckling can be caused by many things (even Lyme disease). It might also be because there are no clear differences (such as when a dog starts limping with no clear preceding event). One useful approach is to do research using reliable sources. Another, which is a good idea with pet problems, is to refer to an expert, such as a vet. Medical tests, for example, are useful for sorting out the differences and finding a likely cause.
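The logic of the method of difference can be sketched in a few lines of code. In this hedged illustration, each case records a set of factors and whether the effect (knuckling) occurred; the factor names are hypothetical, invented for the example. A candidate cause is a factor present in every case with the effect and absent from every case without it:

```python
# A minimal sketch of Mill's method of difference.
# Factor names are hypothetical, invented for illustration.
cases = [
    # (factors present,                               effect occurred?)
    ({"peanut butter", "hard run", "nerve strain"},   True),
    ({"peanut butter", "short walk"},                 False),
]

def method_of_difference(cases):
    """Return factors present in all effect cases and absent from all
    non-effect cases: the candidate causes."""
    with_effect = [factors for factors, effect in cases if effect]
    without_effect = [factors for factors, effect in cases if not effect]
    candidates = set.intersection(*with_effect)
    for factors in without_effect:
        candidates -= factors
    return candidates

print(sorted(method_of_difference(cases)))  # ['hard run', 'nerve strain']
```

As in the essay, the method only narrows the field: here it rules out the peanut butter but leaves two candidates, and deciding between them (or spotting a coincidence, a common cause, or reversed causation) still requires further investigation.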

The same basic method can also be used in reverse, such as determining the effectiveness of a dietary supplement for treating canine arthritis. For example, when Isis started slowing down and showing signs of soreness, I started giving her senior dog food, glucosamine and extra protein. What followed was an improvement in her mobility and the absence of soreness. While the change might have been a mere coincidence, it is reasonable to consider that one or more of these factors helped her. After all, there is some scientific evidence that diet can have an influence on these things. From a practical standpoint, I decided to keep to this plan since the cost of the extras is low, they have no harmful side effects, and there is some indication that they work. I did consider that I could be wrong. Fortunately, I did have good evidence that the steroids Isis was prescribed worked as she made a remarkable improvement after starting them and there is solid scientific evidence that they are effective at treating pain and inflammation. As such, it is rational to accept that the steroids were the cause of her improvement, though this could also be a coincidence.

The second method is the method of agreement. Like difference, this requires at least two cases. Unlike difference, the effect is present in all cases. In this method, the cases exhibiting the effect (such as knuckling) are considered to find a common thread in all the cases. For example, each incident of knuckling would be examined to determine what they all have in common. The common factor (or factors) that is the most plausible cause of the effect is what should be taken as the likely cause. As with the method of difference, it is important to consider such factors as coincidence to avoid falling into a post hoc fallacy.

The method of agreement is most often used to form a hypothesis about a likely cause. The next step is, if possible, to apply the method of difference by comparing similar cases in which the effect did not occur. Roughly put, the approach would be to ask what all the cases have in common, then determine if that common factor is absent in cases in which the effect is also absent. For example, a person investigating knuckling might begin by considering what all the knuckling cases have in common and then see if that common factor is absent in cases in which knuckling did not occur.

One of the main weaknesses of these methods is that they tend to have very small sample sizes, sometimes just one individual, such as my husky. While these methods are quite useful, they can be supplemented by general causal reasoning in the form of experiments and studies, which is the subject of the next essay in this series.