Because of my work on metaphysical free will, it is hardly a shock that I am interested in whether sexual orientation is a choice. One problem with this issue is that it seems impossible to prove (or disprove) the existence of free will in this, or any, context. As Kant argued, free will seems to lie beyond the reach of our knowledge. As such, it cannot be said with certainty that a person’s sexual orientation is a matter of choice. But this is nothing special: the same can be said about a person’s political party, religion, hobbies and so on.

Laying aside metaphysical speculation, it can be assumed (or pretended) that people do have a choice in some matters. Given this assumption, the question would seem to be whether sexual orientation is in the category of things that can be reasonably assumed to be matters of choice.

On the face of it, sexual orientation is within the domain of what a person finds sexually appealing and attractive. This falls within a larger set of what a person finds appealing and attractive in general.

Thanks to science, it seems reasonable to believe that some of what people find appealing and attractive has a foundation in our neural hardwiring rather than in choice. For example, humans find symmetrical faces more attractive than non-symmetrical faces and this does not seem to be a preference we choose. Folks who like the theory of evolution often claim that this preference exists because those with symmetrical faces are often healthier and hence better “choices” for reproductive activities.  

Food preferences also involve some hardwiring: humans like salty, fatty and sweet foods, and the usual explanation also ties into evolution. For example, sweet foods are high-calorie foods but are rare in nature, hence our ancestors who really liked sweets did better at surviving than those who did not. Or some such story of survival of the sweetest.

Assuming such hardwired preferences, it makes sense that sexual preferences also involve at least some hardwiring. So, for example, a person might be hardwired to prefer light hair over dark hair. Then again, the preference might be based on experience—the person might have had positive experiences with those with light hair and thus was conditioned to have that preference. The challenge is, of course, to sort out the causal role of hardwiring from the causal role of experience (including socialization). What is left over might be what could be described as choice.

In the case of sexual orientation, it seems reasonable to have some doubts about experience being the primary factor. After all, homosexual behavior has often been condemned, discouraged and punished. As such, it seems less likely that people would be socialized into being homosexual—especially in places where being homosexual is punishable by death. However, this is not impossible—perhaps people could be somehow socialized into being gay by all the social efforts to make them be straight.

Hardwiring for sexual orientation does seem plausible. This is mainly because there seems to be a lack of evidence that homosexuality is chosen. Assuming that the options are choice, nature or nurture, then eliminating choice and nurture would leave nature. But, of course, this could be a false trilemma as there might be other options.

It can be objected that people do choose homosexual behavior and thus being homosexual is a choice. While this does have some appeal, it is important to distinguish between a person’s orientation and what the person chooses to do. A person might be heterosexual and choose to engage in homosexual activity for practical reasons or curiosity. A homosexual might act like a heterosexual to avoid being killed. However, these choices would not change their orientation. As such, my view is that while behavior can be chosen, orientation is probably not.

As a runner, I have been accused of being a masochist or at least possessing masochistic tendencies. As I routinely subject myself to pain and my previous essay about running and freedom was pain-focused, this is hardly surprising. Other runners, especially those masochistic ultra-marathon runners, are often accused of masochism.

In some cases, the accusation is not serious. Usually, people just observe that runners do things that both hurt and make little sense to nonrunners. However, some see runners as masochists in a strict sense. Being a runner and a philosopher, I find this interesting, especially when I am the one accused of being a masochist.

While some call runners masochists in jest or with a certain respect for their toughness, others mean it as a real accusation: that there is something wrong with runners and that running is deviant behavior. While runners do like to joke about being odd and different, we probably prefer not to be seen as mentally ill deviants. After all, that would indicate that we are doing something wrong—which I believe is (usually) not the case. Based on my experience of meeting thousands of runners, I think that runners are generally not masochists.

Given that runners engage in painful activities (such as speed work and racing marathons) and that they often run despite injuries, it is tempting to believe they are masochists and that I am in denial about our collective deviance.

While this does have some appeal, it rests on a confusion about masochism in terms of means and ends. For the masochist, pain is a means to the end of pleasure. The masochist does not seek pain for the sake of pain, but seeks pain to achieve pleasure. However, there is a special connection between the means of pain and the end of pleasure: for the masochist, the pleasure they desire is that which is generated specifically by pain. While a masochist can get pleasure by other means (such as drugs, cake or drug cakes), it is the desire for pleasure caused by pain that defines the masochist. So, the pain is not optional—mere pleasure is not the end, but pleasure caused by pain.

This is different from those who endure pain as part of achieving an end, be that end pleasure or some other end. For those who endure pain to achieve an end, the pain can be part of the means or, more accurately, an effect of the means. It is valuing the end that causes the person to endure the pain to achieve it—the pain is not sought out as being the “proper cause” of the end. In the case of the masochist, the pain is not endured to achieve an end—it is the “proper cause” of the end, which is pleasure.

In the case of running, runners usually see pain as something to be endured as part of the process of achieving their desired ends, such as fitness or victory. However, runners usually prefer to avoid pain when they can. For example, while I endure pain to run a race, I prefer running with as little pain as possible. This is like a person putting up with the unpleasant aspects of a job to make money—but she would prefer as little unpleasantness as possible. After all, she is in it for the money, not the unpleasant aspects of work. Likewise, a runner is typically running for some end (or ends) other than hurting herself. It just so happens that achieving that end (or ends) requires doing things that cause pain.

In my essay on running and freedom, I described how I endured pain while running the Tallahassee Half Marathon. If I were a masochist, experiencing pleasure by means of that pain would have been my primary end. However, my primary end was to run the half marathon well and the pain was an obstacle to that end. As such, I would have been glad to have had a painless start and I was pleased when the pain diminished. I enjoy the running and I do enjoy overcoming pain, but I do not enjoy the pain itself—hence the aspirin in my medicine cabinet.

While I cannot speak for all runners, my experience is that runners do not run for pain, they run despite the pain. Thus, we are not masochists. We might, however, show some poor judgment when it comes to pain and injury—but that is another matter. But I would suggest to any masochists that they do take up running, as running is really good for a person.

A few years ago, I was doing my pre-race day run and, for no apparent reason, my left leg began to hurt. I made my way home, estimating the odds of a recovery by the next day. On the morning of the race, my leg felt better and my short pre-race run went well. Just before the start, I was optimistic: it seemed my leg would be fine. Then the race started. Then the pain started.

I hobbled forward and “accelerated” to an 8:30 per mile pace (the downside of a GPS watch is that I cannot lie to myself). The beast of pain grew strong and tore at my will. Behind that beast, my fear and doubt hid—urging me to drop out with whispered pleas. At that moment of weakness, I considered doing the unthinkable: hobbling to the curb and leaving the race.

From the inside this seemed a paradigm example of freedom of the will: I could elect to push through the pain, or I could take the curb. It was all up to me. While I was once pulled from a race because of injuries, at that time I had never left one by choice—and I decided that this would not be my first. I kept going and the pain got worse.

At this point in the race, I considered that my pride was pushing me to destruction or at least a fall. Fortunately, decades of running had trained me in pain assessment: like most veteran runners I am good at distinguishing between what merely hurts and what is causing significant damage. Carefully considering the nature of the pain and the condition of my leg, I judged that it was mere pain. While I could have decided to stop, I decided to keep going. I did, however, grab as many of the high caffeine GU packs as I could—I figured that being wired would help with pain management.

Aided by the psychological boost of my self-medication (and commentary from friends about my unusually slow pace), I chose to speed up. By the time I reached mile 5 my leg had gone comfortably numb and I increased my speed, steadily catching and passing people. Seven miles went by and then I caught up with a former student. He yelled “I can’t let you pass me Dr. L!” and went into a sprint. I decided to chase after him, believing that I could still hobble a mile even if I was left with only one working leg. Fortunately, the leg held up better than my student—I got past him, then several more people, then crossed the finish line running a not too bad 1:36 half-marathon. My leg remained attached, thus vindicating my choice. I then chose to stuff pizza into my pizza port—pausing only to cheer on people and pick up my age group award.

As the above narrative indicates, my view is that I was considering my options, assessing information from my body and deciding what to do. That is, I had cast myself as having what we philosophers like to call free will. From the inside, that is what it seems like. Maybe.

Of course, it would presumably seem the same way from the inside if I lacked free will. Spinoza, for example, claims that if a stone were conscious and hurled through the air, it would think it was free to choose to move and land where it does. As Spinoza saw it, people think they are free because they are “conscious of their own actions, and ignorant of the causes by which those actions are determined.” As such, on Spinoza’s view my “decisions” were not actual decisions. That is, I could not have chosen otherwise—like the stone, I merely did what I did and, in my ignorance, believed that I had decided my course.

Hobbes takes a somewhat similar view. What I would regard as a decision-making process of assessing the pain and then picking my action, he would regard as a competition between two forces within the mechanisms of my brain. One force would be pulling towards stopping, the other towards going. Since the forces were closely matched for a moment, it felt as if I was deliberating. But the matter was determined: the go force was stronger and the outcome was set.

While current science would not bring in Spinoza’s God and would be more complicated than Hobbes’s view of the body, the basic idea would remain the same: the apparent decision making would be best explained by the working of the “neuromachinery” that is me—no choice, merely the workings of a purely mechanical (in the broad sense) organic machine. Naturally, many would throw in some quantum talk, but randomness does not provide any more freedom than strict determinism. Rolling dice does not make one free.

While I think that I am free and that I was making choices in the race, I have no way to prove that. At best, all that could be shown was that my “neuromachinery” was working normally and without unusual influence—no tumors, drugs or damage impeding the way it “should” work. Of course, some might take my behavior as clear evidence that there was something wrong, but they would be wrong.

Kant seems to have gotten it quite right: science can never prove that we have free will, but we certainly do want it. And pizza.


In the Dr. Who story Inferno, the Doctor’s malfunctioning TARDIS console sends him to a parallel universe populated by counterparts of people from his reality. Ever philosophical, the Doctor responds to his discovery by engaging in this reasoning: “An infinity of universes. Ergo an infinite number of choices. So free will is not an illusion after all. The pattern can be changed.”

While the Doctor does not go into detail about his inference, his reasoning seems to be that since the one parallel universe he ended up in is different from his own in many ways (the United Kingdom is a fascist state in that universe and the Brigadier has an eye patch), it follows that at least some of the differences are due to different choices and this entails that free will is real.

While the idea of being able to empirically confirm free will is appealing, the Doctor’s inference is flawed: the existence of an infinite number of universes and differences between at least some of them would not show that free will is real. And not just because Dr. Who is fiction. This is because differences between universes would be consistent with an absence of free will.

One possibility is that determinism is true, but different universes are, well, different. That is, each universe is a deterministic universe with no free will, yet they are not identical. This would seem to make sense. After all, two planets could be completely deterministic, yet different. As such, the people of Dr. Who’s original universe were determined to be the way they are, while the people of the parallel universe were determined to be the way they are. And they are different.

It could be objected that all (or at least some) universes are initially identical and hence any difference between them must be explained by metaphysical free will. However, even if it is granted, for the sake of argument, that all (or some) universes start out identical to each other, it still does not follow that the differences between them are due to free will.

An obvious alternative explanation is that randomness is a determining factor, and each universe is random rather than deterministic. In this case, universes could differ from each other without free will. In support of this, the fact that dice rolls differ from each other does not require that dice have free will. Random chance would suffice. In this case, the people of the Doctor’s universe turned out as they did because of chance and the same is true of their counterparts—only the dice rolls were a bit different, so their England was fascist and their Brigadier had an eye patch.

If the Doctor had ended up in a universe just like his own (which he might—after all, there would be no way to tell the difference), this would not have disproved free will. While it is unlikely that all the choices made in the two universes would be the same, given an infinity of universes it would not be impossible. As such, differences between universes or a lack thereof would prove nothing about free will.

My position, as always, is that I should believe in free will. If I am right, then it is the right thing to believe. If I am wrong, then I could not have done otherwise or perhaps it was just the result of randomness. Either way, I would have no choice. That, I think, is about all that can be sensibly said about metaphysical free will.

In the previous essay I discussed how to assess experts. While people argue based on the views of experts, they also make arguments based on studies (and experiments). While using a study in an argument is reasonable, making a good argument based on a study requires being able to rationally assess studies.

Not surprisingly, people often select the studies they believe based on fallacious reasoning. One erroneous approach is to favor a study simply because it agrees with what one already believes. The mistake is that inferring a study is right because I believe its results gets things backwards. It should first be established that the study is plausible; then it is reasonable for me to believe it.

Another erroneous approach is to accept a study as correct because one wants it to be so. For example, a liberal might accept a study that claims to prove that liberals are smarter and more generous than conservatives.  This sort of “reasoning” is the classic fallacy of wishful thinking. Wishing that something is true (or false) does not prove that the claim is true (or false).

Sometimes people attempt DIY “studies” by appealing to their own anecdotes. For example, someone might claim that poor people are lazy based on an experience with some poor person. While anecdotes can be interesting, taking an anecdote as evidence is to fall victim to the classic fallacy of anecdotal evidence.

While fully assessing a study requires expertise in the relevant field, non-experts can still make rational evaluations. The following provides a concise guide to evaluating studies and experiments.

In normal talk, people often jam together studies and experiments. While this is fine for informal purposes, the distinction is important. A properly done controlled cause-to-effect experiment is the gold standard of research, although it is not always a viable option.

The objective of such an experiment is to determine the effect of a cause and this is done by the following general method. First, a random sample is selected from the population. Second, the sample is split into two groups: the experimental group and the control group. The two groups need to be as alike as possible and the more alike the two groups, the better the experiment.

The experimental group is then exposed to the causal agent while the control group is not. Ideally, that should be the only difference between the groups. The experiment then runs its course, and the results are examined to determine if there is a statistically significant difference between the two. If there is such a difference, then it is reasonable to infer that the causal factor brought about the difference.

Assuming that the experiment was conducted properly, whether the results are statistically significant depends on the size of the sample and the difference between the control group and experimental group. The idea is that experiments with smaller samples are less able to reliably capture effects. As such, when considering whether an experiment shows there is a causal connection, it is important to know the size of the sample. After all, the difference between the experimental and control groups might be large but not significant. For example, imagine an experiment that involves 10 people. Five people get a diet drug (experimental group) while five do not (control group). Suppose that those in the experimental group lose 30% more weight than those in the control group. While this might seem impressive, it need not be statistically significant: the sample is so small that the difference could easily be due to chance.
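To make this concrete, here is a minimal sketch in Python with invented weight-loss numbers (the data, and the choice of a two-sample t-test, are my assumptions for illustration, not part of the original example). A roughly 30% gap between groups of five can easily fail a conventional significance test:

```python
# Hypothetical illustration: a ~30% difference between two groups of five
# need not be statistically significant. All numbers are invented.
from scipy import stats

control = [4.0, 6.5, 5.0, 3.5, 7.0]       # pounds lost without the drug
experimental = [6.0, 8.5, 5.5, 9.0, 5.0]  # about 30% more loss on average

t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"control mean: {sum(control) / len(control):.2f} lb")
print(f"experimental mean: {sum(experimental) / len(experimental):.2f} lb")
print(f"p-value: {p_value:.2f}")  # about 0.17 here, well above the usual 0.05 cutoff
```

With groups of several hundred showing the same relative difference, the test could well come out significant; it is the sample size, not just the size of the gap, that does the work.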

While the experiment is the gold standard, there are cases in which it would be impractical, impossible or unethical to conduct an experiment. For example, exposing people to pathogens to test their effects would be immoral. In such cases studies are used rather than experiments.

One type of study is the Nonexperimental Cause-to-Effect Study. Like the experiment, it is intended to determine the effect of a cause. The difference between the experiment and this sort of study is that those conducting the study do not expose the experimental group to the suspected cause. Rather, those selected for the experimental group were exposed to the suspected cause by their own actions or by circumstances. For example, a study of this sort might include people who were exposed to pathogens by accident. A control group is then matched to the experimental group and, as with the experiment, the more alike the groups are, the better the study.

After the study has run its course, the results are compared to see if there is a statistically significant difference between the two groups. As with the experiment, merely having a large difference between the groups need not be statistically significant.

Since a study of this sort relies on using an experimental group that was exposed to the suspected cause by the actions of those in the group or by circumstances, the study is weaker (less reliable) than the cause-to-effect experiment. After all, in the study the researchers must take what they can find rather than conducting a proper experiment.

In some cases, what is known is the effect and what is not known is the cause. For example, we might know that there is a new illness but not know what is causing it. In these cases, a Nonexperimental Effect-to-Cause Study can be used to try to sort things out.

Since this is a study rather than an experiment, those in the experimental group were not exposed to the suspected cause by those conducting the study. In fact, the cause is not known, so those in the experimental group are those showing the effect.

Since this is an effect-to-cause study, the effect is known, but the cause must be determined. This is done by running the study and determining whether some suspected causal factor occurs at a statistically significant rate among those showing the effect. If such a factor is found, then it can be tentatively taken as a causal factor—one that will probably require additional study. As with the experiment and the cause-to-effect study, the statistical significance of the results depends on the size of the sample, which is why a study of adequate size is important.

Of the three methods, the effect-to-cause study is the weakest (least reliable). One reason for this is that those showing the effect might be different in important ways from the rest of the population. For example, a study that links cancer of the mouth to chewing tobacco would involve the complication that those who chew tobacco are often ex-smokers. As such, smoking might be the actual cause rather than the chewing. To sort this out would involve a study involving chewers who are not ex-smokers.

It is also worth referring to my essay on experts—when assessing a study, it is also important to consider the quality of the experts conducting the study. If those conducting the study are biased, lack expertise, and so on, then the study would be less credible. If those conducting it are proper experts, then that increases the credibility of the study.

As a final point, there is also a reasonable concern about psychological effects. If an experiment or study involves people, what people think can influence the results. For example, if an experiment is conducted and one group knows it is getting pain medicine, the people might be influenced to think they are feeling less pain. To counter this, the common approach is a blind study/experiment in which the participants do not know which group they are in, often by the use of placebos. For example, an experiment with pain medicine would include “sugar pills” for those in the control group.

Those conducting the experiment can also be subject to psychological influences—especially if they have a stake in the outcome. As such, there are studies/experiments in which those conducting the research do not know which group is which until the end. In some cases, neither the researchers nor those in the study/experiment know which group is which—this is a double-blind experiment/study.
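As a rough sketch of how such blinding can be arranged (a hypothetical setup of my own, not a description of any actual trial), participants can be randomly assigned to coded groups, with the key linking codes to drug or placebo kept sealed from participants and researchers alike until the results are in:

```python
# Minimal sketch of double-blind assignment for a hypothetical trial.
# Everyone works only with coded participant IDs; the key mapping IDs
# to "drug" or "placebo" stays sealed until the trial ends.
import random

participants = [f"P{i:02d}" for i in range(1, 11)]
random.shuffle(participants)  # random assignment, as in a proper experiment

half = len(participants) // 2
sealed_key = {pid: "drug" for pid in participants[:half]}
sealed_key.update({pid: "placebo" for pid in participants[half:]})

# During the trial, researchers and participants see only the IDs:
for pid in sorted(sealed_key):
    print(pid, "enrolled (group hidden)")

# Only at unblinding, once the results are recorded, is the key opened:
# print(sealed_key)
```

The point of keeping the key sealed is that neither hope nor expectation, on either side, can systematically nudge the results.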

Overall, here are some key questions to ask when assessing a study:

 

  • Was the study/experiment properly conducted?
  • Was the sample size large enough?
  • Were the results statistically significant?
  • Were those conducting the study/experiment experts?

The argument from authority is a weak but useful argument if used correctly. While people rarely follow the “strict” form of the argument, using it is to infer that a claim is true based on the (alleged) expertise of the person making the claim. Unlike deductive logic, the quality of an argument from authority does not depend on its logical structure but on the quality of the expert making the claim. As a practical matter, anyone could be used as an “expert” in an argument from authority. For example, someone might claim that secondhand smoke does not cause cancer because Michael Crichton claimed that it does not. At least he did before he died. As another example, someone might claim that astral projection/travel is real because Michael Crichton also claimed it can occur.

Given that people often disagree, it is also quite common to find that alleged experts disagree with each other. For example, most medical experts claim that secondhand smoke does cause cancer. They do not, of course, claim that everyone who is exposed to it will get cancer or that no one who is not exposed to it will get cancer. This is a claim about causation in populations: if everyone were exposed to secondhand smoke, then there would be more cases of cancer than if no one was.

If you are an expert in a field, you can pick between the other experts by using your own expertise. For example, a medical doctor who is trying to decide whether to believe that secondhand smoke causes cancer can examine the literature and perhaps even conduct her own studies. Being an (actual) expert means being qualified to make an informed pick. An obvious problem is, of course, that experts pick different experts to accept as being correct.

The problem is far greater when it involves non-experts trying to pick between experts (and perhaps alleged experts). Being non-experts, they lack the expertise to make informed choices about which expert is most likely to be right. This raises the question of how to pick between experts when you are not an expert.

Not surprisingly, people tend to pick based on fallacious reasoning. One approach is to pick an expert because she agrees with what you already believe. This is not good reasoning: to infer that something is true simply because one believes it gets things backwards. It should first be established that a claim is probably true; then it should be believed (with appropriate reservations).

Another common approach is to believe an expert because they make a claim you want to be true. For example, a smoker might elect to believe someone who claims secondhand smoke does not cause cancer because he does not want to believe he might increase the chance that his kids will get cancer. This “reasoning” is the fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).

People also pick their expert based on qualities they see as positive but that are irrelevant to the person’s (logical) credibility. Factors such as height, gender, appearance, age, personality, religion, political party, wealth, friendliness, backstory, courage, and so on can influence people emotionally, but are not relevant to assessing a person’s expertise.  For example, a person giving you advice about warts might be very likeable but be completely wrong about how warts should be treated.  

Fortunately, there are standards for recognizing an expert. They are as follows.

 

  1. The person has sufficient expertise in the subject matter in question.

Claims made by a person who lacks the needed degree of expertise to make a reliable claim will not be well supported. In contrast, a person who has expertise in a subject is more likely to be right about claims in their field. The challenge is being able to judge whether a person has sufficient expertise. In general, the question is whether a person has the relevant qualities and these are assessed in terms of such factors as education, experience, reputation, accomplishments and positions.

 

  2. The claim being made by the person is within her area(s) of expertise.

If a person makes a claim about a subject outside of their expertise, then their expertise does not apply. Hence, the claim is not backed by the expertise and is not reliable. People often mistake expertise in one area (or being a celebrity) for expertise in another area. For example, an expert physicist’s claims about politics or ethics are not backed up by their expertise in physics. A person can be an expert in more than one field and there are cases where expertise in one field can be relevant in another. For example, a physicist who is also a professional ethicist would be an expert in both fields. As another example, a physicist’s expertise in nuclear weapons could be relevant to claims made in politics or ethics about nuclear weapons.

 

  3. The claims made by the expert are consistent with the views of the majority of qualified experts in the field.

This is a very important factor. As a rule, a claim that is held as correct by the majority of qualified experts in the field is the most plausible claim. The majority of experts are more likely to be right than those who disagree with the majority.

As no field has complete agreement, a degree of dispute is acceptable. How much is acceptable is, of course, a matter of serious debate.

It is also important to be aware that the majority could be wrong. That said, it is reasonable for non-experts to go with the majority opinion because non-experts are, by definition, not experts. If I am not an expert in a field, I would be hard pressed to justify picking the expert I happen to like or agree with against the view of the majority of experts.

 

  4. The person in question is not significantly biased.

Experts, being people, are subject to biases and prejudices. If someone is biased in a way that would affect the reliability of their claims, then their credibility is reduced. This is because there would be reason to believe that the claim is being made because of bias or prejudice. For example, an expert paid by an oil company who claims that fossil fuels are not causing climate change would be biased. But a biased expert’s claims could still be correct.

No one is completely objective, and a person will favor their own views. Because of this, some degree of bias must be accepted, provided that the bias is not significant. What counts as a significant degree of bias is open to dispute and can vary from case to case. For example, most would suspect that researchers who receive funding from pharmaceutical companies will be biased, while others might claim that the money would not sway them if the drugs proved ineffective or harmful.

Disagreement over bias can itself be a significant dispute. For example, those who doubt that climate change is real often assert that the climate experts are biased. Questioning an expert based on potential bias is a legitimate approach—if there is adequate evidence of bias strong enough to unduly influence them. One way to look for bias is to consider whether the expert is interested or disinterested. Or, more metaphorically, to consider whether they have “skin in the game” and stand to gain (or suffer a loss) from their claim being accepted as true. Merely disagreeing with an expert is, obviously, not proof that an expert is biased. Vague accusations that the expert has “liberal” or “conservative” views also do not count as adequate evidence. What is needed is actual evidence of bias. Anything else is most likely a mere ad hominem attack.

These standards are not infallible. However, they do provide a guide to logically picking an expert to believe. They are certainly more logical than just picking the one who says things one likes.

 

A basic moral challenge is sorting out how people should be treated. This is often formulated in terms of obligations to others, and the usual question is “what, if anything, do we owe other people?” While some would like to exclude economics from ethics, the burden of proof rests on those claiming that the realm of money deserves an exemption from ethics. While such a case might be attempted, it will be assumed here that economic matters fall under morality. But there are many approaches to morality.

While I use virtue theory as my personal ethics, I find aspects of Kant’s ethical theory appealing, so let us see what Kant’s theory might entail for economic justice. In terms of how we should treat others, Kant takes as foundational that “rational nature exists as an end in itself.”

Kant supports his view by asserting that “a man necessarily conceives his own existence as such” and this applies to all rational beings. A rational being sees itself as being an end, rather than a thing to be used as a means to an end.  In my own case, I see myself as a person who is an end and not as a thing that exists to serve the ends of others. But some other people might see me differently.

Of course, the fact that I see myself as an end would not seem to require that I extend this to other rational beings (that is, other people). After all, I could see myself as an end and regard others as means to my ends—to be used for my profit as, for example, underpaid workers.

However, Kant claims that I must regard other rational beings as ends as well. The reason is straightforward and is based on an appeal to consistency: if I am an end rather than a means because I am a rational being, then consistency requires I accept that other rational beings are ends. After all, if being a rational being makes me an end, it would do the same for others. Naturally, it could be argued that there is a relevant difference between myself and other rational beings that would warrant me treating them as means and not as ends. People have, obviously enough, long endeavored to justify treating other people as things. Slavery in America provides an example of this, as do many modern economic practices. However, there seems to be no principled way to insist on my own status as an end while denying the same to other rational beings. Which, one might suspect, is why some people wish to claim that other people are not rational beings. Or are otherwise inferior in some way that makes them suitable as means.

From his view of rational nature, Kant derives his practical imperative: “so act as to treat humanity, whether in thine own person or in that of any other, in every case as an end withal, never as means only.” This imperative does not mean that I must never treat a person as a means—that is allowed, provided I do not treat the person as a means only. So, for example, I would be morally forbidden from using people as mere means of revenue. I would, however, not be forbidden from having someone ring up my purchases at the grocery store—provided I treated the person as a person and not a mere means. One obvious challenge is sorting out what it is to treat a person as an end as opposed to just a means to an end. Some cases are obvious, such as enslaving another person. Other cases are more complex, such as hiring a person as a worker.

Many economic relationships seem to clearly violate Kant’s imperative in that they treat people as mere means and not at all as ends. To use an obvious example, if an employer treats her employees merely as means to profit and does not treat them as ends in themselves, then she is acting immorally by Kant’s standard. After all, being an employee does not rob a person of personhood.

One obvious reply is to question my starting assumption, namely that economics is not exempt from ethics. It could be argued that the relationship between employer and employee is purely economic and only economic considerations matter. That is, the workers are to be regarded as means to profit and treated in accord with this—even if doing so means treating them as things rather than people. The challenge is to show that the economic realm grants a special exemption to ethics. Of course, if it does this, then the exemption would be a general one. So, for example, people who decided to take money from the rich at gunpoint would be exempt from ethics as well. After all, if everyone is a means in economics, then the rich are just as much a means as employees and if economic coercion against people is acceptable, then so too is coercion via firearms. As always, the challenge the rich face in ethics is justifying their economic misdeeds while simultaneously condemning similar actions by the poor.

Another reply is to contend that might makes right. That is, the employer has the power and owes nothing to the employees beyond what they can force him to provide. This would make economics like the state of nature—where, as Hobbes said, “profit is the measure of right.” Of course, this leads to the same problem as the previous reply: if economics is a matter of might making right, then workers have the same right to use might against employers and the poor to use it against the rich.

One reason sometimes given to expand health care coverage is that people with health insurance are less likely to use the emergency room for treatment. This is because someone with health insurance will be more likely to use primary care and thus less likely to need emergency room treatment. It also makes sense that a person with insurance would get more preventative care and be less likely to need a trip to the emergency room.

On the face of it, reducing emergency room visits would be good. One reason is that emergency room care is expensive and reducing it would save money. Another reason is that the emergency room should be for emergencies—reducing the number of visits can help free up resources and reduce waiting times.

So, the argument goes, extending insurance coverage would reduce emergency room visits, and this is good. However, extending insurance might instead increase emergency room visits. In one seemingly credible study, expanded insurance coverage resulted in more emergency room visits.

One obvious explanation is that the insured would be more likely to use medical services for the same reason that insured motorists are more likely to use the services of mechanics: they are more likely to be able to afford the bills.

On the face of it, this does not seem bad. After all, if people can afford to go to the emergency room because they have insurance, that is better than having people suffer because they lack the means to pay. However, what is most interesting about the study is that the expansion of Medicaid coverage increased emergency room visits for treatments more suitable for a primary care environment. The increase in emergency use was significant—about 40%. The study was large enough that this is statistically significant.

Because of this, it is worth considering the impact of expanding coverage on emergency rooms. Especially if it is argued that expanding coverage would reduce costs by reducing emergency room visits.

One possibility is that the results from the Medicaid study would hold true in general, so that expansions of health care coverage would result in more emergency room visits. If an expansion of coverage results in a significant increase in emergency room visits, it would not reduce health care costs, since people would be going to the more expensive emergency room rather than seeking primary care.

But an insurance expansion might not cause significantly more unnecessary emergency room visits. One reason is that private insurance companies seem to try to deter emergency room visits by imposing higher payments on patients. In contrast, Medicaid did not impose this higher cost. Thus, those with private insurance would tend to have a financial incentive to avoid the emergency room while those on Medicaid would not, unless there was an increased cost for the Medicaid patient. While it does seem wrong to penalize people for going to the emergency room, one method of channeling patients towards non-emergency treatment is to impose a financial penalty for emergency room visits for services that could be provided by primary care facilities. One moral concern with such penalties is that some forms of care are only available through emergency rooms. For example, when I had to get rabies shots in 2023, the only option was the emergency room. But it could be replied that such treatments are unusual, hence the penalty would not affect many people.

People might use emergency rooms instead of primary care because they do not know their options. If so, if more people were better educated about medical options, they would be more likely to choose options other than the emergency room when they did not need emergency room services. Given that the emergency room is stressful and involves a long wait (especially for non-emergencies) people would probably elect for primary care when they know they have that option.  This is not to say education will be a cure-all, but it is likely to help reduce unnecessary emergency room visits. Which is certainly a worthwhile objective.

While running through Florida State University way back in December 2013, I noticed that the campus had been plastered with signs announcing that on January 1, 2014 the entire campus would be tobacco free. I was impressed by the extent of the plastering—there were plastic signs adhered to the sidewalks and many surfaces to ensure that all knew of the decree. Naturally, one of the people I saw placing the signs was smoking while doing so.

While running sometimes causes flashbacks, those signs flashed me back to my freshman English class at Marietta College. In one essay, I argued for anti-smoking proposals, including some that were draconian. Apparently possessing the power of prophecy, I argued for area bans on smoking. My motivation was somewhat selfish: I hate the smell of tobacco smoke, and it causes my eyelids to swell and makes breathing difficult. As such, like a properly political person, I thought it good and just to recast the world according to my desires and beliefs.

I thought the paper was well argued and rational. However, the professor (an avowed liberal) assigned it a grade of 0.62. She also put a frowning face on it. And she called me a fascist. Interestingly, almost everything I proposed in the paper has come to pass (the campus-wide ban being the latest). On the one hand, I do feel vindicated—if only because of my prophetic powers. On the other hand, I wobbled a bit between anarchism and authoritarianism in those days and that paper was clearly written during an authoritarian swing. Back in 2014 I reconsidered the ethics of the smoking ban and now, with Florida campuses having been smoke-free for 12 years, I have decided to revisit the issue.

While there are various ways to warrant area bans on certain behavior, three common justifications include claiming that the behavior is unpleasant, offensive or harmful. Or some combination of the three. In terms of justification, one option is to ban behavior based on its impact on the rights of others. That is, if the behavior is unpleasant, offensive or harmful to others it violates their rights to not be exposed to such behavior.

While I have no desire to observe behavior that is unpleasant, I do not have a right to not be exposed to the merely unpleasant. After all, what is unpleasant is subjective and area bans on the merely unpleasant would result in absurdity. For example, I would find someone wearing a vomit green sweater with neon pink goats unpleasant to view, but an area ban on unpleasant fashion would be absurd. The merely unpleasant does not impose enough on others to warrant banning it (providing that the unpleasantry does not cross over into harm). So, the mere fact that many people find smoking unpleasant does not warrant an area ban on it.

While I have no desire to be exposed to behavior I find offensive, I do not have a right to not be exposed to what is merely offensive. Even the very offensive. While what is offensive might be less subjective than the unpleasant, it still is subjective. As such, as with the merely unpleasant, an area ban on merely offensive behavior would lead to absurdity. For example, if the neon goats on the sweater mentioned above spelled out the words “philosophers are goat f@ckers”, I would find the sweater unpleasant and offensive. However, the merely offensive does not impose enough on my rights to warrant imposing on the rights of the offender. Naturally, offensive behavior can cross over into a violation of rights and that would warrant imposing on the offender. For example, if the sweater wearer insisted on following me and screaming “goat f@cker” at me while I am trying to teach, then that would go from being merely offensive to harassment. Thus, the fact that many people find smoking offensive would not warrant an area ban on smoking. Interestingly, it would also not warrant bans on public nudity, at least those bans based on offensiveness.

Like most people, I have no desire to be harmed by the behavior of others and I think I have a right to not be harmed (although there are cases in which I can be justly harmed). For those who prefer not to talk of rights, I am also fine with the idea that it would just be wrong to harm me (at least in most cases). As such, it should be no surprise that I think area bans on behavior that harms others are acceptable. The obvious moral grounds would be Mill’s argument about liberty: what concerns only me and does not harm others is my own business and not their business. But actions that harm others become the business of those that are harmed.

While the basic idea that it is acceptable to limit behavior that harms others is appealing, one challenge is sorting out the sorts of harm that warrant imposing on others. Going back to offensive behavior, it could be claimed that offensive behavior does cause harm. For example, someone might believe that his children would be harmed if they saw an unmarried couple kissing in public and thus claim that this should be banned from all public areas. As another example, a person might contend that seeing people catching fish would damage her emotionally because of the suffering of the fish and thus that fishing should be banned from public areas. While these two examples might seem a bit silly, there are grey areas between the offensive and the clearly harmful.

Fortunately, the situation with smoking is clear cut. Tobacco smoke is physically harmful to those who breathe it in (whether they are smoking or not). As such, someone smoking in a public area is imposing an unchosen health risk on everyone else in the area of effect. Since the area is public, smokers have no right to do this. To use an analogy, while a person has a right to wear the “goat f@cker” sweater mentioned above, she does not have a right to wear one that also constantly sprays poison. To use a less silly analogy, a person in a public area does not have the right to spit on those around her. While others could avoid this by staying away from her, she has no right to “control” the space around her with something that can harm others (spit can transmit disease). As such, it is morally acceptable to impose an area ban on smoking.

But behavior that does not harm others should not be subject to such bans. For example, drinking alcohol in public. Provided that the person is not engaging in otherwise harmful behavior, there seems to be no compelling moral reason to impose such a ban. After all, drinking a beer near people in public causes them no harm. Likewise, campus dress codes also lack a moral justification—provided that the attire does not inflict harm (like the imaginary poison spraying goat sweater). Merely being offensive or even distracting does not seem enough to warrant an area ban on moral grounds.

Pundits and politicians on the right consistently demonize the poor. For example, Fox News seems to delight in a narrative of the wicked poor destroying America. It is worth considering why the poor are demonized.

One ironic foundation for this is religion. While Jesus regards the poor as blessed and warns of the dangers of idolatry, there is a version of Christianity that sees poverty as a sign of damnation and wealth as an indicator of salvation. As some have pointed out, this view is a perversion of Christianity. Not surprisingly, some people have been criticized by pundits for heeding what Jesus said.

Another reason is that demonizing the poor allows pundits and politicians to redirect anger so that the have-less are angry at the have-nots, rather than those who have almost everything. This is classic scapegoating: the wicked poor are blamed for many of the woes besetting America. The irony is that the poor and powerless are cast as a threat to the rich and powerful.

The approach taken towards the poor follows a classic model used throughout history, one that involves presenting two distinct narratives about the target of hatred. The first is to create a narrative which presents them as subhuman, wicked, inferior and defective. In the case of the poor, the narrative is that they are stupid, lazy, drug-users, criminals, frauds, mockers and so on. This narrative is used to create contempt and hatred and to dehumanize the poor. This makes it much easier to get people to think it is morally permissible (even laudable) to treat the poor poorly.

The second narrative casts the poor as incredibly dangerous. While they have been cast as inferior by the first narrative, the second presents them as a dire threat. The narrative is that the wicked poor are destroying America by being “takers” from the “makers.” One obvious challenge is crafting a plausible narrative in which the poor and seemingly powerless can somehow destroy the rich and powerful. One solution has been to cast another group, such as the Democrats or the Jews, as being both very powerful (thus able to destroy America) yet somehow in service to the poor.

On the face of it, a little reflection should expose the absurdity of this narrative. The poor are obviously poor and lack power. After all, if they had power, they would not remain poor. As such, the idea that the poor and powerless have the power to destroy America is absurd. True, the poor could rise up in arms and engage in class warfare in the literal sense of the term—but that is not likely to happen. While the idea that the poor are being served by a wicked group, such as the Democrats, is advanced to “solve” this problem, the wicked group must also be cast as being inferior to the “true” Americans—yet also a powerful threat. This creates another absurdity that its adherents must ignore.

At this point, one might bring up “bread and circuses”—the idea that the poor destroyed the Roman Empire by forcing the rulers to provide them with bread and circuses until the empire fell apart.

There are two obvious replies to this. The first is that even if Rome was wrecked by spending on bread and circuses, it was the leaders who decided to use that approach to appease the masses. If this narrative were true, it would entail that the wealthy and powerful decided to bankrupt the state to stay in power by appeasing the many. Second, the poor who wanted bread and circuses were a symptom rather than the disease. It was not so much that the poor were destroying the empire; it was the destruction of the empire that was increasing the number of poor people.

The same could be said about the United States: while the income gap in the United States is extreme and poverty is high, it is not the poor who are causing the decline of America. Rather, poverty is the result of that decline. As such, demonizing the poor and blaming them for our woes is like blaming the fever for the disease.

Ironically, demonizing and blaming the poor serves to distract people from the real causes of our woes, such as the deranged financial system, systematic inequality, a rigged market and a political system that is beholden to the 1%. It is, however, a testament to the power of rhetoric that so many seem to accept the absurd idea that the poor and powerless are somehow the victimizers rather than the victims of the rich and powerful.