Back in 2014, Sandra Y.L. Korn proposed dispensing with academic freedom in favor of academic justice. Korn begins the essay with the example of Harvard psychology professor Richard Herrnstein’s 1971 article for the Atlantic Monthly. Herrnstein endorsed the view that intelligence is primarily hereditary and linked to race. Herrnstein was criticized for this view but was also defended by appeals to academic freedom. Korn seems to agree that the attacks on Herrnstein impinged on academic freedom. However, Korn proposes that academic justice is more important than academic freedom.

Korn uses the American Association of University Professors view of academic freedom: “Teachers are entitled to full freedom in research and in the publication of the results.” However, Korn regards the “liberal obsession” with this freedom as misplaced. 

Korn notes that there is not “full freedom” in research and publication. As Korn correctly notes, which proposals get funded and which papers get published is largely a matter of academic politics. Korn also notes, correctly, that no academic question is free from the realities of politics. From this, Korn draws a conditional conclusion: “If our university community opposes racism, sexism, and heterosexism, why should we put up with research that counters our goals simply in the name of ‘academic freedom’?”

One might suspect a false dilemma is lurking here: either there is full academic freedom or restricting it on political values is acceptable. There is not full academic freedom. Therefore, restricting it on political values is acceptable. This would be a false dilemma because there are many options between full academic freedom and such restrictions. As such, one could accept that there is not full academic freedom while also rejecting that academic freedom should be restricted on the proposed grounds.

To use an analogy to general freedom of expression, the fact that people do not possess full freedom of expression (there are limits on expression) does not entail that politically based restrictions should therefore be accepted. After all, there are many alternatives between full freedom and the specific restrictions being proposed.

To be fair to Korn, no such false dilemma might exist. Instead, Korn might be reasoning that because political values already restrict academic expression, it follows that adding additional restrictions is not a problem. To re-use the analogy to general free expression, the reasoning would be that since there are already limits on free expression, more restrictions are (or could be) acceptable. This might be seen as a common practice fallacy but could be justified by showing that the proposed restrictions are warranted. Sorting this out requires considering what Korn is proposing.

In place of the academic freedom standard, Korn proposes “a more rigorous standard: one of ‘academic justice.’ When an academic community observes research promoting or justifying oppression, it should ensure that this research does not continue.”

While Korn claims this is a more rigorous standard, it seems to be only more restrictive. There is also the challenge of rigorously and accurately defining what it is for research to promote or justify oppression. While this was of concern back in 2014, it is of even greater concern in 2026. This is because the American right has embraced the strategy of claiming that white, straight men are the truest victims of “woke” oppression. This is part of a broader approach of turning the terms, tactics and strategies used by the left against them. For example, the right has used accusations of antisemitism to attack institutions of higher education.

Back in 2014, Korn proposed that students, faculty and workers should organize “to make our universities look as we want them to do.” While that sounds democratic, there is still the concern about what standards should be used.

While there are paradigm cases (like the institutionalized racism of pre-civil rights America), people do use the term “oppression” to refer to what merely offends them. In fact, Korn refers to the offensiveness of a person’s comment as grounds for removing a professor.

 One danger is that the vagueness of this principle could be used to suppress and oppress research that vocal or influential people find offensive. There is also the concern that such a principle would create a hammer to beat down those who present dissenting or unpopular views. Ironically, this principle from 2014 would be ideal for “conversion” into a tool for the right: they could claim that “woke” and “DEI” views oppress white, straight men and hence “academic justice” would require suppressing such views. This would, of course, strike some as a perversion of the principle.

In closing, I favor justice and what is morally good. As such, I think people should be held morally accountable for their actions and statements. However, I do oppose restrictions on academic freedom for the same reason I oppose restrictions on the general freedom of expression. In the case of academic freedom, what should matter is whether the research is properly conducted and whether the claims are well-supported. To explicitly adopt a principle for deciding what is allowed and what is forbidden based on ideological views would, as history shows, have a chilling effect on research and academics. While the academic system is far from perfect, flawed research and false claims do get sorted out. Adding in a political test would not seem to help with reaching the goal of truth. Ironically, this sort of political test under the guise of addressing (imagined) oppression of white straight men (like me) is now being used by the right.

In terms of when academic freedom should be restricted, this is when an action creates enough harm to warrant limiting the freedom. Merely offending people is not enough to warrant restrictions—even if people are very offended. Threatening people or engaging in falsification of research results would not be protected by academic freedom.

As such, back in 2014 I was opposed to Korn’s modest proposal to impose more political restrictions on academic freedom. As Korn noted, there were already many restrictions in place—and there seemed to be no compelling reasons to add more. As this is being written in 2026, the right is using their own version of Korn’s principle and attempting to achieve their end of shaping the academy to fit their values. As would be suspected, I also oppose this.

On my runs, I often find lost phones, credit cards, wallets, IDs and other items. A few years ago, I came across a wallet fat with cash and credit cards. As always, I sought out the owner and returned it. Being a philosopher, I’m interested in the ethics of this.

While using found credit cards would be a bad idea and a crime, found cash is different. After all, cash is cash and there is nothing to link cash to a specific person. As money is useful, a person who finds a wallet stuffed with cash would have a practical reason to keep it. One exception would be if the reward for returning it exceeded the value of the cash—but the finder would have no idea if this was the case. So, from a purely practical standpoint, keeping cash would be a smart choice. A person could even return the credit cards and other items in the wallet, plausibly claiming that it was otherwise empty when found. However, a smart choice need not be the right choice.

One argument in favor of returning found items can be built on the golden rule: do unto others as you would have them do unto you. More formally, this is moral reasoning involving the method of reversing the situation. Since I would want my lost property returned, I should treat others the same, unless I can find relevant differences that would warrant treating them differently. Utilitarian grounds can also come into play, though they can cut the other way. For example, someone who is poor might contend that it would not be wrong to keep money she found in a rich person’s wallet because the money would do her more good than it would the rich person: such a small loss would not affect him, while such a gain would benefit her significantly.

Since I am no longer poor and find only relatively small sums of money (hundreds of dollars at most), I have had the luxury of not being tempted. However, even when I was a poor graduate student, I still returned whatever I found, even when I honestly believed that I would put the money to better use than the original owner. This is due to ethics rather than some sort of devotion to America’s horrific class system.

One of the reasons is my belief that I do have obligations to help others, especially when the cost to me is low relative to the aid rendered. In the case of finding someone’s wallet or phone, I know that the loss would be a significant inconvenience for most people. In the case of a wallet, a person will need to replace a driver’s license, credit cards, insurance cards and worry about identity theft. It is easy for me to return the wallet—either by dropping it off with police or contacting the person after finding them via Facebook or some other means. That said, the challenge is justifying my view that I am so obligated. However, I would contend that in such cases, the burden of proof lies on the selfish rather than the altruistic.

Another reason is that I believe I should not steal. While keeping something you find differs from active theft (this could be seen as being like the distinction between killing and letting die), it does seem to be a form of theft. After all, I would be acquiring what does not belong to me by choosing not to return it. Naturally, if I have no means of returning it to the rightful owner (such as finding a quarter), then keeping it would probably not be theft. But it could be contended that keeping lost property is not theft (even when it could be returned easily), perhaps on the ancient principle of finders keepers, losers weepers. It could also be contended that theft itself is acceptable, though that would be a challenging position to defend. However, the burden of proof would seem to rest on those who claim that theft is acceptable or that keeping lost property that could easily be returned is not theft. Naturally, there can be some specific exceptions.

I also return what I find for two selfish reasons. The first is that I want to build the sort of world I want to live in—and in that world people return what is lost. While my acting the way I want the world to be is a tiny thing, it is more than nothing. Second, I feel a psychological compulsion to return things I find—so I must do it for peace of mind.

Because of my work on metaphysical free will, it is hardly a shock that I am interested in whether sexual orientation is a choice. One problem with this issue is it seems impossible to prove (or disprove) the existence of free will in this, or any, context. As Kant argued, free will seems to lie beyond the reach of our knowledge. As such, it cannot be said with certainty that a person’s sexual orientation is a matter of choice. But this is nothing special: the same can be said about the person’s political party, religion, hobbies and so on.

Laying aside metaphysical speculation, it can be assumed (or pretended) that people do have a choice in some matters. Given this assumption, the question would seem to be whether sexual orientation is in the category of things that can be reasonably assumed to be matters of choice.

On the face of it, sexual orientation is within the domain of what a person finds sexually appealing and attractive. This falls within a larger set of what a person finds appealing and attractive in general.

Thanks to science, it seems reasonable to believe that some of what people find appealing and attractive has a foundation in our neural hardwiring rather than in choice. For example, humans find symmetrical faces more attractive than non-symmetrical faces and this does not seem to be a preference we choose. Folks who like the theory of evolution often claim that this preference exists because those with symmetrical faces are often healthier and hence better “choices” for reproductive activities.  

Food preferences also involve some hardwiring: humans like salty, fatty and sweet foods, and the usual explanation also ties into evolution. For example, sweet foods are high-calorie foods but are rare in nature, hence our ancestors who really liked sweets did better at surviving than those who did not. Or some such story of survival of the sweetest.

Assuming such hardwired preferences, it makes sense that sexual preferences also involve at least some hardwiring. So, for example, a person might be hardwired to prefer light hair over dark hair. Then again, the preference might be based on experience—the person might have had positive experiences with those with light hair and thus was conditioned to have that preference. The challenge is, of course, to sort out the causal role of hardwiring from the causal role of experience (including socialization). What is left over might be what could be described as choice.

In the case of sexual orientation, it seems reasonable to have some doubts about experience being the primary factor. After all, homosexual behavior has often been condemned, discouraged and punished. As such, it seems less likely that people would be socialized into being homosexual—especially in places where being homosexual is punishable by death. However, this is not impossible—perhaps people could be somehow socialized into being gay by all the social efforts to make them be straight.

Hardwiring for sexual orientation does seem plausible. This is mainly because there seems to be a lack of evidence that homosexuality is chosen. Assuming that the options are choice, nature or nurture, then eliminating choice and nurture would leave nature. But, of course, this could be a false trilemma as there might be other options.

It can be objected that people do choose homosexual behavior and thus being homosexual is a choice. While this does have some appeal, it is important to distinguish between a person’s orientation and what the person chooses to do. A person might be heterosexual and choose to engage in homosexual activity for practical reasons or curiosity. A homosexual might act like a heterosexual to avoid being killed. However, these choices would not change their orientation. As such, my view is that while behavior can be chosen, orientation is probably not.

As a runner, I have been accused of being a masochist or at least possessing masochistic tendencies. As I routinely subject myself to pain and my previous essay about running and freedom was pain focused, this is hardly surprising. Other runners, especially those masochistic ultra-marathon runners, are often accused of masochism.

In some cases, the accusation is not serious. Usually, people just observe that runners do things that both hurt and make little sense to nonrunners. However, some see runners as masochists in a strict sense. Being a runner and a philosopher, I find this interesting, especially when I am the one accused of being a masochist.

While some say runners are masochists in jest or with some respect for the toughness of runners, others present it as an accusation: that there is something wrong with runners and that running is deviant behavior. While runners do like to joke about being odd and different, we probably prefer not to be seen as mentally ill deviants. After all, that would indicate that we are doing something wrong—which I believe is (usually) not the case. Based on my experience of meeting thousands of runners, I think that runners are generally not masochists.

Given that runners engage in painful activities (such as speed work and racing marathons) and that they often run despite injuries, it is tempting to believe they are masochists and that I am in denial about our collective deviance.

While this does have some appeal, it rests on a confusion about masochism in terms of means and ends. For the masochist, pain is a means to the end of pleasure. The masochist does not seek pain for the sake of pain, but seeks pain to achieve pleasure. However, there is a special connection between the means of pain and the end of pleasure: for the masochist, the pleasure they desire is that which is generated specifically by pain. While a masochist can get pleasure by other means (such as drugs, cake or drug cakes), it is the desire for pleasure caused by pain that defines the masochist. So, the pain is not optional—mere pleasure is not the end, but pleasure caused by pain.

This is different from those who endure pain as part of achieving an end, be that end pleasure or some other end. For those who endure pain to achieve an end, the pain can be part of the means or, more accurately, an effect of the means. It is valuing the end that causes the person to endure the pain to achieve the end—the pain is not sought out as being the “proper cause” of the end. In the case of the masochist, the pain is not endured to achieve an end—it is the “proper cause” of the end, which is pleasure.

In the case of running, runners usually see pain as something to be endured as part of the process of achieving their desired ends, such as fitness or victory. However, runners usually prefer to avoid pain when they can. For example, while I endure pain to run a race, I prefer running with as little pain as possible. This is like a person putting up with the unpleasant aspects of a job to make money—but preferring as little unpleasantness as possible. After all, such a person is in it for the money, not the unpleasant aspects of the work. Likewise, a runner is typically running for some end (or ends) other than hurting herself. It just so happens that achieving that end (or ends) requires doing things that cause pain.

In my essay on running and freedom, I described how I endured pain while running the Tallahassee Half Marathon. If I were a masochist, experiencing pleasure by means of that pain would have been my primary end. However, my primary end was to run the half marathon well and the pain was an obstacle to that end. As such, I would have been glad to have had a painless start and I was pleased when the pain diminished. I enjoy the running and I do enjoy overcoming pain, but I do not enjoy the pain itself—hence the aspirin in my medicine cabinet.

While I cannot speak for all runners, my experience is that runners do not run for pain, they run despite the pain. Thus, we are not masochists. We might, however, show some poor judgment when it comes to pain and injury—but that is another matter. But I would suggest to any masochists that they do take up running, as running is really good for a person.

A few years ago, I was doing my pre-race day run and, for no apparent reason, my left leg began to hurt. I made my way home, estimating the odds of a recovery by the next day. On the morning of the race, my leg felt better and my short pre-race run went well. Just before the start, I was optimistic: it seemed my leg would be fine. Then the race started. Then the pain started.

I hobbled forward and “accelerated” to an 8:30-per-mile pace (the downside of a GPS watch is that I cannot lie to myself). The beast of pain grew strong and tore at the armor of my will. Behind that armor, my fear and doubt hid—urging me to drop out with whispered pleas. At that moment of weakness, I considered doing the unthinkable: hobbling to the curb and leaving the race.

From the inside this seemed a paradigm example of freedom of the will: I could elect to push through the pain, or I could take the curb. It was all up to me. While I was once pulled from a race because of injuries, at that time I had never left one by choice—and I decided that this would not be my first. I kept going and the pain got worse.

At this point in the race, I considered that my pride was pushing me to destruction or at least a fall. Fortunately, decades of running had trained me in pain assessment: like most veteran runners I am good at distinguishing between what merely hurts and what is causing significant damage. Carefully considering the nature of the pain and the condition of my leg, I judged that it was mere pain. While I could have decided to stop, I decided to keep going. I did, however, grab as many of the high caffeine GU packs as I could—I figured that being wired would help with pain management.

Aided by the psychological boost of my self-medication (and commentary from friends about my unusually slow pace), I chose to speed up. By the time I reached mile 5 my leg had gone comfortably numb and I increased my speed, steadily catching and passing people. Seven miles went by and then I caught up with a former student. He yelled “I can’t let you pass me Dr. L!” and went into a sprint. I decided to chase after him, believing that I could still hobble a mile even if I was left with only one working leg. Fortunately, the leg held up better than my student—I got past him, then several more people, then crossed the finish line running a not too bad 1:36 half-marathon. My leg remained attached, thus vindicating my choice. I then chose to stuff pizza into my pizza port—pausing only to cheer on people and pick up my age group award.

As the above narrative indicates, my view is that I was considering my options, assessing information from my body and deciding what to do. That is, I had cast myself as having what we philosophers like to call free will. From the inside, that is what it seems like. Maybe.

Of course, it would presumably seem the same way from the inside if I lacked free will. Spinoza, for example, claims that if a stone were conscious and hurled through the air, it would think it was free to choose to move and land where it does. As Spinoza saw it, people think they are free because they are “conscious of their own actions, and ignorant of the causes by which those actions are determined.” As such, on Spinoza’s view my “decisions” were not actual decisions. That is, I could not have chosen otherwise—like the stone, I merely did what I did and, in my ignorance, believed that I had decided my course.

Hobbes takes a somewhat similar view. What I would regard as a decision making process of assessing the pain and then picking my action, he would regard as a competition between two competing forces within the mechanisms of my brain. One force would be pulling towards stopping, the other towards going. Since the forces were closely matched for a moment, it felt as if I was deliberating. But the matter was determined: the go force was stronger and the outcome was set.

While current science would not bring in Spinoza’s God and would be more complicated than Hobbes’s view of the body, the basic idea would remain the same: the apparent decision making would be best explained by the working of the “neuromachinery” that is me—no choice, merely the workings of a purely mechanical (in the broad sense) organic machine. Naturally, many would throw in some quantum talk, but randomness does not provide any more freedom than strict determinism. Rolling dice does not make one free.

While I think that I am free and that I was making choices in the race, I have no way to prove that. At best, all that could be shown was that my “neuromachinery” was working normally and without unusual influence—no tumors, drugs or damage impeding the way it “should” work. Of course, some might take my behavior as clear evidence that there was something wrong, but they would be wrong.

Kant seems to have gotten it quite right: science can never prove that we have free will, but we certainly do want it. And pizza.


In the Doctor Who story Inferno, the Doctor’s malfunctioning TARDIS console sends him to a parallel universe populated by counterparts of people from his reality. Ever philosophical, the Doctor responds to his discovery by engaging in this reasoning: “An infinity of universes. Ergo an infinite number of choices. So free will is not an illusion after all. The pattern can be changed.”

While the Doctor does not go into detail about his inference, his reasoning seems to be that since the one parallel universe he ended up in is different from his own in many ways (the United Kingdom is a fascist state in that universe and the Brigadier has an eye patch), it follows that at least some of the differences are due to different choices and this entails that free will is real.

While the idea of being able to empirically confirm free will is appealing, the Doctor’s inference is flawed: the existence of an infinite number of universes and differences between at least some of them would not show that free will is real. And not just because Doctor Who is fiction. This is because the existence of differences between universes would be consistent with an absence of free will.

One possibility is that determinism is true, but different universes are, well, different. That is, each universe is a deterministic universe with no free will, yet they are not identical. This would seem to make sense. After all, two planets could be completely deterministic, yet different. As such, the people of the Doctor’s original universe were determined to be the way they are, while the people of the parallel universe were determined to be the way they are. And they are different.

It could be objected that all (or at least some) universes are initially identical and hence any difference between them must be explained by metaphysical free will. However, even if it is granted, for the sake of argument, that all (or some) universes start out identical to each other, it still does not follow that differences between them are due to free will.

An obvious alternative explanation is that randomness is a determining factor, and each universe is random rather than deterministic. In this case, universes could differ from each other without free will. In support of this, the fact that dice rolls differ from each other does not require that dice have free will. Random chance would suffice. In this case, the people of the Doctor’s universe turned out as they did because of chance and the same is true of their counterparts—only the dice rolls were a bit different, so their England was fascist and their Brigadier had an eye patch.

If the Doctor had ended up in a universe just like his own (which he might—after all, there would be no way to tell the difference), this would not have disproved free will. While it is unlikely that all the choices made in the two universes would be the same, given an infinity of universes it would not be impossible. As such, differences between universes or a lack thereof would prove nothing about free will.

My position, as always, is that I should believe in free will. If I am right, then it is the right thing to believe. If I am wrong, then I could not have done otherwise or perhaps it was just the result of randomness. Either way, I would have no choice. That, I think, is about all that can be sensibly said about metaphysical free will.

In the previous essay I discussed how to assess experts. While people argue based on the views of experts, they also make arguments based on studies (and experiments). While using a study in an argument is reasonable, making a good argument based on a study requires being able to rationally assess studies.

Not surprisingly, people often select the studies they believe based on fallacious reasoning. One erroneous approach is to favor a study simply because it agrees with what one already believes. The mistake is that inferring a study is right because I believe its results gets things backwards: it should first be established that the study is plausible, and then it is reasonable for me to believe it.

Another erroneous approach is to accept a study as correct because one wants it to be so. For example, a liberal might accept a study that claims to prove that liberals are smarter and more generous than conservatives.  This sort of “reasoning” is the classic fallacy of wishful thinking. Wishing that something is true (or false) does not prove that the claim is true (or false).

Sometimes people attempt DIY “studies” by appealing to their own anecdotes. For example, someone might claim that poor people are lazy based on an experience with some poor person. While anecdotes can be interesting, taking an anecdote as evidence is to fall victim to the classic fallacy of anecdotal evidence.

While fully assessing a study requires expertise in the relevant field, non-experts can still make rational evaluations. The following provides a concise guide to evaluating studies and experiments.

In normal talk, people often jam together studies and experiments. While this is fine for informal purposes, the distinction is important. A properly done controlled cause-to-effect experiment is the gold standard of research, although it is not always a viable option.

The objective of such an experiment is to determine the effect of a cause and this is done by the following general method. First, a random sample is selected from the population. Second, the sample is split into two groups: the experimental group and the control group. The two groups need to be as alike as possible and the more alike the two groups, the better the experiment.

The experimental group is then exposed to the causal agent while the control group is not. Ideally, that should be the only difference between the groups. The experiment then runs its course, and the results are examined to determine if there is a statistically significant difference between the two. If there is such a difference, then it is reasonable to infer that the causal factor brought about the difference.
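The random-assignment step described above can be sketched in a few lines of Python. This is a minimal illustration rather than a full experimental protocol, and the participant names are made up:

```python
import random

def assign_groups(sample, seed=None):
    """Randomly split a sample into equal-sized experimental and control groups."""
    rng = random.Random(seed)
    shuffled = sample[:]          # copy so the original sample is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical sample of 10 participants.
participants = [f"p{i}" for i in range(10)]
experimental, control = assign_groups(participants, seed=42)
```

Because the split is random, any pre-existing differences among participants should, on average, be spread across both groups, which is what makes the later comparison meaningful.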

Assuming that the experiment was conducted properly, whether the results are statistically significant depends on the size of the sample and the difference between the control group and experimental group. The idea is that experiments with smaller samples are less able to reliably capture effects. As such, when considering whether an experiment shows there is a causal connection, it is important to know the size of the sample. After all, the difference between the experimental and control groups might be large but not significant. For example, imagine an experiment that involves 10 people. 5 people get a diet drug (experimental group) while 5 do not (control group). Suppose that those in the experimental group lose 30% more weight than those in the control group. While this might seem impressive, it is not statistically significant: with such a small sample, the difference could easily be due to chance alone.
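The diet-drug example can be made concrete with an exact permutation test, which asks how often a reshuffling of the same people into two groups would produce a difference in means at least as large as the one observed. The weight-loss figures below are invented for illustration: the treatment group loses 30% more on average, yet the test does not come out significant.

```python
from itertools import combinations

def perm_test(control, treatment):
    """Exact two-sided permutation test on the difference in group means."""
    observed = sum(treatment) / len(treatment) - sum(control) / len(control)
    pooled = control + treatment
    n = len(treatment)
    total = extreme = 0
    # Enumerate every way of picking a "treatment" group from the pooled data.
    for group in combinations(range(len(pooled)), n):
        g = [pooled[i] for i in group]
        rest = [pooled[i] for i in range(len(pooled)) if i not in group]
        diff = sum(g) / len(g) - sum(rest) / len(rest)
        total += 1
        if abs(diff) >= abs(observed) - 1e-12:
            extreme += 1
    return observed, extreme / total

# Made-up weight loss in pounds; treatment mean (3.9) is 30% above control mean (3.0).
control = [1.0, 2.0, 3.0, 4.0, 5.0]
treatment = [1.9, 2.9, 3.9, 4.9, 5.9]
obs, p = perm_test(control, treatment)
```

With only five people per group, the p-value here lands well above the conventional 0.05 threshold: a large-looking difference, but one that chance alone could readily produce.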

While the experiment is the gold standard, there are cases in which it would be impractical, impossible or unethical to conduct an experiment. For example, exposing people to pathogens to test their effects would be immoral. In such cases studies are used rather than experiments.

One type of study is the Nonexperimental Cause-to-Effect Study. Like the experiment, it is intended to determine the effect of a cause. The difference between the experiment and this sort of study is that those conducting the study do not expose the experimental group to the suspected cause. Rather, those selected for the experimental group were exposed to the suspected cause by their own actions or by circumstances. For example, a study of this sort might include people who were exposed to pathogens by accident. A control group is then matched to the experimental group and, as with the experiment, the more alike the groups are, the better the study.

After the study has run its course, the results are compared to see if there is a statistically significant difference between the two groups. As with the experiment, merely having a large difference between the groups need not be statistically significant.

Since a study of this sort relies on using an experimental group that was exposed to the suspected cause by the actions of those in the group or by circumstances, the study is weaker (less reliable) than the cause-to-effect experiment. After all, in the study the researchers must take what they can find rather than conducting a proper experiment.

In some cases, what is known is the effect and what is not known is the cause. For example, we might know that there is a new illness but not know what is causing it. In these cases, a Nonexperimental Effect-to-Cause Study can be used to try to sort things out.

Since this is a study rather than an experiment, those in the experimental group were not exposed to the suspected cause by those conducting the study. In fact, the cause is not known, so those in the experimental group are those showing the effect.

Since this is an effect-to-cause study, the effect is known, but the cause must be determined. This is done by running the study and determining if there is a statistically significant suspected causal factor. If such a factor is found, then that can be tentatively taken as a causal factor—one that will probably require additional study. As with the experiment and the cause-to-effect study, the statistical significance of the results depends on the size of the sample, which is why a study of adequate size is important.

Of the three methods, the effect-to-cause study is the weakest (least reliable). One reason for this is that those showing the effect might be different in important ways from the rest of the population. For example, a study that links cancer of the mouth to chewing tobacco would involve the complication that those who chew tobacco are often ex-smokers. As such, smoking might be the actual cause rather than the chewing. To sort this out would involve a study involving chewers who are not ex-smokers.
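The confounding worry can be made concrete with a toy calculation. All counts below are invented for illustration: within each smoking-history stratum the cancer rate is identical for chewers and non-chewers, yet the aggregated comparison makes chewing look like it more than doubles the risk, simply because most chewers in this made-up data are ex-smokers.

```python
# Invented counts for illustration: (chews, ex_smoker, people, cancers)
data = [
    (True,  True,  200, 20),   # chewers, ex-smokers:       10% cancer rate
    (True,  False,  50,  1),   # chewers, never smoked:      2% cancer rate
    (False, True,   50,  5),   # non-chewers, ex-smokers:   10% cancer rate
    (False, False, 200,  4),   # non-chewers, never smoked:  2% cancer rate
]

def rate(rows):
    """Cancer rate across a set of (chews, ex_smoker, people, cancers) rows."""
    return sum(r[3] for r in rows) / sum(r[2] for r in rows)

# Aggregated comparison: chewing looks like it more than doubles risk.
chewers = rate([r for r in data if r[0]])          # 21/250 = 8.4%
non_chewers = rate([r for r in data if not r[0]])  # 9/250  = 3.6%

# Stratified comparison: within each smoking-history stratum the rates
# are identical, so in this toy data the apparent effect of chewing is
# carried entirely by the ex-smoker confound.
ex_chew = rate([r for r in data if r[0] and r[1]])              # 10%
ex_nochew = rate([r for r in data if not r[0] and r[1]])        # 10%
never_chew = rate([r for r in data if r[0] and not r[1]])       # 2%
never_nochew = rate([r for r in data if not r[0] and not r[1]]) # 2%

print(f"aggregated: {chewers:.1%} vs {non_chewers:.1%}")
print(f"ex-smokers: {ex_chew:.1%} vs {ex_nochew:.1%}")
print(f"never-smokers: {never_chew:.1%} vs {never_nochew:.1%}")
```

A study restricted to chewers and non-chewers who never smoked (the second and fourth rows) would avoid this particular confound, which is the point of the suggested follow-up study.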

It is also worth referring to my essay on experts—when assessing a study, it is also important to consider the quality of the experts conducting the study. If those conducting the study are biased, lack expertise, and so on, then the study would be less credible. If those conducting it are proper experts, then that increases the credibility of the study.

As a final point, there is also a reasonable concern about psychological effects. If an experiment or study involves people, what people think can influence the results. For example, if an experiment is conducted and one group knows it is getting pain medicine, the people might be influenced to think they are feeling less pain. To counter this, the common approach is a blind study/experiment in which the participants do not know which group they are in, often by the use of placebos. For example, an experiment with pain medicine would include “sugar pills” for those in the control group.

Those conducting the experiment can also be subject to psychological influences—especially if they have a stake in the outcome. As such, there are studies/experiments in which those conducting the research do not know which group is which until the end. In some cases, neither the researchers nor those in the study/experiment know which group is which—this is a double-blind experiment/study.

Overall, here are some key questions to ask when assessing a study:


  • Was the study/experiment properly conducted?
  • Was the sample size large enough?
  • Were the results statistically significant?
  • Were those conducting the study/experiment experts?

The argument from authority is a weak but useful argument if used correctly. While people rarely follow the “strict” form of the argument, to use it is to infer that a claim is true based on the (alleged) expertise of the person making the claim. Unlike deductive logic, the quality of an argument from authority does not depend on its logical structure but on the quality of the expert making the claim. As a practical matter, anyone could be used as an “expert” in an argument from authority. For example, someone might claim that secondhand smoke does not cause cancer because Michael Crichton claimed that it does not. At least he did before he died. As another example, someone might claim that astral projection/travel is real because Michael Crichton also claimed it can occur. Given that people often disagree, it is also quite common to find that alleged experts disagree with each other. For example, most medical experts claim that secondhand smoke does cause cancer. They do not, of course, claim that everyone who is exposed to it will get cancer or that no one who avoids it will get cancer. This is a claim about causation in populations: if everyone were exposed to secondhand smoke, then there would be more cases of cancer than if no one was.

If you are an expert in a field, you can pick between the other experts by using your own expertise. For example, a medical doctor who is trying to decide whether to believe that secondhand smoke causes cancer can examine the literature and perhaps even conduct her own studies. Being an (actual) expert means being qualified to make an informed pick. An obvious problem is, of course, that experts pick different experts to accept as being correct.

The problem is far greater when it involves non-experts trying to pick between experts (and perhaps alleged experts). Being non-experts, they lack the expertise to make informed choices about which expert is most likely to be right. This raises the question of how to pick between experts when you are not an expert.

Not surprisingly, people tend to pick based on fallacious reasoning. One approach is to pick an expert because she agrees with what you already believe. This is not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it should be believed (with appropriate reservations).

Another common approach is to believe an expert because they make a claim you want to be true. For example, a smoker might elect to believe someone who claims secondhand smoke does not cause cancer because he does not want to believe he might increase the chance that his kids will get cancer. This “reasoning” is the fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).

People also pick their expert based on qualities they see as positive but that are irrelevant to the person’s (logical) credibility. Factors such as height, gender, appearance, age, personality, religion, political party, wealth, friendliness, backstory, courage, and so on can influence people emotionally, but are not relevant to assessing a person’s expertise.  For example, a person giving you advice about warts might be very likeable but be completely wrong about how warts should be treated.  

Fortunately, there are standards for recognizing an expert. They are as follows.


  1. The person has sufficient expertise in the subject matter in question.

Claims made by a person who lacks the needed degree of expertise to make a reliable claim will not be well supported. In contrast, a person who has expertise in a subject is more likely to be right about claims in their field. The challenge is being able to judge whether a person has sufficient expertise. In general, the question is whether a person has the relevant qualities and these are assessed in terms of such factors as education, experience, reputation, accomplishments and positions.


  2. The claim being made by the person is within her area(s) of expertise.

If a person makes a claim about a subject outside of their expertise, then their expertise does not apply. Hence, the claim is not backed by the expertise and is not reliable. People often mistake expertise in one area (or being a celebrity) for expertise in another area. For example, an expert physicist’s claims about politics or ethics are not backed up by their expertise in physics. A person can be an expert in more than one field and there are cases where expertise in one field can be relevant in another. For example, a physicist who is also a professional ethicist would be an expert in both fields. As another example, a physicist’s expertise in nuclear weapons could be relevant to claims made in politics or ethics about nuclear weapons.


  3. The claims made by the expert are consistent with the views of the majority of qualified experts in the field.

This is a very important factor. As a rule, a claim that is held as correct by the majority of qualified experts in the field is the most plausible claim. The majority of experts are more likely to be right than those who disagree with the majority.

As no field has complete agreement, a degree of dispute is acceptable. How much is acceptable is, of course, a matter of serious debate.

It is also important to be aware that the majority could be wrong. That said, it is reasonable for non-experts to go with the majority opinion because non-experts are, by definition, not experts. If I am not an expert in a field, I would be hard pressed to justify picking the expert I happen to like or agree with against the view of the majority of experts.


  4. The person in question is not significantly biased.

Experts, being people, are subject to biases and prejudices. If someone is biased in a way that would affect the reliability of their claims, then their credibility is reduced. This is because there would be reason to believe that the claim is being made because of bias or prejudice rather than evidence. For example, an expert paid by an oil company who claims that fossil fuels are not causing climate change would be biased. But a biased expert’s claims could still be correct.

No one is completely objective, and a person will favor their own views. Because of this, some degree of bias must be accepted, provided that the bias is not significant. What counts as a significant degree of bias is open to dispute and can vary from case to case. For example, most would suspect that researchers who receive funding from pharmaceutical companies will be biased, while others might claim that the money would not sway them if the drugs proved ineffective or harmful.

Disagreement over bias can itself be a significant dispute. For example, those who doubt that climate change is real often assert that the climate experts are biased. Questioning an expert based on potential bias is a legitimate approach—if there is adequate evidence of bias strong enough to unduly influence them. One way to look for bias is to consider whether the expert is interested or disinterested. Or, more metaphorically, to consider whether they have “skin in the game” and stand to gain (or suffer a loss) from their claim being accepted as true. Merely disagreeing with an expert is, obviously, not proof that the expert is biased. Vague accusations that the expert has “liberal” or “conservative” views also do not count as adequate evidence. What is needed is actual evidence of bias. Anything else is most likely a mere ad hominem attack.

These standards are not infallible. However, they do provide a guide to logically picking an expert to believe. They are certainly more logical than just picking the one who says things one likes.


A basic moral challenge is sorting out how people should be treated. This is often formulated in terms of obligations to others, and the usual question is “what, if anything, do we owe other people?” While some would like to exclude economics from ethics, the burden of proof rests on those claiming that the realm of money deserves an exemption from ethics. While such a case might be made, it will be assumed here that economic matters fall under morality. But there are many approaches to morality.

While I use virtue theory as my personal ethics, I find aspects of Kant’s ethical theory appealing, so let us see what Kant’s theory might entail for economic justice. In terms of how we should treat others, Kant takes as foundational that “rational nature exists as an end in itself.”

Kant supports his view by asserting that “a man necessarily conceives his own existence as such” and this applies to all rational beings. A rational being sees itself as being an end, rather than a thing to be used as a means to an end.  In my own case, I see myself as a person who is an end and not as a thing that exists to serve the ends of others. But some other people might see me differently.

Of course, the fact that I see myself as an end would not seem to require that I extend this to other rational beings (that is, other people). After all, I could see myself as an end and regard others as means to my ends—to be used for my profit as, for example, underpaid workers.

However, Kant claims that I must regard other rational beings as ends as well. The reason is straightforward and is based on an appeal to consistency: if I am an end rather than a means because I am a rational being, then consistency requires I accept that other rational beings are ends. After all, if being a rational being makes me an end, it would do the same for others. Naturally, it could be argued that there is a relevant difference between myself and other rational beings that would warrant me treating them as means and not as ends. People have, obviously enough, long endeavored to justify treating other people as things. Slavery in America provides an example of this, as do many modern economic practices. However, there seems to be no principled way to insist on my own status as an end while denying the same to other rational beings. Which, one might suspect, is why some people wish to claim that other people are not rational beings. Or are otherwise inferior in some way that makes them suitable as means.

From his view of rational nature, Kant derives his practical imperative: “so act as to treat humanity, whether in thine own person or in that of any other, in every case as an end withal, never as means only.” This imperative does not mean that I must never treat a person as a means—that is allowed, provided I do not treat the person as a means only. So, for example, I would be morally forbidden from using people as mere means of revenue. I would, however, not be forbidden from having someone ring up my purchases at the grocery store—provided I treated the person as a person and not a mere means. One obvious challenge is sorting out what it is to treat a person as an end as opposed to just a means to an end. Some cases are obvious, such as enslaving another person. Other cases are more complex, such as hiring a person as a worker.

Many economic relationships seem to clearly violate Kant’s imperative in that they treat people as mere means and not at all as ends. To use an obvious example, if an employer treats her employees merely as means to profit and does not treat them as ends in themselves, then she is acting immorally by Kant’s standard. After all, being an employee does not rob a person of personhood.

One obvious reply is to question my starting assumption, namely that economics is not exempt from ethics. It could be argued that the relationship between employer and employee is purely economic and only economic considerations matter. That is, the workers are to be regarded as means to profit and treated in accord with this—even if doing so means treating them as things rather than people. The challenge is to show that the economic realm grants a special exemption to ethics. Of course, if it does this, then the exemption would be a general one. So, for example, people who decided to take money from the rich at gunpoint would be exempt from ethics as well. After all, if everyone is a means in economics, then the rich are just as much a means as employees and if economic coercion against people is acceptable, then so too is coercion via firearms. As always, the challenge the rich face in ethics is justifying their economic misdeeds while simultaneously condemning similar actions by the poor.

Another reply is to contend that might makes right. That is, the employer has the power and owes nothing to the employees beyond what they can force him to provide. This would make economics like the state of nature—where, as Hobbes said, “profit is the measure of right.” Of course, this leads to the same problem as the previous reply: if economics is a matter of might making right, then workers have the same right to use might against employers and the poor to use it against the rich.

One reason sometimes given to expand health care coverage is that people with health insurance are less likely to use the emergency room for treatment. This is because someone with health insurance will be more likely to use primary care and thus less likely to need emergency room treatment. It also makes sense that a person with insurance would get more preventative care and be less likely to need a trip to the emergency room.

On the face of it, reducing emergency room visits would be good. One reason is that emergency room care is expensive and reducing it would save money. Another reason is that the emergency room should be for emergencies—reducing the number of visits can help free up resources and reduce waiting times.

So, extending insurance coverage would reduce emergency room visits and this is good. However, extending insurance might increase emergency room visits. In one seemingly credible study, insurance coverage resulted in more emergency room visits.

One obvious explanation is that the insured would be more likely to use medical services for the same reason that insured motorists are more likely to use the service of mechanics: they are more likely to be able to afford to pay the bills.

On the face of it, this does not seem bad. After all, if people can afford to go to the emergency room because they have insurance, that is better than having people suffer because they lack the means to pay. However, what is most interesting about the study is that the expansion of Medicaid coverage increased emergency room visits for treatments more suitable for a primary care environment. The increase in emergency use was significant—about 40%. The study was large enough that this is statistically significant.

Because of this, it is worth considering the impact of expanding coverage on emergency rooms. Especially if it is argued that expanding coverage would reduce costs by reducing emergency room visits.

One possibility is that the results from the Medicaid study would hold true in general, so that expansions of health care coverage would result in more emergency room visits. If an expansion of coverage results in significant increase in emergency room visits, this would not help reduce health care costs if people go to the more expensive emergency room rather than seeking primary care.

But an insurance expansion might not cause significantly more non-necessary emergency room visits. One reason is that private insurance companies seem to try to deter emergency room visits by imposing higher out-of-pocket costs on patients. In contrast, Medicaid did not impose this higher cost. Thus, those with private insurance would tend to have a financial incentive to avoid the emergency room while those on Medicaid would not, unless an increased cost was imposed on the Medicaid patient. While it does seem wrong to penalize people for going to the emergency room, one method of channeling patients towards non-emergency treatment is to impose a financial penalty for emergency room visits for services that can be provided by primary care facilities. One moral concern with imposing such penalties is that some forms of care are only available through emergency rooms. For example, when I had to get rabies shots in 2023, the only option was the emergency room. But it could be replied that such treatments are unusual, hence the penalty would not affect many people.

People might use emergency rooms instead of primary care because they do not know their options. If so, then people who were better educated about their medical options would be more likely to choose options other than the emergency room when they did not need emergency room services. Given that the emergency room is stressful and involves a long wait (especially for non-emergencies), people would probably elect for primary care when they know they have that option. This is not to say education will be a cure-all, but it is likely to help reduce unnecessary emergency room visits. Which is certainly a worthwhile objective.