Some will remember that driverless cars were going to be the next big thing. Tech companies rushed to pour cash into the technology, and the media covered the stories, including the injuries and deaths involving the technology. For a while, we were promised a future in which our cars would whisk us around, then drive away to await the next trip. Fully autonomous vehicles, it seemed, were always just a few years away. It did seem like a good idea at the time, and proponents of the tech also claimed to be motivated by a desire to save lives. From 2000 to 2015, motor vehicle deaths per year ranged from a high of 43,005 in 2005 to a low of 32,675 in 2014. In 2015 there were 35,092 motor vehicle deaths, and recently the number went back up to around 40,000. Given the high death toll, there is clearly a problem that needs to be solved.

While predictions of the imminent arrival of autonomous vehicles proved overly optimistic, the claim that they would reduce motor vehicle deaths had some plausibility. Autonomous vehicles do not suffer from road rage, exhaustion, intoxication, poor judgment, distraction and other conditions that contribute to the death tolls. Motor vehicle deaths would not be eliminated even if all vehicles were autonomous, but the promised reduction in deaths presented a moral and practical reason to deploy such vehicles. In the face of various challenges and a lack of success, the tech companies seem to have largely moved on from the old toy to the new toy, which is AI. But this might not be a bad thing if driverless cars were aimed at solving the wrong problems and we instead solve the right problems. Discussing this requires going back to a bit of automotive history.

As the number of cars increased in the United States, so did the number of deaths, which was hardly surprising. A contributing factor was the abysmal safety of American cars.  This problem led Ralph Nader to write his classic work, Unsafe at Any Speed. Thanks to Nader and others, the American automobile became much safer and vehicle fatalities decreased. While making cars safer was a good thing, this approach was fundamentally flawed.

Imagine a strange world in which people insist on constantly swinging hammers as they go about their day. As would be expected, the hammer swinging would often result in injuries and property damage. Confronted by these harms, solutions are proposed and implemented. People wear ever better helmets and body armor to protect them from wild swings and hammers that slip from people's grasp. Hammers are also regularly redesigned so that they inflict less damage when hitting people and objects. The Google of that world and other companies start working on autonomous swinging hammers that will be much better than humans at avoiding hitting other people and things. While all these safety improvements would be better than the original situation of unprotected people swinging dangerous hammers around, this approach is fundamentally flawed. After all, if people stopped swinging hammers around, then the problem would be solved.

An easy and obvious reply to my analogy is that using motor vehicles, unlike random hammer swinging, is important. A large part of the American economy is built around the motor vehicle. This includes obvious things like vehicle sales, vehicle maintenance, gasoline sales, road maintenance and so on. It also includes less obvious aspects of the economy that involve the motor vehicle, such as how it contributes to the success of stores like Walmart. The economic value of the motor vehicle, it can be argued, provides a justification for accepting the thousands of deaths per year. While it is certainly desirable to reduce these deaths, getting rid of motor vehicles is not a viable economic option. Thus, autonomous vehicles would be a good partial solution to the death problem. Or are they?

One problem is that driverless vehicles are trying to solve the death problem within a system created around human drivers and their wants. This system of lights, signs, turn lanes, crosswalks and such is extremely complicated and presents difficult engineering and programming problems. It would seem to have made more sense to use the resources that were poured into autonomous vehicles to develop a better and safer transportation system that does not center around a bad idea: the individual motor vehicle operating within a complicated system. On this view, autonomous vehicles are solving an unnecessary problem: they are merely better hammers.

My reasoning can be countered in a couple of ways. One is to repeat the economic argument: autonomous vehicles preserve the individual motor vehicle that is economically critical while being likely to reduce the death toll vehicles impose. A second approach is to argue that the cost of creating a new transportation system would be far more than the cost of developing autonomous vehicles that can operate within the existing system. This assumes, of course, that the cash dumped on this technology will eventually pay off.

A third approach is to argue that autonomous vehicles could be a step towards a new transportation system. People often need a gradual adjustment to major changes and autonomous vehicles would allow a gradual transition from distracted human drivers to autonomous vehicles operating with the distracted humans to a transportation infrastructure rebuilt entirely around autonomous vehicles (perhaps with a completely distinct system for walkers, bikers and runners). Going back to the hammer analogy, the self-swinging hammer would reduce hammer injuries and could allow a transition to be made away from hammer swinging altogether.

While this has some appeal, it still makes more sense to stop swinging hammers. If the goal is to reduce traffic deaths and injuries, then investing in better public transportation, safer streets, and a move away from car-centric cities would have been the rational choice. For the most part it seems that tech companies and investors have moved away from solving the transportation problem and are now focused on AI. While the driverless car was a very narrow type of AI focused on driving vehicles and supposedly aimed at increasing safety and convenience, the new AI is broader (they are trying to jam it into almost everything that has a chip) and is supposed to be aimed at solving a vast range of problems. Given the apparent failure of driverless cars, we should consider that there will be a similar outcome with this broader AI. It is also reasonable to expect that once the current AI bubble bursts, the next bubble will float over the horizon. This is not to deny that some of what people call AI is useful, but we need to keep in mind that the tech companies often seem to focus on solving unnecessary problems rather than removing those problems.

 

The heat and humidity of my adopted state of Florida are not just uncomfortable but dangerous. From 2010 to 2020 Florida had 215 reported heat-related deaths, and such deaths increased 95% from 2010 to 2022. This is what would be expected, given that climate change has led to ever warmer summer temperatures in Florida. In my own experience, running or doing outside work in the summer is brutal. As such, it makes sense that Miami-Dade County recently proposed requiring that construction and farm workers get 10-minute breaks in the shade for every two hours worked outside. In response, the Republican-controlled Florida legislature and Governor DeSantis rushed into action, passing and signing HB433. This law makes all local heat protection measures “void and prohibited.” Instead, the state standards would apply, although none exist. Florida is, of course, subject to federal OSHA requirements (state and local workers are excluded), and these require employers to keep workplaces free of recognized hazards that cause or are likely to cause death or serious harm, which includes heat dangers.

This provides another good example of the inconsistency in the professed principles of the Republican party. After all, Republicans usually stress how they support states’ rights against the federal government and local rights against state government. However, Republicans do not seem to believe in this principle. Rather, their position on bigger versus smaller government seems to depend entirely on which level of government is doing what they want. For example, since the Democrats could (but will not) pass a federal abortion law, Republicans profess the principle that the states should decide on this issue.

But the party wants to put a federal abortion ban in place when Trump is re-elected. When that happens, they will employ their stock argument for when they want the bigger government entity to decide, which is to contend that allowing local control will create a patchwork of laws and regulations and that it is better to have uniform laws. As HB433 and other examples show, they only apply this principle when the uniform laws are the laws they want. When the uniform law is one they do not like, they profess a love of local governance. That is, their principle is that they want the law to be what they like and not what they do not like. Being honest about this might look bad, hence they present the illusion of having a principle in their arguments and rhetoric. But I often wonder if they even need to do this. The fact that they take time to profess a principle they clearly do not follow suggests that they think they need to do so. This might be because they think it will fool those who care about the principle but somehow do not notice that they do not follow it. Alternatively, it might be aimed at allowing rationalization. For example, a Republican voter can tell themselves that this law is good because it makes the law uniform and avoids a patchwork. Then, when the same voter hears Trump say that abortion should be decided by the states because local governments should decide, they can tell themselves that this is good and true. It seems simpler to just be honest, but there is probably some reason why Republicans persist in professing principles they clearly do not believe.

The defense of the bill also provides another good example of how Republicans argue against regulations aimed at protecting people from harm inflicted by businesses. Republican Rep. Tiffany Esposito ably presented the stock jobs argument of the Republican party: “This is very much a people-centric bill. If we want to talk about Floridians thriving, they do that by having good job opportunities. And if you want to talk about health and wellness, and you want to talk about how we can make sure that all Floridians are healthy, you do that by making sure that they have a good job. And in order to provide good jobs, we need to not put businesses out of business.” The structure of the jobs argument is this:

 

Premise 1. Something is proposed to protect the health and wellness of consumers or workers from harm caused by a business.

Premise 2. It is claimed that health and wellness come from having a good job.

Premise 3. Business must be in business to provide good jobs.

Premise 4. This something would put business out of business.

Conclusion: This something must be prevented.

 

On the face of it, the reasoning has a certain appeal in that if it were true that something intended to protect health and well-being would have the opposite effect, then it should not be done. But are these claims true? The second premise can be seen as true because health insurance is linked to employment and because you generally need a job to get food, shelter, and other survival essentials. Presumably a good job would provide benefits and adequate pay. The third premise is true. The fourth premise is the most critical. Republicans almost always claim that regulating business would put business out of business, despite the fact that businesses have been both regulated and profitable since the start of the United States. This is not to deny that there can be bad regulations, but simply saying that something would put businesses out of business is not enough to prove this is true. For example, the 10-minute break rule would not put a business out of business. In fact, allowing such breaks would be more likely to increase the productivity of workers, since it would give them time to recover somewhat from the heat.

But it might be objected that some local governments might put requirements into effect that would put business out of business and hence this law is needed to prevent that from happening. My first reply is to point out that another professed Republican principle is that they are for small government, and this would mean not expanding government by creating more laws unless there is a clear need. But the proposals seem quite reasonable and unlikely to destroy businesses. Now if some county went rogue and started a war on capitalism, then perhaps such a law would be needed. My second reply is to note that Florida essentially did nothing about the increasing danger presented by heat and is only complying with the OSHA requirements that amount to businesses mostly not being allowed to kill or harm workers. That is, Florida is doing the least it can possibly do to address the increasing danger presented by heat and ensuring that no one in the state can do more. While this is presented as pro-business and “not having more heat protection is good for the workers, actually” it also seems to be an act of cruelty, which is consistent with what seems to be a true principle of the Republican party, namely cruelty for the sake of cruelty.

 

As a philosopher, my interest in AI tends to focus on metaphysics (philosophy of mind), epistemology (the problem of other minds) and ethics rather than on economics. My academic interest goes back to my participation as an undergraduate in a faculty-student debate on AI back in the 1980s, although my interest in science fiction versions arose much earlier. While “intelligence” is difficult to define, the debate focused on whether a machine could be built with a mental capacity analogous to that of a human. We also had some discussion about how AI could be used or misused, and science fiction had already explored the idea of thinking machines taking human jobs. While AI research and philosophical discussion never went away, it was not until recently that AI made headlines, mainly because it was being aggressively pushed as the next big thing after driverless cars fizzled out of the news.

While AI technology has improved dramatically since the 1980s, we do not have the sort of AI we debated about, namely that on par with (or greater than) a human. As Dr. Emily Bender pointed out, the current text generators are stochastic parrots. While AI has been hyped and made into a thing of terror, it is not really that good at doing its one job. One obvious problem is hallucination, which is a fancy way of saying that the probabilistically generated text fails to match the actual world. A while ago, I tested this out by asking ChatGPT for my biography. While I am not famous, my information is readily available on the internet and a human could put together an accurate biography in a few minutes using Google. ChatGPT hallucinated a version of me that I would love to meet; that guy is amazing. Much more seriously, AI can do things like make up legal cases when lawyers foolishly rely on it to do their work.

Since I am a professor, you can certainly guess that my main encounters with AI are in the form of students turning in AI generated papers. When ChatGPT was first freely available, I saw my first AI generated papers in my Ethics class, and most were papers on the ethics of cheating. Ironically, even before AI, that topic was always the one with the most plagiarized papers. As I told my students, I did not fail a paper just because it was AI generated; the papers failed themselves by being bad. To be fair to the AI systems, some of this can be attributed to the difficulty of writing good prompts for the AI to use. However, even with some effort at crafting prompts, the limits of the current AI are readily apparent. I, of course, have heard of AI written works passing exams, getting B grades and so on. But what shows up in my classes is easily detected and fails itself. But to be fair once more, perhaps there are exceptional AI papers that are getting past me. However, my experience has been that AI is bad at writing and it has so far proven easy to address efforts to cheat using it. Since this sort of AI was intended to write, this seems to show the strict limits under which it can perform adequately.

AI was also supposed to revolutionize search, with Microsoft and Google incorporating it into their web searches. To see how well this is working for us, you need only try it yourself. Then again, it does seem to be working for Google in that the old Google would give you better results, and the new Google is bad in a way that will lead you to view more ads as you try to find what you are looking for. But that hardly shows that AI is effective in the context of search.

Microsoft has been a major spender on AI, and it recently rolled out Copilot into Windows and its apps, such as Edge and Word. The tech press has been generally positive about Copilot, and it does seem to have some uses. However, there is the question of whether it is, in fact, something that will be useful and (more importantly) profitable. Out of curiosity, I tried it but failed to find it compelling or useful. But your results might vary.

But there might be useful features, especially since “AI” is defined so broadly that almost any automation seems to count as AI. Which leads to a concern that is both practical and philosophical: what is AI? Back in that 1980s debate we were discussing what would probably now be called artificial general intelligence, as opposed to what used to be called “expert systems.” Somewhat cynically, “AI” seems to have almost lost meaning and, at the very least, you should wonder what sort of AI (if any) is being referred to when someone talks about AI. This, I think, will help contribute to the possibility of an AI bubble as so many companies try to jam “AI” into as much as possible without much consideration. Which leads to the issue of whether AI is a bubble that will burst.

I, of course, am not an expert on AI economics. However, Ed Zitron presents a solid analysis and argues that there is an AI bubble that is likely to burst. AI seems to be running into the same problem faced by Twitter, Uber and other tech companies, namely that it burns cash and does not show a profit. On the positive side, it does enrich a few people. While Twitter shows that a tech company can hemorrhage money and keep crawling along, it is reasonable to think that there is a limit to how long AI can run at a huge loss before those funding it decide that it is time to stop. The fate of driverless cars provides a good example, especially since driverless cars are a limited form of AI that was supposed to specialize in driving cars.

An obvious objection is to contend that as AI is improved and the costs of using it are addressed, it will bring about the promised AI revolution and the investments will be handsomely rewarded. That is, the bubble will be avoided and instead a solid structure will have been constructed. This just requires finding ways to run the hardware much more economically and breakthroughs in the AI technology itself.  

One obvious reply is that AI is running out of training data (although we humans keep making more every day) and it is reasonable to question whether enough improvement is likely. That is, AI might have hit a plateau and will not get meaningfully better until there is some breakthrough. Another obvious reply is that there is unlikely to be a radical breakthrough in power generation to enable a significant reduction in the cost of AI. That said, it could be argued that long term investments in solar, wind and nuclear power could lower the cost of running the hardware.

One final concern is that despite all the hype and despite some notable exceptions, AI is just not the sort of thing that most people need or want. That is, it is not a killer product like a smartphone or refrigerator. This is not to deny that AI (or expert) systems have some valuable uses, but the hype of AI is just that, and the bubble will burst soon.

 

Relative to Trump, Biden has a reality problem. Biden’s supporters generally have a realistic view of him, seeing Joe as a well-meaning, decent old man who is probably not up to enduring another four years as President. In contrast, the Trump existing in the minds of his base barely resembles the real Trump, except (ironically) in terms of his worst traits and deeds. Biden also does not have the propaganda machinery of Fox News and its more extreme fellows, and his supporters include people who listen to NPR and check facts. As such, a propaganda campaign of disinformation is not an option for poor Joe.

While I am wary of conspiracy theories, if we look at Hillary Clinton’s 2016 run and what Biden is doing now, it would not be unreasonable to think that the ruling elites of the Democratic party are intentionally throwing elections. One could also infer that the party is suffering from an ego problem in that some candidates are unwilling or unable to honestly assess their chances. In any case, the Democrats continue to disappoint, the Republicans seem intent on turning America into a white Christian nationalist authoritarian oligarchy griftocracy, and no third party is up to the task of challenging them. Given my values, which I am happy to debate, Biden is still by far the better choice. While I do think that even a fully senile Biden would be better than Trump, my main reason for supporting Biden is, well, everything else that goes with the presidency. While Biden and the Democrats do ably serve the ruling elites, they also endeavor to make things less bad for everyone else and value competence to some degree. Trump, if he follows the Project 2025 plan, will be creating that white Christian nationalist authoritarian oligarchy griftocracy. This will be bad for everyone, including white Christians who are not economic elites with the resources to endure the harm this project will inflict. So how can Biden win?

Interestingly, the Supreme Court just gave Biden the tool he needs to easily win, if he were only the sort of person Trump and Fox News claim he is. As Justice Sonia Sotomayor noted, the ruling on presidential immunity would have the following effect: “Orders the Navy’s SEAL Team 6 to assassinate a political rival? Immune,” she wrote. “Organizes a military coup to hold onto power? Immune. Takes a bribe in exchange for a pardon? Immune. Immune, immune, immune.” While I am not a constitutional scholar, based on the text of the ruling and dissent, Joe could take a wide range of official actions to neutralize Trump and perhaps much of MAGA and ensure he remains in office. Ironically, Trump and his MAGA Supreme Court judges know this is a safe move: unlike Trump, Biden will not do any of these things, even to preserve the United States from the destruction that Trump will bring. But he could and there are presumably those who would argue that he should, for example, send the Joe Commandoes to neutralize Trump and, while they are at it, other key MAGA figures, such as six supreme court justices. But, once again, they know that while Trump will run wild with this ruling, Biden will not—which is yet another reason why Biden should be president rather than Trump. But are there ways for Biden to beat Trump? One option is to use a third-party candidate to pull votes from Trump.

While third-party candidates have proven useful in winning elections, there are moral questions about intentionally using this tactic. One concern is the matter of deceit. Suppose that shadowy Democratic party operatives were to support, for example, RFK in ways that would draw votes away from Trump. This raises stock moral concerns about deception and manipulation. Because of my ethics, I could not endorse this tactic. Fortunately, I can openly encourage people who would otherwise vote for Trump to vote for RFK and do so in an ethical manner by being completely honest. I also openly encourage those Democratic operatives to use this tactic.

Perhaps the only time the MAGA base openly disagreed with Trump and booed him was when he admitted to getting a COVID-19 booster. This indicates that for at least some of the base, their anti-vax ideology is stronger than their MAGA commitment. This presents an opportunity to peel some voters away from Trump.

Trump was initially baffled by the anti-vax sentiments, as were some other Republicans (such as Ron DeSantis), and while they have been happy to change their rhetoric to appeal to these voters, they are not true believers. After all, they all got vaxxed because they knew it would protect them from a dangerous disease. More importantly, one significant achievement of the Trump administration was Operation Warp Speed, which resulted in effective vaccines being developed at, well, warp speed. While I generally loathe Trump, he and his administration deserve praise for this, as despite their other failures, these vaccines saved lives and prevented serious illnesses. So, thank you President Trump for those vaccines. Ironically, this accomplishment can be weaponized against him.

Two of Trump’s many weaknesses are that he loves praise and loves to take credit; as such, the success of Operation Warp Speed is something he would very much love to claim. But he also realizes that this objectively good success is seen very differently by his anti-vax base. As such, he has largely stopped talking about it. This, of course, is a situation that can be exploited in a way that allows complete honesty.

Biden and Democrats should praise Trump for the success of Warp Speed and emphasize how he and other Republicans served as role models by taking the COVID-19 vaccines. Unedited, honest clips of him praising the project and recommending the vaccine should be used. But how will this help peel off votes? Fortunately, or unfortunately, RFK is a solid anti-vax candidate who appeals to his fellow conspiracy theorists. That he has admitted to having a worm in his brain presumably only boosts his potential appeal to some elements of the MAGA base. While this is morally dubious at best, Democrats could assist RFK by promoting his anti-vax credentials and contrasting them with Trump’s. To avoid being evil, they would need to steer clear of promoting anti-vax disinformation. This is certainly a viable option since the goal is to get existing anti-vaxxers who would otherwise vote for Trump (but never Biden) to switch to RFK and not to create more anti-vaxxers.  But at this point I think people are probably set in their views on vaccines. There is, of course, a risk of pulling liberal anti-vaxxers away from Biden to RFK and this should be considered before this tactic is used. Fortunately for the Democrats, it is the Republicans who have largely embraced an anti-vax approach within their broader commitment to disinformation and misinformation. As such, this tactic would hurt Trump more than Biden.

It might be wondered whether the effort would be worth it, since this tactic is unlikely to peel off many MAGA voters. However, while Clinton and Biden trounced Trump by millions of votes, the electoral college is such that pulling a few votes away from Trump in key locations could make a difference. Assuming, of course, that votes will even matter in MAGA controlled zones.

 

In June 2024, Oklahoma’s state superintendent mandated that public schools teach the Bible. In a familiar move, the justification is that the Bible is “a necessary historical document to teach our kids about the history of this country, to have a complete understanding of Western civilization, to have an understanding of the basis of our legal system.”

To be fair and balanced, the Bible is an important historical document, and I would go so far as to say that knowing about it (and other major religious texts) is essential to understanding world history. It is also important in my field, philosophy. While I teach at the college level, the same reasoning applies since I teach at a state school.

When I teach Ethics, Metaphysics, Modern and Introduction to Philosophy, I include Biblical content. For example, discussing the Medieval dispute over metaphysical universals requires discussing such topics as original sin and the Trinity. My colleagues also include the Bible in appropriate classes, the most obvious examples being classes on the Old and New Testaments. While K-12 education tends to be weak in the areas of philosophy and religion, the Bible should be covered in appropriate classes—along with other important religious texts. As such, I obviously have no objection against covering the Bible and other religious texts as historic, religious and philosophical documents in the context of academic study. Likewise, I have no objection against including historically important works of atheists, anarchists, and Marxists. These are all important to history and philosophy and should be included.

Naturally, there is always the practical challenge of determining what content to include in courses, and we educators can only include a microscopic sliver of all the important works. Ideally, we should make this decision in a principled manner and not based on our own ideology or non-academic agenda. As an honest educator, I must admit that we do not magically uplink to the Platonic forms of education when picking our content, and our values, biases and experience influence us despite our efforts to build an ideal curriculum. As I somewhat jokingly tell my students when they ask why I included certain philosophers, we usually teach what we were taught, and this probably goes back to some trivial reason for inclusion. For example, my Modern class is mostly made up of the philosophers that were in the Modern class I took. I did add Mary Wollstonecraft to the class because I had the notes from my Ethics class, and I added her to that class at the suggestion of my ex-wife. But, as noted above, the Bible seems to be an objectively important work. But so does the Communist Manifesto.

There are also concerns about how content should be taught, which is usually framed as a conflict between teaching and preaching (indoctrination). While the right regularly accuses educators of indoctrination, this is not what we do as professionals. And, as professors joke, if we can’t even get the students to read the syllabus or look up from their phones, we are not indoctrinating them to be Islamic Transgender Homosexual Feminist Woke Atheist Socialist Post Modern Tik Tok Marxist Fascist Migrants. As the meme goes, every accusation of the right is a confession. This mandate and numerous laws being passed governing education are clearly aimed at mandating the teaching of a set of values. That is, they are aimed at indoctrination. The right, if one reviews the laws and mandates, is not opposed to indoctrination. What they are opposed to is a lack of indoctrination in their values.

A supporter of this mandate might raise the obvious objection: the mandate does not state that biblical content will be taught as religion but that it will be taught as an historical document. As such, the mandate is not a problem. While this does have some appeal, there are some problems with it. First, schools can already include the Bible as an historical document, hence there is no need for such a mandate. Second, the mandate is just about the Bible, which is clearly favoring the text of one religion over all others (and non-religious texts). Third, this reply is likely to be a bad-faith reply, since Mr. Walters’ professed views are quite clear.

While it is obvious why non-religious people and people of faiths other than Christianity would be concerned about these sorts of mandates (and laws), Christians should also be concerned. There are, of course, all the historical arguments made by the Founders for separating church and state. After all, they understood the dangers arising from combining secular power with theological power. They also understood the history of Europe, including the bloody conflicts between sects of Christianity. But there is also a very pragmatic concern. While Christianity is monotheistic, it is not monolithic, and sects have been splitting off from it since the beginning. As such, when an official mandates that the Bible be taught, the question arises as to which version of the Bible (will it be yours?) and which interpretation of that Bible will be taught. So, while a person might applaud the mandate, they should not assume that what will be taught will match their version of the Bible or their interpretation. To be fair, a supporter might reasonably believe that this mandate is code for culture war values they probably agree with (such as anti-LGBT views, capitalism, white supremacy, 1800s-era gender roles, and misogyny), and they are probably right. But Christians should be concerned that the version of the mandated Bible and its interpretation will conflict with their own faith. For example, Seventh Day Adventists and Catholics presumably do not want the faith of the other sect imposed upon their children in school. But some might see this as better than a lack of biblical lessons.

Of course, if someone wants their children to learn about the Bible, most churches offer Sunday school classes and, of course, they have regular sermons that people can attend. As such, it would be absurd to argue that there is some critical lack of biblical education that the state has a compelling reason to address with a mandate.

 

The Supreme Court ruled 6 to 3 that state officials can accept gratuities as rewards for their official actions. To be fair, there are disparities in punishments that should be addressed. A federal official can receive up to fifteen years for accepting a bribe, but the punishments cap at two years for accepting gratuities. The statute the ruling is about applies only to state officials and imposes a ten-year sentence. While inconsistency in punishment is a hallmark of the American legal system, from a moral standpoint sentences should be consistent (and fair). As such, it would be reasonable to make the punishments for accepting gratuities for federal and state officials the same. Unless, of course, there is a relevant difference that would warrant such a disparity. However, the court ruling was not about addressing this inconsistency. Instead, critics claim that the ruling has legalized corruption. To be fair, the ruling seems to have the intent of allowing state and local governments to define what is acceptable as a gratuity. That is, the intent seems to be to allow the people who will receive gratuities to decide what they are allowed to receive.

The ruling rests on a philosophical discussion of the difference between bribery (always corrupt) and gratuities (sometimes corrupt). Justice Brett Kavanaugh wrote that “bribes are payments made or agreed to before an official act in order to influence the official with respect to that future official act.” Gratuities “are typically payments made to an official after an official act as a token of appreciation.” Taking the terms strictly, Kavanaugh seems to be right: a bribe is offered to influence an action, a gratuity is given to reward an action. For example, one might bribe the maître d’ to get a table and then give the server a gratuity to reward them for good service. Naturally, the payment of a bribe can take place after the action is completed, since the agreement can be made with the payment promised in the future. This means some payments claimed to be gratuities could in fact be bribes, though proving this would require showing that an agreement was made that influenced the future action.

One interesting consideration is the likely scenario of iterated gratuities, in which an official accepts gratuities for their actions and thus sends a clear signal that they will, in the future, act in ways that will be rewarded by gratuities. Imagine that an official who sees to it that a business gets a lucrative contract to provide school lunches gets a $12,000 gratuity from the business to express their appreciation after the fact. The official now knows they will be rewarded for helping the business. Imagine they then see to it that the business gets a contract providing prison food and are rewarded with a gratuity. Now the business knows the official is amenable to being influenced by gratuities. Strictly speaking, there would be no bribery; it would be analogous to how we train dogs by rewarding them for doing what we want. But it would create a situation functionally indistinguishable from bribery: it would be silent bribery. Everyone would know how the system worked, and no one would need to say anything. But it might be objected that gratuities can just be rewards and not corruption.

Kavanaugh makes this argument by using what he takes to be innocuous examples. He asks, “could students take their college professor out to Chipotle for an end-of-term celebration? And if so, would it somehow become criminal to take the professor for a steak dinner? Or to treat her to a Hoosiers game?” While he did concede that some gratuities could be “problematic,” he provides obviously innocuous examples, such as tipping a mail carrier, a thank-you gift basket given to a teacher at the end of the school year, or a college dean giving a sweatshirt to a city council member who speaks at a school event. He argues that these examples suggest that “gratuities after the official act are not the same as bribes before the official act,” adding that unlike gratuities, “bribes can corrupt the official act — meaning that the official takes the act for private gain, not for the public good.” Let us consider both these examples and the general argument.

It is interesting that the justice picked the example of a mail carrier, since the USPS has a strict policy about gifts to postal employees. They can receive a gift, but it must be $20 or less per occasion and no more than $50 in a single year. This is obviously much stricter than the rules governing the Supreme Court, which are effectively none. That there is such a limit on postal employees does suggest that there are concerns about allowing large gratuities. And, of course, there is the practical fact that a mail carrier is rather limited in the sorts of corruption they can engage in through their official role.

Assuming the local laws allow it, the sweatshirt gift seems morally fine—it is unlikely that an official would engage in corrupt deeds for the sake of a sweatshirt. Also, giving out cheap college merchandise to speakers or people at events is a normal, non-corrupt practice.

The gift basket is somewhat more problematic, depending on how strict the school policies and local laws are. On the one hand, there is almost certainly no intention of corruption. On the other hand, accepting such gifts from the public does signal that one is willing to accept gifts and could open the door to corruption. But I teach at the college level, so I am not that familiar with the rules and ethics at the K-12 level. Which takes us to his professor examples.

So, would it be criminal for students to take their professor out to dinner for an end-of-term celebration or give them tickets to an event? I infer that the professor in question is a government employee, so the answer would partially depend on the local laws. Distinct from the laws, there are also the matters of university policies (violating these could get the professor fired for cause) and ethical concerns.

Ethically, a professor should not accept dinner or tickets from students, even at the end of the semester. This can create the impression of impropriety, and other students might hear of it and think that the professor either expects such gifts or will reward students who give them. It is especially problematic if the students will take future classes with the professor, since such gifts could influence the professor’s behavior in those classes. Of course, my moral view is that a professor should not profit from their position (beyond their salary and appropriate compensation), even in small ways. The Supreme Court and public officials, who have far more power than us professors, should also follow this moral practice lest they fall into corruption. Well, more corruption.

In terms of policy, schools vary in their guidance. Based on what colleagues around the nation have said, some schools have no clear guidance about small gifts and other schools have strict and precise guidelines. Anecdotally, most schools would frown on students taking professors out to dinner or gifting them tickets. Smaller gifts, like a $16 reusable bag, might be allowed—to use a random example.

 My university has a clear policy about gifts, and we are all required to complete ethics training about gifts every year. The short version is that as a faculty member I must not solicit or accept gifts with the understanding that the gift was given to influence or gain a favorable action or decision from me in my official capacity. Given that I understand that a gratuity given today for past behavior can be aimed at influencing future behavior, I take this as forbidding me from accepting any gifts that might have this nature. For example, I cannot accept any gifts from students since they might be in a future class and the gift might be intended to influence my future behavior.

We are also required to disclose outside employment, foreign influence and so on. It is interesting to compare the strict limits I operate under as a professor at a state school to the lack of limits enjoyed by the Supreme Court. But I suppose they are just trying to share the wealth by expanding opportunities for officials to profit from their positions through gratuities. Now to the general argument.

Kavanaugh claims that “gratuities after the official act are not the same as bribes before the official act” and that unlike gratuities, “bribes can corrupt the official act — meaning that the official takes the act for private gain, not for the public good.” So, his argument is that bribes can corrupt since they occur before the act, but gratuities cannot since they take place after the act. Hence, a gratuity cannot corrupt.

In an idealized situation, Kavanaugh’s reasoning would hold. If an official acted with no knowledge or beliefs about how those they benefited would respond and were thus surprised and amazed when those they benefited gave them a gift for acting to their advantage, then there would be no corruption. The official could not have been influenced by a gift they had no idea they might receive.

In reality, officials would be aware that a reward would be forthcoming if they act in certain ways, especially if they (as Clarence Thomas is alleged to have done) regularly receive gratuities as they act in ways that benefit those giving them the gifts. To think otherwise would be to ignore the plain facts or to infer that officials have no conception of the actual world. While it might be hard to prove that a one-time gratuity is a payoff for desired behavior, iterated gratuities would clearly be bribes. I do not need to tell a dog that he will get a treat if he does what I want; giving him treats when he does what I want takes care of bribing him. And the Supreme Court has legalized giving treats.

In terms of why this is bad, one obvious reason is that it makes America even more of an oligarchy: people with money seem even more free to simply buy the results they want. Unless you are one of these people, the officials will most likely not be acting in your interest. The second problem is that this could lead to another standard outcome of corruption: it will be more likely that you will have to give gifts to officials to get things done; that is what happens when gifts to officials are legal. Going back to the professor example, if I could and did accept dinners and tickets from students and they saw that students did well in my classes, students would keep giving me dinners and tickets. After all, even though I said nothing, they would know that I expected such gifts and thus my classes would become corrupted. Which is why I, unlike certain Justices, do not accept gifts.

Since the colonial days, America has been a land of stark economic inequality with a relatively stable class structure. The institution of slavery and its enduring effects are the most striking examples. While the economic benefits of slavery were concentrated (as some like to point out, not all whites owned slaves), these benefits generated wealth that has been inherited and built upon. In contrast, the poverty of the victims of slavery was also inherited, providing little or nothing for people to build upon. As such, to grasp one aspect of white privilege (or white advantage) all a person needs is the most basic knowledge of American history and a minimal grasp of how inheritance works. While exceptions should be considered when thinking about generalizations, one needs to be on guard against the fallacies of hasty generalization (drawing a general conclusion based on a sample that is too small) and anecdotal evidence (rejecting statistical data based on an anecdote that is an exception). So, while examples like Obama and Oprah are relevant to discussing race in America, they are but two examples among millions. White poverty has been and is real, but this does not disprove the generality of white advantage. After all, the claim that white privilege or advantage exists is not the claim that every white person is doing well and that everyone else is doing terribly. Rather, it is a claim based on statistical analysis of the entire population.

While part of the American myth is that hard working Americans made themselves into successes by their own hard work, the reality is that there are many notable cases of public resources being used to benefit certain broad segments of the population. These segments have consistently consisted of white Americans while largely excluding others. These have also served to build white advantage. Some examples are as follows.

In 1830 the Indian Removal Act resulted in native people being forced from their lands, clearing the way for the 1862 Homestead Act, which overwhelmingly benefited white settlers.

In 1934 the Federal Housing Administration was created to address the housing shortage in America. It was also intended to segregate housing. It succeeded in both goals, providing many white Americans with the opportunity to own houses while pushing blacks and other minorities into urban housing projects. Home ownership was also subsidized with public money through the mortgage interest deduction. And so home ownership became the engine of American inequality.

 While Social Security is considered a general benefit today, when the Social Security Act of 1935 was passed, it intentionally excluded agricultural and domestic workers, who were mostly Black, Hispanic and Asian. The Wagner Act was also passed in 1935 and it gave unions the ability to engage in collective bargaining and set out consequences for unfair work practices. While unionization helped improve the situations of white workers, non-whites were largely excluded from these benefits. Fortunately, unions have become more diverse and “white union members have lower racial resentment and greater support for policies that benefit African Americans.” These are no doubt additional reasons for the right to try to destroy unions.

After WWII, the G.I. Education Bill, Veterans Administration Housing Authority, and Health Care System provided members of the military and veterans with public support for higher education, housing, and health care. While not the only factor, this public support is seen as the foundation upon which the prosperous American middle class was built. Wealth was redistributed to good effect for those who received it. While not all veterans were white (the United States operated segregated Asian and Black units during the war), most benefits were limited to white veterans, and a million black WWII veterans were largely denied these benefits. The federal government and states also invested heavily in public higher education, and for a while college was relatively affordable.

When the above examples are brought up in discussions of white privilege, some people counter with three true claims. The first is that these lie in the past, the second is that things have changed, and the third is that many white people are not doing well.

While these do lie in the past and things have changed, there is still the fact that the effects of the harms and benefits linger. As noted above in talking about slavery, to understand that white advantage is real one just needs a basic knowledge of American history and a minimal grasp of how inheritance works. For example, grandparents who went to college and got a house from the GI Bill were generally able to pass on that wealth to their children, who then passed on benefits to their children. In contrast, the black veterans who got nothing from the bill had exactly that, nothing, to pass on to their families. There will, of course, be stories of white veterans who ended up with nothing to pass on and examples of black veterans who did very well and were able to pass on wealth; but these are the exceptions. But it is true that many white people are not doing well. White Americans are obviously not exempt from the economic woes of today, including low wages, grotesque income inequality, lack of affordable health care, food insecurity, high housing costs and so on.

The average white American can look at these past benefits and point out, correctly, that they are not getting the same benefits. As examples, college and housing are extremely expensive and the state is doing little, if anything, to help the average white person. That is, the entitlements of the past are gone or shifted to benefiting the wealthiest, including politicians. While there are many reasons for this, one is racism, and this can be illustrated by the parable of the pools.

Heather McGhee’s The Sum of Us: What Racism Costs Everyone and How We Can Prosper Together includes the history of the closing of public pools in America because of a racist response to integration. The paradigm pool is the 1919 Fairground Park pool in St. Louis, Missouri, which was believed to be large enough for 10,000 swimmers. In 1949 the city integrated the pool, resulting in the Fairground Park Riot in which whites attacked every black person they saw in the area. The pool was segregated again, then desegregated by a NAACP lawsuit. Visits to the pool declined dramatically and the city closed and drained the pool. While this was one example, the closing of public pools to avoid integration led to a case that reached the Supreme Court. The court ruled, in the 1971 Palmer v. Thompson, that closing a pool rather than integrating it was constitutional. Roughly put, closing a pool hurt everyone and hence was not based on racism. While swimming remained (and remains) popular, public pools largely declined in favor of backyard pools and segregated swim clubs. This ended up hurting everyone and set the stage for the harm that followed.

That many white people would accept losing a benefit rather than allowing it to be shared, and that denying a benefit to everyone was constitutional, did not go unnoticed. In the years to follow, public benefits were subject to cuts for everyone, and the propaganda campaigns against them typically included racist elements. Under Ronald Reagan, the United States saw racism employed to get white Americans to accept cuts in entitlements, social programs, and other public expenditures that once benefited Americans broadly. Bill Clinton kept Reaganomics going, and with few exceptions American economic and political policy (and law) has been focused on ensuring that wealth is consistently distributed from the lower classes to the wealthiest classes. When there are attempts to change this siphoning of wealth, these are countered with the usual arguments and rhetoric, including appeals to racism.

Ironically, racism was and is used to get white Americans to agree to policies and laws that hurt everyone (but the rich) including themselves. This is still consistent with white privilege, since whites still enjoy other privileges and the benefits accrued by past generations have not been completely eroded. But the young white people who are trying to pay rent, go to college, or even meet basic expenses are realizing that the system is harming them, and this can lead to the bizarre situation where some people argue that there is no racism or white privilege because white people are suffering. But part of their suffering is due to racism and its role in destroying so many public goods. The money is, of course, still there—it just gets funneled upwards and helps explain why we have so many millionaires and billionaires today.

As the wealth acquired by whites in the past is eroded by the need to use those resources, I wonder what impact this will have. While the right has been exploiting white economic worries, they are also committed to racism. As such, even if they wanted to restore public benefits (which they do not) they would have to give up their racism. The establishment Democrats are also largely committed to the status quo, although they are more willing to allow public funds to benefit the public. It will be interesting, and probably terrifying, to see the outcome of this, although history does offer some suggestions.

In what seems to be a victory for Christian Nationalists, the Ten Commandments must now be displayed in Louisiana public classrooms. This law will be challenged, but its proponents are hoping that the Supreme Court will rule in its favor. Given the ideology and religious views of the majority of the court, this victory is all but assured.  

The 2022 Kennedy v. Bremerton School District ruling provides guidance here as the court ruled in favor of a high school football coach who was fired for praying on the field. The court decided that the prayer was private speech and hence protected. Meanwhile, Republicans in Florida are arguing that “in the classroom, the professor’s speech is the government’s speech…” when it is speech they do not like.  It would be interesting to see what they would say about professors praying in classrooms; I suspect that if it was a suitable Christian prayer, it would be considered private speech.

While I am not a legal scholar, there does seem to be an obvious difference between a coach engaging in a private prayer on the field and a state mandating that the Ten Commandments be displayed in all classrooms. If, for example, a teacher or professor wanted to carry a copy of the Ten Commandments to draw inspiration from before teaching or during committee meetings, that would obviously not present any issues. I, in fact, have a copy of the Ten Commandments in my Ethics class notes since I do a section on religion and ethics. In this context I am using the Ten Commandments as an example of religious ethics rather than proselytizing a specific faith in the classroom, since we are not in the indoctrination business. Coincidentally, this is a workaround that proponents of the law have also attempted to use.

As the separation of church and state is well-established, proponents of the law need a narrative that will allow the Ten Commandments to be displayed while they can insist this is not the state promoting a religion. One approach is built on the same justification I use to cover the Ten Commandments in my class: the Ten Commandments are an important part of legal (and moral) history and hence should be included in the relevant lessons. I certainly would not think of teaching a basic ethics class without including them in a section on religious rules-based ethics. Likewise, my colleagues in religion and history would not think to exclude them from the relevant classes. But there are two obvious differences.

One is that academic coverage of the Ten Commandments does not require a state mandate that they be displayed in all classrooms. Providing them to the students in the text, PowerPoints or notes suffices. The second is that my colleagues and I are not, as I noted earlier, in the business of indoctrinating students. In fact, students routinely ask us what we think, since we are careful not to preach our own views. When discussing paper topics, I stress that they should argue for their position and not try to argue for what they think I might think. When grading, I take care to separate my view of their position from a fair assessment of the quality of their work. As I tell my students, people have gotten an A on papers arguing for positions I strongly disagree with, and others have done badly by arguing badly for positions I agree with. I never tell them what these positions are, sticking to generalities.

The clever counter to this is that the law has an amendment that permits the display of historical documents such as the Mayflower Compact, the Declaration of Independence and the Northwest Ordinance. Presumably the intent is to persuade people that the Ten Commandments is just being displayed as an historically important document and hence all the concerns about the separation of church and state are unfounded. But the obvious problem is that only the display of the Ten Commandments is mandated by law (and a specific version, at that). But even if the law required other documents to be displayed, it would still be reasonable to consider why the Ten Commandments and these other documents were being mandated for display. They did not, for example, mandate that specific content from mathematics, science, or English literature be displayed in classrooms, even content that is foundational. If they were really concerned that classrooms display important documents, they would presumably have included such content in the law. But maybe that will be the next move to conceal their intentions.

Interestingly, this move does send an unintended message about the Ten Commandments. If we take seriously the argument that they are being displayed just because they are historically important and not for religious reasons, then the message to students is that they are just historically important, on par with the Mayflower Compact, the Declaration of Independence and the Northwest Ordinance. They are perhaps not the word of God given to Moses. As such, they should be subject to the same academic assessment as any other historical document and the same criticism as any other legal work created by flawed humans for human purposes. The schools should also display other historically important documents, such as select quotes from Marxists, Muslims, Buddhists, socialists, atheists, anarchists, Satanists and others. After all, if it really is about displaying important documents, there are many that deserve a place alongside the Ten Commandments. But it is obvious what the intent of the law is, and it has nothing to do with presenting students with historically important documents.

 

While philosophers and religious thinkers have taken past lives seriously, it is usually assumed that serious scientists are happy to leave the matter to them. But the University of Virginia School of Medicine has applied the scientific method to this matter and has found interesting evidence that cannot be dismissed out of hand. Recently, the Washington Post did a thoughtful piece on this subject, looking at the evidence in a critical but balanced manner.

The method of testing the possibility of a past life, or at least the possession of memories from before a person was born, goes back at least to Socrates. In the Meno, Socrates endeavors to argue for his doctrine of recollection. He claims that knowledge of such things as geometry and the Forms is acquired by the soul before it is embodied. People forget that they have this knowledge, but it can be restored by philosophical discussion. This, as I tell my students, is like losing files on your PC due to some corruption and then restoring them with a utility.

In the dialogue, Socrates walks Meno’s slave through a geometric exercise. Because, according to Socrates, the slave did not learn geometry in this life, he must have learned it before he was born—while his soul was communing with the Forms.

This, then, is the test for past lives: if a person has knowledge of a past life that they could not have acquired in this life, then that counts as evidence for a past life. Other factors, such as behavioral changes, can also serve as evidence. The Washington Post article does provide examples of cases that seem to provide evidence of such knowledge and behavior. Perhaps the best-known case is that of James Leininger. Dr. Jim Tucker provides a detailed analysis of the evidence and considers alternative explanations.

Going back to Socrates, critics respond to his argument by claiming that he guided the slave through the exercise, thus supplying the knowledge in a way that does not require any prior existence. The same concern applies to evidence of past lives: a person could be asked leading or guiding questions that make it appear that they have such knowledge. This is not to accuse people of deceit; this could happen without any such intention. But, of course, fraud is also a matter of concern. Credible investigations consider both these possibilities, and they should be given due consideration. As Hume said about miracles, we know that people lie, and that can often be the most plausible explanation. Less harshly, we also know that people can unintentionally ask leading and guiding questions, while we do not know whether people have past lives. So that explanation is, by default, the favored one until it is overturned.

Another obvious concern is that with the internet, a child could learn information that they present in a way that might seem like they are recalling a past life. Children also often pretend they are other people, be it a type of person or a specific person. The challenge is determining whether the child could have plausibly found the information and whether the behavior that seems to indicate a different personality is a matter of play or something else. By Occam’s Razor, the explanations that do not require metaphysical commitments have an initial advantage. But there are certainly metaphysical matters to consider.

Socrates presents what could be considered the standard version of reincarnation: a person is a soul, and the soul has a means of storing memories across lives. When a soul is reborn, it (might) recall some of these memories. While Socrates focused on things like the Forms, these could be mundane memories from a past life. As there are many competing accounts of the metaphysics of personhood, memory, and identity, these would all need to be considered and assessed. For example, Hume dispenses with the soul in favor of the idea that the self is a bundle of perceptions (before he concludes this matter is just a dispute over grammar). Memories are just stored perceptions, and these presumably could end up being part of a new person (or a continuation of the old).

John Locke explicitly talks about consciousness persisting or not doing so, so his theory would allow for the possibility of reincarnation. Buddhism also has a metaphysics that allows for reincarnation, albeit in a way that involves no self.

Interestingly, Dr. Tucker’s paper presents “thought bundles” or “thought pools” as possible explanations of these past life memories. The idea is that a living person connects to these bundles or pools and somehow taps the information in them. In terms of a metaphysical foundation, these could be Hume’s bundles or perhaps the remnants of a Lockean consciousness. These bundles or pools do raise many questions, such as what they are, how they would persist, and how a person would access them. That said, the human brain is a known storage system for such information, and we routinely transfer information; you are experiencing this right now as you read this. But due skepticism is wise here, and the idea of thought bundles existing like lost smartphones and being accessed by a mental 5G is one that should only be accepted based on adequate evidence. After all, this would seem to require that people have a form of psionics that allows them to access such information. While not impossible, since we know information can be transmitted, there does not seem to be much credible evidence for this.

In closing, since there is some credible evidence of this sort of special knowledge, and since philosophers have advanced metaphysical theories that would allow for past lives, this matter is worthy of due consideration.


Almost as if to prove that anything can become a front in the culture war, milk is part of the endless battle. Back in 2017, white supremacists were chugging milk as a demonstration of their whiteness, and some said that “if you can’t drink milk, you have to go back.” To make some sense of this, they were basing the claim on the ability to digest lactose as an adult, a genetic trait known to be more common in white people than in many other populations. Unfortunately for the white supremacists, this trait is also found among cattle breeders in East Africa. While the milk chugging seems to have calmed down, the milk war continues. In fact, this war has been fought for a long time, and the focus of the fight is raw milk.

Raw milk is exactly what it sounds like: milk that has not been “cooked.” In the case of milk, “cooking” is pasteurization, a heating process intended to kill pathogens. In the beginning, all milk was raw milk. Obviously enough, the main reason to pasteurize milk is to make it safer to drink. Before pasteurization, people (usually infants) could die from drinking milk. It is estimated that in 1858 at least 8,000 infants died in New York City alone from consuming unpasteurized “swill milk.” As pasteurization became widespread and required by law, the consumption of raw milk declined dramatically. But it never ceased.

As the organic food movement grew in the United States, raw milk enjoyed some popularity with liberals and was sold at Whole Foods. While Whole Foods has endured, liberals have largely moved on from raw milk. It has now been embraced by some conservatives, which makes sense.

Like pre-Trump conservatives, current conservatives favor deregulation of industry. Removing pasteurization requirements is deregulation, although the dairy industry has generally favored this requirement. Most current conservatives have embraced a distrust of expertise and dislike government telling them what to do. Health experts, as would be expected, say that consuming raw milk is risky and back up this claim with evidence. As would be expected, this simply motivates some people on the right to want raw milk even more, since they distrust these experts and see consuming it as an act of defiance.

In something of a flashback to our last pandemic, a virus has jumped species and presents a threat to human health. This latest virus is avian influenza (bird flu) and it has infected cows and even a very few humans. While this will probably not lead to another pandemic, it is rational to be on guard against allowing yet another strain of flu to spread.

Fortunately, pasteurization kills the flu virus, making milk from infected cows safe to drink. Raw milk, however, can contain the live virus and infect people, which is why experts have warned people not to drink it. This is grade school science; I remember learning about pasteurization and pathogens and doing an experiment in which we boiled water to kill bacteria. It is also basic food safety: washing foods, and heating those you cannot wash, are standard kitchen practice. People do get sick from drinking raw milk. Despite this, Alex Clark of Turning Point saw this as an opportunity to “trigger the left” and sell “got raw milk?” shirts. The original shirts featured a bull, leading to some mockery. But people do advance arguments in support of raw milk consumption.

One argument is based on the claim that raw milk has health benefits that pasteurized milk lacks. While pasteurization does affect milk, pasteurized milk is also fortified with vitamins, and there is no evidence that raw milk has any special health benefits. It is also sometimes claimed that pasteurization involves putting chemicals in milk, and hence that raw milk is better because it lacks these chemicals. While chemicals in food are a real concern, pasteurization is just a process of heating the milk and involves no chemicals.

Proponents of raw milk also point out that people get sick from contaminated vegetables and yet the government allows the sale and consumption of raw vegetables. The point seems to be that this shows that raw milk should be legal to sell. Ironically, this provides a reason for stronger regulation of foods and more inspections to check for contamination. After all, pointing out that people are getting ill from food is not a reason to reduce food safety, but a reason to increase it. Less regulation, as history shows, means that food is less safe.

I think that the best argument for allowing the sale of raw milk is the freedom of self-harm argument. J.S. Mill makes a reasonable case that a person’s liberty should not be limited except to protect others from harm. While we should try to persuade people to make good choices, if they are only hurting themselves, we do not have the right to restrict them. As long as the raw milk comes with the appropriate warning labels and people are able to make an informed choice to consume it, then they should be allowed to do so. That said, there are some concerns about this freedom.

One concern is that some people will not be making an informed choice because of the false claims being spread about raw milk and pasteurized milk. These false claims can harm people, which means that on Mill’s view of liberty it would be morally acceptable to restrict the spreading of these untruths. This can, obviously, be countered by the claim that people have a right to express their opinions even when they are wrong and potentially dangerous. But if a consumer understands that raw milk comes with risks and does not have all the claimed benefits, then they have the right to consume it. While folks on the right would agree with me that people should be able to drink raw milk, they would probably oppose my view that people should not lie about raw milk (or lie in general).

A second concern is the general problem of drawing the boundaries of harm. If Alex chugs some raw milk and gets sick but can recover on their own or pay their hospital bill, they have only harmed themselves. But if Alex chugs raw milk, gets infected with bird flu, and spreads it to their grandparents, who die of it, then they have harmed others, and no one has a moral right to spread disease. Given the views expressed by many on the right during the last pandemic, they would disagree with me on this limit: they would either claim that the risk is made up or that they have the right to put other people at risk in this way.

In closing, the battle over milk might seem weird, but it makes perfect sense when you understand the modern right. It will be interesting to see what battleground they choose next.