Small. Silent. Deadly. The perfect assassin or security system for the budget conscious. Send a few after your enemy. Have a few lurking about in security areas. Make your enemies afraid. Why drop a bundle on a bug, when you can have a Tarantula?

-Adrek Robotics Mini-Cyberform Model A-2 “Tarantula” sales blurb, Chromebook Volume 3.

 

Remote controlled or autonomous mechanical assassins are a staple of science fiction. The first one I read about was the hunter-seeker in Frank Herbert's Dune. This murder machine was guided to a target to kill them with a poison needle. This idea stuck with me and, when I was making Ramen noodle money writing for role-playing games, I came up with (and sold) the idea for three remote controlled killers produced by my evil, but entirely imaginary, company called Adrek Robotics. These included the spider-like Tarantula, the aptly named Centipede and the unpleasant Beetle. These killers were refined versions of machines I had deployed, much to the horror of my players, in various Traveller campaigns in the 1980s. To this day, one player carefully checks toilets before using them.

These machines, in my fictional worlds, work in a straightforward manner. They are relatively small robots armed with compact, but lethal and vicious, weapon systems such as poison injecting needles. These machines can operate autonomously or, as the description in Chromebook Volume 3 notes, under the remote control of a human or AI. Their small size allows them to infiltrate and kill or spy. Not surprisingly, clever ways were thought up to get them to their targets, ranging from mailing them with a shipment of parts to hiding them in baked goods (the murder muffin).

While, as far as I know, no real company is cranking out actual Tarantulas, the technology does exist to create a basic model of my beloved killer spider. As might be imagined, such little assassins raise some concerns.

Some concerns are practical in nature and relate to law enforcement, safety and military operations. Such little assassins would be easy to deploy against specific targets, or against random targets when used as weapons of terror. Imagine knowing that a killer machine could pop out of your cake or be waiting in your toilet, and that such machines could be difficult or impossible to trace. Presumably governments, criminals and terrorists would not include serial numbers or other identifying marks on their killers, unless they wanted to take credit.

Obviously enough, people can already easily kill each other. What such machines would change is that they would allow anonymous killing from a distance at very low cost. It is the anonymous and low-cost aspects that are the most worrisome regarding safety. After all, what often deters people from bad behavior is fear of being caught and punished. What also deters people is the cost of doing bad things. Using a terrorism example, sending people to the United States to engage in terrorism could be costly and risky. Putting some little assassins, perhaps equipped to distribute a highly infectious disease, in a shipping container would be cheap and without much risk to the terrorist.

There are also moral concerns. In general, the ethics of using little assassins to murder people is clear as it falls under the established ethics of murder and assassination. That is, they are generally wrong. There are, of course, the stock moral arguments for assassination. Or, as some prefer to call it, targeted killing.

One moral argument in favor of states using little assassins is based on their potential for precision. At this time, the United States usually assassinates targets with missiles fired from drones. While this is morally superior to bombing an area, a little assassin would be even better. After all, a little assassin would kill only the target, thus avoiding collateral damage and collateral killings. Of course, there is still the broader ethical concern about states engaging in assassination. But this issue is distinct from the specific ethics of little assassins.

Somewhat oddly, the same argument can be advanced in favor of using little assassins in criminal activities. While such activities would (usually) still be wrong, a precise kill is morally preferable to, for example, firing bullets into a crowd to hit a target.

In addition to the ethics of using such machines, there is also the ethics of producing them. Drones can easily be modified for lethal purposes. For example, a hobby drone could have a homemade bomb attached. In such cases, the manufacturer would be no more morally culpable than a car manufacturer whose car was used to run someone over. And, of course, weaponized drones are already in production.

While civilians can buy weapons, it is hard to justify civilian sales of lethal drones. After all, they do not seem to be needed for legitimate self-defense, hunting or legitimate recreational activity (although piloting a drone in a recreational dogfight would be fun). However, being a science fiction writer, I can easily imagine the NRA pushing hard against laws restricting the ownership of lethal drones. After all, the only thing that can stop a bad guy with a drone is a good guy with a drone. Or so it might be claimed.

Although I do dearly love my little assassins, I would prefer them to remain in the realm of fiction. However, if they are not already being deployed, it is but a matter of time. So, check your toilet. And your baked goods.

On what had been a pleasant morning run, I saw a man with a machete emerge from the woods. He yelled at me, then started sprinting in my direction. I felt an instant of fear, for I know the damage a machete can do to the human body. Then cold clarity took over, as it always does in times of danger. I have faith in my speed and endurance, but my speed failed me that day: the man caught up to me with shocking speed.  I spun to face him, crazily hearing the line from One Piece that “scars on the back are a swordsman’s shame.” More rationally, I knew that death was almost certain if he was able to hack at my back.

A past misfortune in the park suddenly seemed fortunate. In 2023 my dog Remy was attacked by another dog and I, contrary to the advice of experts, intervened. Remy got a trip to the emergency vet; afterwards I took myself to the ER where I was treated for rabies and had my hand x-rayed to check for teeth fragments. After that, I always walk and run with (legal) weapons. As such, I was well armed when the man caught up to me. I, of course, deployed my best weapon: I spoke to him.

He stopped and lowered what I had seen as a machete. While he was armed, it was with just a hefty stick. To this day, I use this as an example of how our perceptions can be mistaken when we are afraid: I perceived the metal blade of a machete when it was just a stick. I was still concerned, but fighting a man armed with a stick is different from facing a foe with a machete.

After a few minutes of confusing conversation, he explained that someone had stolen his laptop from his apartment and fled towards the park. I assured him that I did not have his laptop. He then set off at a jog to find the perpetrator. I, of course, had to follow him. He immediately ran into another runner, and I helped convince him that the runner did not steal his laptop. He moved on, and I ran with him, to protect him and others he might encounter.  Eventually he calmed down and said he was going home; I said farewell and finished my run.

 I briefly thought about contacting the police, but I feared for his safety. Like anyone who follows the news, I knew that there would be a chance that if an officer saw him with a stick, they would be “afraid for their life” and shoot him.  They might, as I did, perceive him as armed with a machete or even a gun. I never saw him again, but I hope he is okay and that he has a new laptop.

Since that incident, I have thought about my philosophy of violence, working out my principles. Each new episode of violence in the news, such as when ICE agents kill people, sets me thinking about a philosophy of violence again. I have, obviously, decided to start writing up my philosophy of violence. But I will begin with my backstory to provide context and to help me better understand my biases.

I grew up in a small Maine town, far from wars and criminal violence. That said, my backstory includes familiarity with the ways of violence. When I was a kid, my parents worked at a summer camp. One perk was that my sister and I were able to participate in the activities as if we were paying campers. I like to joke that I was trained in medieval warfare: I was taught fencing by an Olympic medalist and trained in horseback riding and archery. While this was sports rather than violence, my experience in fencing taught me about facing another person in combat.

I started shooting BB guns early on, then real guns as soon as my dad allowed that. I was soon hunting and was thus made familiar with guns and killing animals. I have been shooting my whole life, so I am comfortable with guns and noise. I was also properly trained in the responsibilities one takes on when one is armed.

When I started playing Dungeons & Dragons, I had the unwise idea that my friends and I should make our own weapons and fight each other for real. I had a wooden shield, a flail and a wooden sword, while my friend Mike favored a croquet mallet as a warhammer. While there were injuries, blood and stitches, this was not real battle. Still, it did increase my familiarity with being in a fight.

While running is useful for escaping fights, it also had a calming effect on me, shaping my disposition and allowing me to endure pain and discomfort.

When I started graduate school in 1993, I decided to earn my black belt in Tae Kwon Do and did so just before I completed my doctorate. That made me even more familiar with fighting, and I continue to train to this day. The meditation and moral aspects of the training are also critical, enhancing the pain tolerance and calmness arising from my running. I am, of course, a philosopher, so talking is a core skill for me.

Because of my background, I was well suited for that encounter. Although the attempt to run away failed, that turned out to be for the best. Because of my experience and training, my reason remained in control during the encounter: fear and anger did not become my masters. And these are terrible masters, for they can lead us to unnecessary violence. While I was not sure I could have won the fight, should it have come to that, my competence in violence gave me the confidence to choose not to use it. This might strike some as odd, but my experience has been that the stronger a person truly is, the less inclined they are to use violence. Needless violence seems to arise most often from the fear of those who think themselves strong but know they are weak, from the anger of those who lack self-discipline, and from those ruled by vices such as cruelty. I do my best not to be that sort of person, for they can easily act like monsters.

I provide this backstory, as noted above, to set the stage for the discussions to follow in which I develop my philosophy of violence. I am writing from my own biased perspective and part of sorting out my philosophy of violence is trying to see how my backstory is shaping (or distorting) my view. In the following essays, I will develop my religious view of the ethics of violence and my moral view of violence.

The scene is a bakery in a small town in Indiana. Ralph and Sally, a married couple, run the Straight Bakery with the aid of the pretty young Ruth. Dr. Janet and her fiancée Andrea enter the shop, looking to buy a cake.

Sally greets them with a pleasant smile, which quickly fades when she finds out that Janet and Andrea are a lesbian couple. Pointing at the door, she says “baking you a wedding cake would violate my religious beliefs. Go find Satan’s baker! Leave now!” The couple leave the shop, planning to drive to the next town as their small town has but one bakery.

At the end of the day, Sally leaves the shop. Ralph says he will help Ruth close the shop. After Sally leaves, Ralph and Ruth indulge in some casual adultery. As God intended.

 

Back in 2015, Indiana got national attention for its version of the Religious Freedom Restoration Act. The bill was aimed at preventing state and local governments in Indiana from "substantially burdening" the exercise of religion unless it can be proven that the state has a compelling interest and is using the least restrictive means for acting on that interest.

Proponents claimed it was to protect people, such as business owners, with strong religious beliefs from the intrusion of the state. Those who opposed it noted it would legalize discrimination and that it was aimed at gays and lesbians. Many other states have similar laws, but some have laws that protect people from discrimination based on sexual orientation.

Since such laws cannot (yet) specify individual religions for protection, they sometimes have interesting consequences, possibly involving Satanism, as happened in my adopted state of Florida. While the legal aspects of such laws are of great concern, as a philosopher my main concern is with the ethics of the matter.

On the face of it, religious freedom seems good as it falls under the broader liberty of thought and belief (which is ably supported by Mill in his work on liberty). As such, these sorts of religious freedom laws seem to be a morally reasonable defense of a well-established right.

But these laws, as opponents argue, allow people to discriminate, provided it can be justified on religious grounds. The law cannot, obviously, require that a religion be true, rational, consistent, sensible or even sane as all religions are equally protected. This, of course, could lead to some serious consequences.

Driving home, Sally’s car is struck by a delivery van, and she is badly injured. Luckily, Dr. Janet and Andrea (a trained nurse) pull over to see if they can help. As Dr. Janet and Andrea rush to help, they see it is Sally. Dr. Janet, a devout member of the Church of Relentless Tolerance, has sworn to God that she will not treat any straight bigots. Looking down at the dying Sally, Dr. Janet says “saving you would violate my sincerely held religious beliefs. Sorry. Perhaps you can find another doctor.” Sally dies.

The obvious counter to this sort of scenario is that religious freedom does not grant a person the liberty to deny a person an essential service, such as medical treatment. Using the standard principle of harm as a limit on liberty, the freedom of religion ends when it would cause unwarranted harm to another person. It could also be argued that the moral obligation to others would override the religious freedom of a person, compelling her to act even against her religious beliefs. If so, it would be wrong of Dr. Janet and Andrea to let Sally die. This, of course, rests on either the assumption that harm overrides liberty or the assumption that obligations override liberty. There are well-established and reasonable arguments against both assumptions. That said, it would certainly seem that the state would have a compelling interest in not allowing doctors, pharmacists, and others to allow people to die or suffer harm because of their religious beliefs. But, perhaps, religious freedom trumps all these considerations.

After having a good time with Ruth, Ralph showers away the evidence of his sins and then heads for home. Ruth helps herself to money from the register and adjusts the spreadsheet on the business PC to cover up her theft.

Ralph is horrified to learn that Sally has been killed. He takes her to the only funeral home in town, run by the Marsh family (who moved there from Innsmouth). Unfortunately for Ralph, the Marsh family members are devoted worshippers of Dagon and their religious beliefs forbid them from providing their services to Christians. After being ejected from the property, Ralph tries to drive Sally’s body to the next town, but his truck breaks down.

He finds that the nearest shop is Mohamed’s Motors, a Muslim owned business. Bob, the tow truck driver, says that while he is generally fine with Christians, he is unwilling to tow a Christian’s truck. He does recommend his friend Charlie, a Jewish tow truck driver who is willing to tow Christians, if it is not on the Sabbath and the Christian is not a bigot.  Ralph cries out to God at the injustices he has suffered, forgetting that he has reaped what he has sown.

In the case of these sorts of important, but not essential, services it could be argued that people would have the right to discriminate. After all, while the person would be inconvenienced (perhaps extremely so), the harm would not be enough to make the refusal morally wrong. That is, while it would be nice of Bob to tow Ralph’s truck, it would not be wrong for him to refuse, and he is under no obligation to do so. It might, of course, be a bad business decision. But that is another matter entirely.

If appeals to harm and obligations fail, then another option is to argue from the social contract. The idea is that people who have businesses or provide services do not exist in a social vacuum: they operate within society. In return for the various goods of society (police protection, protection of the laws, social rights and so on) they are required to render their services and provide their goods to all the members of civil society without discrimination. This does not require that they like their customers or approve of them. Rather, it requires that they honor the tacit social contract: in return for the goods of society that allow one to operate a business, one must provide goods and services to all members of the society. That is the deal one makes when one operates a business in a democratic society that professes liberty and justice for all.

Obviously, people do have the right to refuse goods and services under certain conditions. For example, if a customer went into Ralph & Ruth’s Bakery (Ralph moved on quickly) and insulted Ralph, urinated on the floor and demanded a free cake, Ruth would be justified in refusing to make him a cake. After all, his behavior would warrant such treatment. However, refusing a well-behaved customer because she is gay, black, Christian, or a woman would not be justified. This is because those qualities are not morally relevant to refusing services. Most importantly, freedom of religion is not freedom to discriminate. Despite what some judges think.

It might be countered that the government has no right to force a Christian to bake a wedding cake for a gay couple. This is true, in that the person can elect to close his business rather than bake the cake. However, he does not have the moral right to operate a business within civil society if he is going to unjustly discriminate against members of that society. So, in that sense, the state does have the right to force a Christian to bake a wedding cake for a gay couple, just as it can force him to bake a cake for a mixed-race couple, a Jewish couple, or an atheist couple.

 

 

 

While the notion of punishing machines for misdeeds has received some attention in science fiction, it seems worthwhile to take a brief philosophical look at this matter. This is because the future, or so some rather smart people claim, will see the rise of intelligent machines: machines that do things that would be misdeeds or crimes if committed by a human.

In general, punishment is aimed at one or more of these goals: retribution, rehabilitation, or deterrence. Each will be considered in turn in the context of machines.

Roughly put, punishment for the purpose of retribution is aimed at paying an agent back for wrongdoing. This can be seen as a form of balancing the books: the punishment inflicted on the agent is supposed to pay the debt it incurred by its wrongdoing. Reparation can, to be a bit sloppy, be included under retribution, at least in the sense of the repayment of a debt incurred by the commission of a misdeed.

While a machine can be damaged or destroyed, there is the question of whether it can be the target of retribution. After all, while a person might kick her car for breaking down or smash her can opener for cutting her finger, it would be odd to consider this retributive punishment. This is because retribution requires that a wrong has been done by an agent, which is different from the mere infliction of harm. Intuitively, a piece of glass can cut your foot, but it cannot wrong you.

If a machine can be an agent, which was discussed in an earlier essay, then it could do wrong and be a target for retribution. However, even if a machine had agency, there is still the question of whether retribution would apply. After all, retribution requires more than just agency on the part of the target. It also requires that the target can suffer from the payback. On the face of it, a machine that could not suffer would not be subject to retribution as retribution is based on doing a “righteous wrong” to the target. To illustrate, suppose that an android injured a human, costing him his left eye. In retribution, the android’s left eye is removed. But the android does not suffer as it does not feel any pain and is not bothered by the removal of its eye. As such, the retribution would be pointless, and the books would not be balanced.

This could be countered by arguing that the target of the retribution need not suffer as what is required is the right sort of balancing of books, so to speak. So, in the android case, removal of the android’s eye would suffice, even if the android did not suffer. This does have some appeal since retribution against humans does not always require that the human suffer. For example, a human might break another human’s iPad and have her iPad broken in turn but not care at all. The requirements of retribution would seem to have been met, despite the lack of suffering.

Punishment for rehabilitation is intended to transform wrongdoers so that they will no longer be inclined to engage in the wrongful behavior that incurred the punishment. This differs from punishment aimed at deterrence, which aims at providing the target with a reason not to engage in the misdeed in the future rather than changing its inclinations. Rehabilitation is also aimed at the agent who did the misdeed, whereas punishment for the sake of deterrence is usually intended to affect others as well.

Obviously, a machine that lacks agency cannot be subject to rehabilitative punishment as it cannot “earn” such punishment by its misdeeds and, presumably, cannot have its behavioral inclinations corrected by such punishment.

To use an obvious example, if your computer crashes and you lose hours of work, punishing the computer to rehabilitate it would be pointless. Not being an agent, it did not “earn” the punishment and punishment will not incline it to crash less in the future.

A machine that possesses agency could “earn” punishment by its misdeeds. It also seems possible to imagine a machine that could be rehabilitated by punishment. For example, one could imagine a robot dog that could be trained in the same way as a real dog. After leaking oil in the house or biting the robo-cat and being scolded, it could learn not to do those misdeeds again.

It could be argued that it would be better, both morally and practically, to build machines that would learn without punishment or to teach them without punishing them. After all, though organic beings seem wired in a way that requires that we be trained with pleasure and pain (as Aristotle would argue), there might be no reason that our creations must work the same way. But perhaps it is not just a matter of being organic; perhaps intelligence and agency require the capacity for pleasure and pain. Or perhaps not. Or it might simply be the only way that we know how to teach. We will be, by our nature, cruel teachers of our machine children.

Then again, we might be inclined to regard a machine that does misdeeds as being defective and in need of repair rather than punishment. If so, such machines would be “refurbished” or reprogrammed rather than rehabilitated by punishment. There are those who think the same of human beings and this raises the same issues about how agents should be treated.

The purpose of deterrence is to motivate the agent who did the misdeed or other agents not to commit that deed. In the case of humans, people argue in favor of capital punishment because of its alleged deterrence value: if the state kills people for certain crimes, people are less likely to commit those crimes.

As with other forms of punishment, deterrence requires agency: the punished target must merit the punishment, and the other targets must be capable of changing their actions in response to that punishment.

Deterrence, obviously enough, does not work in regard to non-agents. For example, if a computer crashes and wipes out a file a person has been laboring on for hours, punishing it will not deter it. Smashing it in front of other computers will not deter them.

A machine that had agency could "earn" such punishment by its misdeeds and could, in theory, be deterred. The punishment could also deter other machines. For example, imagine a combat robot that performed poorly in its mission (or showed robo-cowardice). Punishing it could deter it from doing that again, and it could serve as a warning, and thus a deterrence, to other combat robots.

Punishment for the sake of deterrence raises the same sort of issues as punishment aimed at rehabilitation, such as the notion that it might be preferable to repair machines that engage in misdeeds rather than punishing them. The main differences are, of course, that deterrence aims not at making the target inclined to behave well but merely at disinclining it from behaving badly, and that deterrence is also aimed at those who have not committed the misdeed.

Philosophers have long speculated about autonomy and agency, but the development of autonomous systems has made such speculation even more important. Keeping things simple, an autonomous system is capable of operating independently of direct human control. Autonomy comes in degrees of independence and complexity. It is the capacity for independent operation that distinguishes autonomous systems from those controlled externally.

Toys provide useful examples of this distinction. A wind-up mouse toy has some autonomy: once wound up and released, it can operate on its own until it runs down. A puppet, in contrast, has no autonomy as a puppeteer must control it.

Robots provide examples of more complex autonomous systems. Google's driverless car is an example of an advanced autonomous machine. Once programmed and deployed, it might be able to drive itself to its destination. A normal car is a non-autonomous system as the driver controls it directly. Some machines allow both autonomous and non-autonomous operation. For example, there are drones that follow a program guiding them to a target, after which an operator can take direct control.

Autonomy, at least in this context, is distinct from agency. Autonomy is the capacity to operate (in some degree) independently of direct control. Agency, at least in this context, is the capacity to be morally responsible for one's actions. There is a connection between autonomy and moral agency as moral agency requires autonomy. After all, an entity whose actions are completely controlled externally would not be responsible for what it was made to do. For example, a puppet is not accountable for what the puppeteer makes it do. Likewise for remote-controlled drones used to assassinate people.

While autonomy is necessary for agency, it is not sufficient. While all agents have some autonomy, not all autonomous entities are moral agents. A wind-up toy has a degree of autonomy but no agency. A modern robot drone following a pre-programmed flight plan has a degree of autonomy but lacks agency. If it collided with a plane, it would not be morally responsible. The usual reason why such a machine would not be an agent is that it lacks the capacity to decide. Or, put another way, it lacks freedom. Since it cannot do otherwise, it is no more morally accountable than an earthquake or a supernova.

One obvious problem with basing agency on freedom (especially metaphysical free will) is that there is endless debate over this subject. There is also the epistemic problem of how one would know whether an entity has such freedom, since free will seems epistemically indistinguishable from its absence.

As a practical matter, it is often just assumed people have the freedom needed to be agents. Kant famously took this approach. What he saw as the best science of his day indicated a deterministic universe devoid of metaphysical freedom. However, he contended that such freedom was needed for morality, so it should be accepted for this reason.

While humans are willing (generally) to attribute freedom and agency to other humans, there are good reasons not to attribute freedom and agency to autonomous machines, even those that might be as complex as (or even more complex than) a human. The usual line of reasoning is that since such machines would be built and programmed by humans, they would do what they do because they are what they were made to be. This is in contrast to the agency of humans: humans, it is alleged, do what they do because they choose to do what they do.

This distinction between humans and suitably complex machines seems a mere prejudice favoring organic machines over mechanical machines. If a human were in a convincing robot costume and credibly presented as a robot while acting like a normal human, people would be inclined to deny that "it" had freedom and agency. If a robot were made to look and act just like a human, people would be inclined to grant it agency, at least until they learned it was "just" a machine. Then there would probably be an inclination to regard it as a very clever but unfree machine. An excellent fictional example of this is Harlan Ellison's Demon with a Glass Hand.

But it would not be known whether the human or the machine had the freedom allegedly needed for agency. Fortunately, it is possible to have agency even without free will (but with a form of freedom). The German philosopher Leibniz held the view that what each person will do is pre-established by their inner nature. On the face of it, this seems to entail there is no freedom: each person does what they do because of what they are, and they cannot do otherwise. Interestingly, Leibniz takes the view that people are free. However, he does not accept the commonly held view that freedom requires actions that are unpredictable and spontaneous. Leibniz rejects this view in favor of the position that freedom is unimpeded self-development.

For Leibniz, being metaphysically without freedom would involve being controlled from the outside, like a puppet controlled by a puppeteer or a vehicle operated by remote control.  In contrast, freedom is acting from one’s values and character. This is what Leibniz and Taoists call “inner nature.” If a person is acting from this inner nature and not external coercion so that the action is the result of character, then that is all that can be meant by freedom. This view, which attempts to blend determinism and freedom, is known as compatibilism. On this view, humans have agency because they have the required degree of freedom and autonomy.

If this model works for humans, it could apply to autonomous machines. To the degree that a machine is operating in accord to its “inner nature” and is not operating under the control of outside factors, it would have agency.

An obvious objection is that an autonomous machine, however complex, would have been built and programmed (in the broad sense of the term) by humans. As such, it would be controlled and not free. The easy and obvious reply is that humans are “built” by other humans (by mating) and are “programmed” by humans via education and socialization. As such, if humans can be moral agents, then a machine could also be a moral agent.

From a moral standpoint, I would suggest a Moral Descartes’ Test (or a Moral Turing Test). Descartes argued that the sure proof of having a mind is the capacity to use true language. Turing later proposed a similar test involving the ability of a computer to pass as human via text communication. In the moral version, the test would be a judgment of moral agency: can the machine be as convincing as a human in its possession of agency? Naturally, a suitable means of concealing the fact that the being is a machine would be needed to prevent prejudice from affecting the judgment. The movie Blade Runner featured something similar, the Voight-Kampff test, which aimed at determining whether the subject was a replicant or a human. This test was based on the differences between humans and replicants in regard to emotions. In the case of moral agency, the test would have to be crafted to determine agency rather than to distinguish a human from a machine, since the issue is not whether a machine is human but whether it has agency. A non-human moral agent might differ greatly from a human, and it should not be assumed that an agent must be human or that non-humans cannot be moral agents. The challenge is developing a test for moral agency. It would be interesting if humans could not pass it.

 

A Philosopher’s Blog 2025 brings together a year of sharp, accessible, and often provocative reflections on the moral, political, cultural, and technological challenges of contemporary life. Written by philosopher Michael LaBossiere, these essays move fluidly from the ethics of AI to the culture wars, from conspiracy theories to Dungeons & Dragons, from public policy to personal agency — always with clarity, humor, and a commitment to critical thinking.

Across hundreds of entries, LaBossiere examines the issues shaping our world:

  • AI, technology, and the future of humanity — from mind‑uploading to exoskeletons, deepfakes, and the fate of higher education
  • Politics, power, and public life — including voting rights, inequality, propaganda, and the shifting landscape of American democracy
  • Ethics in everyday life — guns, healthcare, charity, masculinity, inheritance, and the moral puzzles hidden in ordinary choices
  • Culture, identity, and conflict — racism, gender, religion, free speech, and the strange logic of modern outrage
  • Philosophy in unexpected places — video games, D&D, superheroes, time travel, and the metaphysics of fictional worlds

Whether he is dissecting the rhetoric of conspiracy theories, exploring the ethics of space mining, or reflecting on the death of a beloved dog, LaBossiere invites readers into a conversation that is rigorous without being rigid, principled without being preachy, and always grounded in the belief that philosophy is for everyone.

This collection is for readers who want more than hot takes — who want to understand how arguments work, why beliefs matter, and how to think more clearly in a world that rewards confusion.

Thoughtful, wide‑ranging, and often darkly funny, A Philosopher’s Blog 2025 is a companion for anyone trying to make sense of the twenty‑first century.

 

Available for $2.99 on Amazon

 

 

 

While some countries will pay ransoms to free hostages, the United States has a public policy of not doing so. One reason not to pay a ransom for hostages is based on sticking to a principle. This principle could be that bad behavior should not be rewarded, that hostage taking should be punished, or some other principle.

One of the best arguments against paying ransoms for hostages is a practical and utilitarian moral argument. Paying ransoms gives hostage takers an incentive to take hostages. This incentive means more people will be taken hostage. The cost of not paying is, of course, the possibility that hostages will be killed. However, the argument goes, if hostage takers realize that they will not be paid a ransom, they will have less incentive to take more hostages. This will reduce the chances that hostages will be taken. The calculation is, of course, that the harm done to the current hostages will be outweighed by the benefits of not having people taken hostage in the future.

This argument assumes that hostage takers are primarily motivated by the ransom. If they are taking hostages primarily for other reasons, such as for status, to make a statement or to get media attention, then not paying them a ransom will not significantly reduce their incentive to take hostages. This leads to a second reason why ransoms should not be paid.

In addition to the incentive argument, there is also the funding argument. While a terrorist group might have reasons other than money to take hostages, they benefit from getting ransoms. The money they receive can be used to fund additional operations, such as taking more hostages. Obviously, if ransoms are not paid, then such groups lose this funding, and this could impact their operations. Since paying a ransom would be funding terrorism, this provides both a moral and a practical reason not to pay.

While these arguments have a rational appeal, they are typically countered by emotional appeals. One approach to arguing that ransoms should be paid is the “in their shoes” appeal. The method involves asking a person whether they would want a ransom to be paid for their release or for the release of a loved one. Most people would want the ransom paid, assuming doing so would be effective. Sometimes the appeal is made explicitly in terms of emotions: “how would you feel if your loved one died because the government refused to pay the ransom?” Obviously, a person would feel awful.

This method does have considerable appeal. The “in their shoes” appeal is like the golden rule approach (do unto others as you would have them do unto you): the idea is that policy should be based on how you would want to be treated in that situation. If I would not want the policy applied to me (that is, I would want to be ransomed or have my loved one ransomed), then I should be morally opposed to a no-pay policy as a matter of consistency. This certainly makes sense: if I would not want a policy applied in my own case, then I should (in general) not support that policy.

One obvious counter is that there seems to be a distinction between what a policy should be and whether a person would want that policy applied to herself. For example, some universities have a policy that if a student misses more than three classes, the student fails the course. Naturally, no student wants that policy to be applied to her (and most professors would not have wanted it to apply to them when they were students), but this does not show that the policy is wrong. As another example, a company might have a policy of not providing health insurance to part time employees. While the CEO would certainly not like the policy if she were part-time, it does not follow that the policy must be a bad one. As such, policies need to be assessed not just in terms of how a person feels about them, but in terms of their merit or lack thereof.

Another obvious counter is to use the same approach, only with a modification. In response to the question “how would you feel if you were the hostage or she were a loved one?” one could ask “how would you feel if you or a loved one were taken hostage in an operation funded by ransom money?” Or “how would you feel if you or a loved one were taken hostage because the hostage takers learned that people would pay ransoms for hostages?” The answer would be, of course, that one would feel bad about that. However, while how one would feel can be useful in discussing the matter, it is not decisive. Settling the matter rationally requires considering more than just how people would feel. It requires looking at the matter with a degree of objectivity. That is, not just asking how people would feel, but what would be right and what would yield the best results in the practical sense. Obviously, talking about objectivity is easy when one is not a hostage.

In my previous essay I set the stage for discussing the concern about people switching competition categories to gain something. It is to this matter that I now turn.

The Sickle Cell 5K in Tallahassee is known for its excellent masters trophy for the overall male and female masters runners. It has consistently been bigger and better than the second and third overall awards. One year a masters runner was third overall but wanted the male masters award instead. This created a problem. While there was no rule about this, there are established running norms: overall places take precedence over the masters category, and the masters category takes precedence over age group placing. So, a 40+ year old runner who placed first through third would get the corresponding overall award. The first 40+ runner outside the top three would get the masters award, and the next runner in their age group would win that age group. As would be expected, some people got mad about this runner’s effort to get the masters award, since he was breaking the norms and traditions to get a better award.

His argument, which was not unreasonable, was that he was the first masters runner and hence had earned that award. This meant that the 4th place runner would get third overall. This might sound odd, but (as noted above) the running norms already allow a person who finishes second in their age group to place first in it if the person who would win that age group wins an overall or masters award (most races have a no-double-dip rule). While his request did break the norms, he was in the masters category. One might say that he elected to identify as a masters runner for the purpose of the award. He got the award when the original masters winner did everyone a favor by giving it to him, allowing the awards to continue. But this episode is still spoken of today, and switching categories to get a better award is usually seen as questionable. This episode can be used as an analogy.

Suppose that transgender athletes are like the masters athlete: they belong in their chosen category, but they are changing from one category to another in order to get a better award (or a win). Just as the masters runner could have accepted the third-place award, a transgender runner who identifies as female could stick to competing as male. But by switching categories, the athletes could be seen as gaining an advantage, and thus they have an incentive to do so. They are also both picking a category they really belong in, so they are not engaged in a cheat or deceit. But if their motive is to switch for a gain and in doing so they harm another athlete, then this would seem to be wrong. The masters runner took the better award from another runner, and a transgender athlete who changes categories to win takes away a win from another female athlete. This can be used to ground a moral argument against allowing athletes to change categories to win. That said, there is an easy counter.

Imagine a runner attends a Division 1 school and finds that they are good enough for the division but not good enough to regularly win. They switch to a Division 2 school so they can win regularly. They have changed their category to improve their gains and have “harmed” other runners. They might displace a runner from the team and will take victories that would have gone to other athletes had they not changed their category. While this approach to sports might not seem morally ideal, the runner would not be acting wrongly. They would belong in Division 2 even if they could have stuck with Division 1. Likewise for an athlete who switches their gender category by transitioning: one might take issue with someone doing this for an advantage, but it is morally acceptable. It must be noted that people do not transition just to get an advantage. Some readers probably doubt that an athlete can legitimately switch gender categories, so I now turn to this matter.

Let us go back to the masters award incident but change it slightly. Imagine that the third-place runner is 39 years and 10 months old but decides to identify as a masters runner to get the award. In this case, the issue is easily resolved: age is an objective matter, and they are not a masters runner. Hence, they do not get the award. Likewise, athletes who claim to be female but are not have no right to switch categories. While this might seem to settle the matter, there are at least two replies.

One reply is to go back to the masters case. Imagine that the runner is 39 years old based on his birthday, but he is a devout Catholic who sincerely believes that life begins at conception and sets his age accordingly at over 40. By his religion-based standard of age, he is a masters runner. While the official age of a runner for racing is based on their birthdate and not their moment of conception, the runner could make an argument based on freedom of religion: he is being discriminated against by the failure of the race officials to recognize that he is at least 40, because under his faith his life began at conception. Likewise, a runner who self-identifies as female could argue that she is being discriminated against when she is not allowed to select her category based on her beliefs about what it is to be female. Both runners could agree that there is a fact of the matter about being a masters runner or a female runner, but they disagree with the standards being imposed upon them by those who they see as discriminating against them. As such, the debate becomes one of defining category membership.

In the case of age, the dispute would seem to be easy to settle: to avoid charges of attacking religious freedom, the rules about age could be put neutrally to specify that the time from birth is used to determine the competition age of a runner. The standard applies to everyone and intuitively seems fair. In the case of gender, the same approach should be taken: a fair set of standards to categorize people is needed. But gender is much more complicated than age.

If gender were only of concern in sports, then the matter would be easier to address. But gender impacts every aspect of a person’s life and is, of course, a key battleground in the culture wars. As such, even if one makes a good faith effort to develop gender standards for sports categories, it will be a daunting task. Obviously, many people think they have the right answer and that they could easily solve the problem by imposing their own views on everyone else.

There are, of course, some easy and obvious sufficient conditions for being admitted into the female category: people with XX chromosomes and female anatomy and physiology get an automatic admission (if they wish). Beyond that, the debates begin. Since this matter is complicated and not my area of expertise, I freely admit that I do not have a set of necessary and sufficient conditions. I do not even have a well-considered set of general principles.

One obvious principle is that it would be morally wrong for a male athlete to lie about his identity to gain a competitive advantage. The moral problem is, of course, the intent to deceive to gain an advantage.

This is analogous to my view that it is wrong for a person to lie about their religious views to gain something, such as a person who uses a religious excuse to get away with discrimination or to avoid paying taxes. My moral assessment would, of course, adjust in cases of sincere belief, even if the person’s belief turns out to be untrue. As with the religion case, there is the practical problem of sorting out when people are lying, though in the United States we generally do not put professed religious beliefs to a test.

While there is no crisis in sports involving male athletes switching categories in large numbers, allowing people to switch categories merely by saying they identify with that category does provide an opportunity for the unprincipled to exploit, just as allowing people to claim special treatment simply for asserting they have religious beliefs does. The moral and practical challenge is sorting out what tests should be used to protect against such unprincipled exploitation while avoiding discriminating against people. We do not make people prove that their religious beliefs are true before allowing them to gain the benefits of professing belief, and we need to be consistent when it comes to professed gender identity. One approach, which is what we generally do for religion, is to take people at their word unless there is adequate evidence of an intent to deceive. For example, a male athlete who posted “LOL identifying as a girl just to win the 5K today, but fellas stay away I ain’t gay! After I win, I will be a boy again.” would be intending to deceive and should, one would infer, not be allowed to compete in the 5K as a female. Likewise, if someone bringing a freedom of religion lawsuit so they can discriminate posted “LOL pretending to believe in God so I can hate on the gays!”, then they should probably not win that lawsuit. But in other cases, we should accept their profession as sincere. I do admit this does not settle the matter.

Upon taking office, Joe Biden signed an executive order requiring that schools receiving federal funding allow people who self-identify as female onto female sports teams. Pushback against it ranged from thoughtful considerations of fairness to misogyny masquerading as morality. Exploiting the manufactured panic over transgender people, Trump signed an executive order banning transgender people from competing in women’s sports. While the narrative is that the anti-trans athlete folks are motivated by fairness, this is easily disproved by their lack of concern about fair treatment of women in other areas of sports, such as funding and facilities.

In addition to being complicated on its own, the fairness of transgender women competing with other women is linked to other complicated matters, such as general concerns about fairness in society and issues of gender identity. People arguing in good faith can make arguments in one area without realizing the implications of these arguments in other areas. To illustrate, consider the fictional character of Polly. Polly is a national level high school runner who holds to a principle of fairness. Polly’s brother, Paul, is faster than Polly but not a national level male runner. He jokingly suggests putting on a dress and beating Polly, which worries her. If a person could just self-identify as female, Paul could do so and suddenly be a national level female high school runner. In a panic, Polly thinks up a nightmare scenario: the top male runners compete as boys, switch their gender identities, and win again as girls! Polly and her sister runners would be out of the competition, which would be unfair. In good faith, Polly can make a good moral argument against allowing this based on fairness. But her seemingly reasonable argument might justify harming people in the broader context of fairness in society, something Polly would not want. As such, we should be careful to consider the implications that arguments about fairness in sports have in other areas.

People can also argue in bad faith, presenting an appealing fairness argument about sports while not caring about fairness. They might be using the sport argument as a Trojan horse to lure people into their ideological agenda or they might want to weaponize a seemingly reasonable argument. This is not to say that arguing in bad faith entails that a person must be making false claims or fallacious arguments. After all, one can use truth and good logic in bad faith. But we should be on guard against bad faith arguments. I will endeavor to follow my own advice and make good faith arguments while considering their implications.

From the standpoint of fairness, there are reasonable moral grounds to be concerned about allowing people to self-identify their category for competition. To focus the discussion, I will use my own sport of running and the specific context of road races—but the general points apply across all sports.

Road races have well-established categories that are based on a conception of fair competition. Almost all races have gender categories (male or female). Most races have age groups, and some also include the masters category (40+) and sometimes the grand masters category (50+). A few races also add a weight category (Clydesdale or Athena). In addition to categories created for fairness, races sometimes have categories for other reasons. For example, the Bowlegs 5K in Tallahassee raises money for a college scholarship and has a special educator category. Since educators as a class have neither advantages nor disadvantages relative to other runners, this category is not based on fairness.

In most cases, these categories serve their intended purpose, as they make competition fairer by sorting people into groups based on qualities that impact performance. In some cases, the categories can have the unintended effect of allowing a person in a generally advantaged category to win in their category while losing to a runner in a disadvantaged category. For example, a 50-year-old runner might win nothing in his age group while beating every runner in the younger age groups. He thus loses to inferior performances because of the very age groups intended to allow older runners like him to compete fairly. While this can be annoying, such cases are rare, and the overall positive impact of age groups and gender divisions outweighs the negative aspects. This is a good general approach to setting policies: a good policy will never be perfect, but a good policy creates more overall good than bad. But there are people who do try to exploit categories to their advantage. I will turn to this in my next essay.

Three Confederate veterans, who fought against the United States of America, were nominated for admission to Florida’s Veterans’ Hall of Fame. The purpose of the hall is to honor “those military veterans who, through their works and lives during or after military service, have made a significant contribution to the State of Florida.”

The three nominees were David Lang, Samuel Pasco and Edward A. Perry. Perry was Florida’s governor from 1885 to 1889; Pasco was a U.S. senator. Lang assisted in creating what is now the Florida National Guard. They did make significant contributions to Florida. The main legal question was whether they qualified as veterans. Since Florida was in rebellion (in defense of slavery) against the United States, there is also a moral question of whether they should be considered United States veterans.

The state of Florida and the US federal government have similar definitions of “veteran.” For Florida, a veteran is a person who served in the active military and received an honorable discharge. The federal definition states that “The term ‘veteran’ means a person who served in the active military, naval, or air service, and who was discharged or released therefrom under conditions other than dishonorable.” The law also defines “Armed Forces” as the “United States Army, Navy, Marine Corps, Air Force and Coast Guard.” The reserves are also included as being in the armed forces.

According to Mike Prendergast, the executive director of the Department of Veterans Affairs, the three nominees in question did not qualify because the applications did not indicate that the men served in the armed forces of the United States of America. Interestingly, Agricultural Commissioner Adam Putnam took the view that “If you’re throwing these guys out on a technicality, that’s just dumb.”

Presumably, Putnam saw the fact that the men served in the Confederate army and took up arms against the United States as a technicality. This strikes me as more than a mere technicality. After all, the honor seems reserved for veterans as defined by the relevant laws. As such, being Confederate veterans would seem to no more qualify the men than being a veteran of the German or Japanese army in WWII would qualify someone who moved to Florida and ended up doing great things for the state. There is also the moral argument about enrolling people who fought against the United States. Fighting in defense of slavery and against the lawful government of the United States would seem to be morally problematic in regard to the veteran part of the honor.

One counter to the legal argument is that Confederate soldiers were granted (mostly symbolic) pensions about 100 years after the end of the Civil War. Confederate veterans can also be buried in a special Confederate section of Arlington National Cemetery. These facts do open the door to a legal and moral argument. In regard to the legal argument, it could be contended that Confederate veterans have been treated, in some other ways, as United States veterans. As such, one might argue, this should be extended to the Veterans’ Hall of Fame.

The obvious response is that these concessions to the Confederate veterans do not suffice to classify Confederate veterans as veterans of the United States. As such, they would not be qualified. There is also the moral counter that soldiers who fought against the United States should not be honored as veterans of the United States. After all, one would not honor veterans of other militaries that have fought against the United States even if they ended up doing great things for Florida.

It could also be argued that since the states that made up the Confederacy re-joined the United States, the veterans of the Confederacy would, as citizens, become United States veterans. Of course, the same logic would seem to apply to parts of the United States that were assimilated from other nations, such as Mexico, the lands of the Iroquois, the lands of the Apache, and so on. As such, Sitting Bull would qualify as a veteran under this reasoning. Perhaps this could be countered by contending that the South left and then rejoined, so it is not becoming part of the United States that has the desired effect but rejoining after a rebellion.

Another possible argument is to contend that the Veterans’ Hall of Fame is a Florida hall and, as such, just requires that the nominees were Florida veterans. In the Civil War, units were, in general, connected to a specific state (such as the 1st Maine). As such, if the men in question served in a Florida unit that fought against the United States, they would be Florida veterans but not United States veterans. Using this option would, of course, require that the requirements not include that a nominee be a veteran of the United States military, and presumably the hall could not be connected to the United States VA, since that agency is responsible only for veterans of the United States armed forces and not veterans who served other nations.

In regard to the moral concerns of honoring, as veterans, men who fought against the United States and in defense of slavery, it could be claimed that the war was not about slavery. The obvious problem with this is that the war was, in fact, fought to preserve slavery. The southern states made this abundantly clear. Alexander Stephens, vice president of the Confederacy, gave his infamous Cornerstone Speech and made this quite clear when he said “Our new Government is founded upon exactly the opposite ideas; its foundations are laid, its cornerstone rests, upon the great truth that the negro is not equal to the white man; that slavery, subordination to the superior race, is his natural and normal condition.”

It could, of course, be argued that not every soldier fighting for the South was fighting to defend slavery. After all, just like today, most people fighting in wars are not the people who set policy or benefit from these policies. These men could have gone to war not to protect the institution of slavery, but because they were duped by the slave holders. Or because they wanted to defend their state from “northern aggression.” Or some other morally acceptable reason. That is, it could be claimed that these men were fighting for something other than the explicit purpose of the Confederacy, namely the preservation of slavery. Since this is not impossible, it could be claimed that the men should be given the benefit of the doubt and be honored for fighting against the United States and then doing significant things for Florida.

Given how the Trump regime is re-embracing the Confederacy, it would not be surprising if this matter were re-considered in Florida. It would serve as a distraction from whatever the administration is up to and would please the white supremacists and lovers of the Confederacy in the base.