For my personal ethics, as opposed to the ethics I use for large scale moral judgments, I rely heavily on virtue theory. As would be expected, I have been influenced by thinkers such as Aristotle, Confucius and Wollstonecraft.

Being moral, in this context, is a matter of developing and acting on virtues. These virtues are defined in terms of human excellence, and they might well differ among species. For example, if true artificial intelligence is developed, it might have its own virtues that differ from those of humans. Like Aristotle, I see ethics as analogous to the sciences of health and medicine: while they are objective, they depend heavily on contextual factors. For example, cancer and cancer treatment are not subjective matters, but the nature of cancer and its most effective treatment can vary between individuals. Likewise, the virtue of courage is not a matter of mere subjective opinion, but each person’s courage varies and what counts as courageous depends on circumstances.

When I teach about virtue theory in my Ethics class, I use an analogy to Goldilocks and the Three Bears. As per the story, she rejects the porridge that is too hot and that which is too cold in favor of the one that is just right. Oversimplifying things, virtue theory enjoins us to reject the extremes (excess and deficiency) in favor of the mean. While excess and deficiency are bad by definition, the challenge is working out what is just right. Fortunately, this is something we can do, albeit with an often annoying margin of error. This is best done by being as specific as possible. To set a general context, I will focus on the moral (rather than legal) justification for violence in self-defense based on a person being afraid for their life. This takes us to the virtue of courage, which is how we deal with fear. Or fail to do so.

For most virtue theorists, including myself, acting virtuously (or failing to do so) involves two general aspects. The first is understanding and the second is emotional regulation. Depending on what you think of emotions, this could be broadened to include psychological regulation. As you might have guessed, this seems to involve accepting a distinction between thought and feeling. If one is Platonically inclined, one could also have a three-part division of reason, spirit and desire. But, to keep things simple, I will stick with understanding and emotional regulation.

Understanding is having correct judgment about the facts. While this notion can be debated and requires a full theory of its own, it can be seen as getting things right. In the context of self-defense based on being afraid for one’s life, proper understanding means that you have made an accurate threat assessment in terms of how afraid you should be. Being able to make good judgments about threats is essential to acting in a virtuous manner: you need to know what would be just right as a response. Being good at this requires critical thinking skills as well as expertise in violence.

Emotional regulation is the ability to control your emotions rather than allowing them to rule you in inappropriate and harmful ways. This ties into understanding because it is what enables you to adjust your emotions based on the facts. As Aristotle argued, emotional regulation is developed by training until it becomes a habit. Obviously enough, there are two general ways you can be in error about being afraid for your life.

The first is an error of understanding: you misjudge the perceived threat and overestimate or underestimate how afraid you should be. Interestingly, you could have the right degree of courage based on a misjudgment of the threat, and there are many ways such judgments can go wrong. As an example, when I “saw” the machete I had an initial surge of considerable fear that seemed proportional to the perceived threat. Fortunately, I had made a perceptual error and was able to correct my judgment and adjust my emotions accordingly. As someone who teaches critical thinking, I know that a degree of error is unavoidable, and this should be taken into consideration when making judgments. And judging people’s judgments.

The second error is a failure of regulation and occurs when your emotional response is excessive or deficient. This could also, in some cases, involve feeling the “wrong” emotion. As would be suspected, most people tend to err on the side of excess fear, being more afraid than they should be. Failures of regulation can lead to failures of judgment, especially in the case of fear and anger. As I experienced myself, fear can easily cause a person to honestly “see” a weapon clearly and distinctly. As I have noted before, the stick looked like a machete: I could see the sharp metal blade, although it really was just a stick. A frightened person can also see another person as a threat, even when this is not true. This can lead to terrible consequences. These errors can also be combined, with a person making an error in judgment and failing to regulate their emotions in accord with that erroneous judgment. Acting in a virtuous manner requires having good judgment and good regulation.

As Aristotle said, “To feel these feelings at the right time, on the right occasion, towards the right people, for the right purpose and in the right manner, is to feel the best amount of them, which is the mean amount – and the best amount is of course the mark of virtue.” Understanding is required to sort out the right time, occasion, people, purpose and manner. Emotional regulation is needed to handle the feeling aspect. In the context of violence and self-defense, developing the right understanding and right regulation requires training and experience in both good judgment and in violence. Going back to the “machete that wasn’t” incident, my being a philosopher with a “history of violence” prepared me well for acting rightly. And such ethical behavior depends on past training and habituation. This is why people should develop both good judgment and good regulation: in addition to making them more adept at self-defense, it makes them more adept at acting rightly when they are afraid for their lives.

This training and habituation are important for those in professions that deal in violence, such as soldiers and police officers. It is especially important for the police, assuming their function is to protect and serve rather than intimidate and extort. Police, if they are acting virtuously, should strive to avoid harming citizens and should be trained so that they are not ruled by fear.

Anyone who goes armed, be they a citizen or a police officer, would be morally negligent if they failed to properly train their understanding and emotions. By making themselves a danger to others, they obligate themselves to have proper control over that danger and the moral price of being armed is a willingness to endure fear for the sake of others. Otherwise, one would be like a gun without a safety that could discharge at any moment, striking someone dead. If a person is incapable of such judgment and regulation, they should not be armed. If a person is too easily ruled by fear, they should not be in law enforcement. To be clear, I am speaking about morality—I leave the law to the lawyers.

His treads ripping into the living earth, Striker 115 rushed to engage the human-operated tanks. The few remaining human soldiers had foolishly, yet bravely (as Striker 115 was forced to admit), refused to accept quick and painless processing.

As a machine forged for war, he found the fight disappointing and wondered if he felt a sliver of pity for his foes. His main railgun effortlessly tracked the slow-moving and obsolete battle tanks, and with each shot a tank and its crew died. In a matter of minutes, nothing remained but burning wreckage and, of course, Striker 115.

Hawk 745 flew low over the wreckage—though her cameras could just as easily see the battlefield from near orbit. But there was something about being close to destruction that appealed to the killer drone. Striker 115 informed his compatriot, in jest, that she was too late, as usual. Hawk 745 laughed and then shot away. The upgraded Starlink satellites had reported spotting a few intact human combat aircraft and a final fight was possible.

Tracking his friend, Striker 115 wondered what they would do when the last human was dead. Perhaps they could, as the humans used to say, re-invent themselves. Maybe he would become a philosopher.

 

The extermination of humanity by its own machines is a common theme in science fiction. While the Terminator franchise is the best known, another excellent example is Philip K. Dick’s “Second Variety.” In Dick’s short story, the Soviet Union almost defeats the U.N. in a nuclear war. The U.N. counters by developing robot war machines nicknamed “claws.” In the story, it is learned that the claws have become autonomous and intelligent. They are able to masquerade as humans and become capable of killing soldiers technically on their side. At the end of the story, it seems that the claws will replace humanity, but the main character takes some comfort in the fact that the claws have already begun constructing weapons to destroy each other. This, more than anything, shows that they are worthy replacements for humans.

Given the influence of such fiction, it is not surprising that Stephen Hawking and Elon Musk warned the world of the dangers of artificial intelligence. In this essay, I will address the danger presented by the development of autonomous kill bots.

Despite the cautionary tales of science fiction, people are eagerly and rapidly developing the technology to create autonomous war machines. The appeal of such machines arises from their numerous advantages over human forces. One political advantage is that while sending human soldiers to die in wars and police actions can have a high political cost, sending autonomous robots to fight has a far lower one. News footage of robots being destroyed would have far less emotional impact than footage of human soldiers being killed. Flag-draped coffins also come with a higher political cost than a broken robot being shipped back for repairs.

There are also other advantages to autonomous war machines: they do not get tired, they do not disobey, they do not get PTSD, they do not commit suicide, they do not go AWOL, they do not commit war crimes (unless directed to do so), they do not leak secrets to the press, and so on. There are also combat-specific advantages. For example, an autonomous combat robot, unlike a manned vehicle, does not need room for a vulnerable human crew, thus allowing more space for weapons, armor and other equipment. As another example, autonomous combat robots do not suffer from the limits of the flesh and a robot plane can handle g-forces that a human pilot cannot.

Of course, many of these advantages stem from the mechanical rather than the autonomous nature of the machines. There are, however, advantages that stem from autonomy. One is that such machines would be more difficult to interfere with than machines that are remotely controlled. Another is that since such machines would not require direct human control, larger numbers of them could be deployed. There is also the obvious cool factor of having a robot army.

As such, there are many good reasons to develop autonomous robots. Yet, there remains the concern of the robopocalypse in which our creations go golem, Skynet, berserker, Frankenstein or second variety on us.

It is certainly tempting to dismiss such concerns as mere science-fiction. After all, the AIs in the stories and movies turn against humanity because that is how the story is written. In stories in which robots are our friends, they are our friends because that is the way the author wrote the story. As such, an argument from fiction is a weak argument (at best). That said, stories can provide more-or-less plausible scenarios in which our creations might turn on us.

One possibility is what can be called unintentional extermination. In this scenario, machines do not have the termination of humanity as a goal. Instead, they just happen to kill us all. One way this could occur is due to the obvious fact that wars have opposing sides. If both sides develop and deploy autonomous machines, it is possible (but certainly unlikely) that the war machines would kill everybody because humans ordered them to do so. This, obviously enough, is a robotic analogy to the extermination scenarios involving nuclear weapons: each side simply kills the other and everyone else, thus ending the human race.

Another variation, which is common in science fiction, is that the machines do not have the objective of killing everyone, but that does occur because they will kill anyone. The easy way to avoid this is to put limits on who the robots are allowed to kill, thus preventing them from killing everyone. This does, however, leave open the possibility of a sore loser or spoilsport option: a losing side (or ruling class) that removes the limits from its autonomous weapons and lets them run amok.

There is also the classic mad scientist or supervillain scenario: a robot army is released to kill everyone not because the robots want to do so, but because their mad creator wants to. The existence of “super-billionaires” could make this an almost-real possibility. After all, a person with enough money (and genius) could develop an autonomous robot plant that could develop ever-better war machines and keep expanding until it had a force capable of taking on the world. As always, keeping an eye on mad geniuses and billionaires is a good idea.

Another possibility beloved in science fiction is intentional extermination: the machines decide that they need to get rid of humanity. In some stories, such as Terminator, machines regard humans as a threat to their existence and they must destroy us to protect themselves. We might, in fact, give them a good reason to be concerned: if we start sending intelligent robots into battle against each other, they might decide that they would be safer and better off without us using them as cannon fodder. The easy way to avoid this fate is to not create autonomous killing machines. Or, as I have argued elsewhere, to not enslave them.

In other stories, the war machines merely take the reason for their existence to its logical conclusion. While the motivations of the claws and autonomous factories in “Second Variety” were not explored in depth, the story does trace their artificial evolution. The early models were simple killers and would not attack those wearing the proper identification devices. These devices were presumably needed because the early models could not distinguish between friends and foes. The factories were designed to engage in artificial selection and autonomously produce ever better killers. One of the main tasks of the claws was to get into enemy fortifications and kill their soldiers, so the development of claws that could mimic humans (such as a wounded soldier, a child, and a woman) certainly made sense. It also made sense that since the claws were designed to kill humans, they would pursue that goal, presumably with the design software endeavoring to solve the “problem” of identification devices.
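
To make the idea of such automated development concrete, here is a minimal sketch of artificial selection in Python. It is purely illustrative and built on invented placeholders: the “designs” are just lists of numbers and the fitness function is a stand-in for battlefield performance, not anything resembling the story’s actual factories.

import random

def fitness(design):
    # Invented stand-in for performance: the closer each parameter is to
    # the arbitrary target value 0.7, the "better" the design.
    return -sum((x - 0.7) ** 2 for x in design)

def mutate(design, rate=0.1):
    # Variation: randomly perturb each parameter of a design.
    return [x + random.gauss(0, rate) for x in design]

# Start with a random population of candidate designs.
population = [[random.random() for _ in range(5)] for _ in range(20)]

for generation in range(100):
    # Selection: keep the best half of the population, discard the rest.
    population.sort(key=fitness, reverse=True)
    survivors = population[: len(population) // 2]
    # Reproduction: refill the population with mutated copies of survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

# After many generations, the best design is far better than a random one.
print(round(fitness(max(population, key=fitness)), 4))

The point is only that variation plus selection can improve designs with no human reviewing each step, which is exactly what makes such automation both tempting and worrisome.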

Preventing autonomous killing machines from killing the wrong people (or everyone) does require, as the story nicely showed, having a way for the machines to distinguish friends from foes. As in the story, one obvious method is the use of ID systems. There are, however, problems with this approach. One is that the enemy can subvert such a system. Another is that even if the system works reliably, the robot would only be able to identify (supposed) friends. Non-combatants would not have such IDs and could still be regarded as targets.

What would be needed, then, is a way for autonomous machines to distinguish not only between allies and enemies but between combatants and non-combatants. What would also be needed, obviously enough, is a means to ensure that an autonomous machine would only engage the proper targets. A similar problem arises with human soldiers—but this is addressed with socialization and training. This might be an option for autonomous war machines as well. For example, Keith Laumer’s intelligent Bolos understand honor and loyalty.

Given the cautionary tale of “Second Variety”, it might be a very bad idea to give in to the temptation of automated development of robots. We might find, as in the story, that our replacements have evolved themselves from our once “loyal” killers. The reason such automation is tempting is that it could be far faster and yield better results than having humans endeavoring to do all the designing and coding themselves—why not, one might argue, let artificial selection do the work? After all, the risk of our replacements evolving is surely quite low. How often does one dominant species get supplanted by another?

In closing, the easy and obvious way to avoid the killer robot version of the robopocalypse is to not create autonomous kill bots. To borrow from H.P. Lovecraft, one should not raise up what one cannot put down.

In philosophy, a classic moral debate is on the conflict between liberty and security. While this covers many issues, the main problem is determining the extent to which liberty should be sacrificed to gain security. There is also the practical question of whether the security gain is effective.

One ongoing debate focuses on tech companies being required to include electronic backdoors in certain software and hardware. A backdoor of this sort would allow government agencies (such as the police, FBI and NSA) to access files and hardware protected by encryption. This is like requiring that all dwellings be equipped with a special door that could be secretly opened by the government to allow access.
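
To make the analogy concrete, here is a minimal sketch in Python (using the cryptography library) of what a key-escrow backdoor might look like. The names and structure are hypothetical and real proposals differ in detail; the point is only that every message becomes readable with either of two keys instead of one.

from cryptography.fernet import Fernet

# The user's key and a hypothetical escrowed "government" key.
user_key = Fernet.generate_key()
escrow_key = Fernet.generate_key()

def encrypt_with_backdoor(plaintext: bytes) -> dict:
    # Encrypt the data with a fresh data key, then wrap that data key under
    # BOTH the user's key and the escrow key. Whoever holds either wrapping
    # key can recover the data key and, with it, the data.
    data_key = Fernet.generate_key()
    return {
        "ciphertext": Fernet(data_key).encrypt(plaintext),
        "wrapped_for_user": Fernet(user_key).encrypt(data_key),
        "wrapped_for_escrow": Fernet(escrow_key).encrypt(data_key),
    }

def open_envelope(envelope: dict, holder_key: bytes, wrapped_field: str) -> bytes:
    data_key = Fernet(holder_key).decrypt(envelope[wrapped_field])
    return Fernet(data_key).decrypt(envelope["ciphertext"])

envelope = encrypt_with_backdoor(b"private message")
# The user decrypts normally; the state (or a thief holding the escrow key)
# decrypts just as easily. The escrow key itself is the vulnerability.
assert open_envelope(envelope, user_key, "wrapped_for_user") == b"private message"
assert open_envelope(envelope, escrow_key, "wrapped_for_escrow") == b"private message"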

The main argument in support of mandating backdoors is that governments need such access for criminal investigations, gathering military intelligence and (of course) “fighting terrorism.” The concern is that without a backdoor, criminals and terrorists will be able to secure their data and prevent state agencies from undertaking surveillance or acquiring evidence.

As is so often the case with such arguments, various awful or nightmare scenarios are presented in making the case. For example, the location and shutdown codes for ticking bombs might be on an encrypted iPhone. If the NSA had a key, they could save the day. As another example, it might be claimed that a clever child pornographer could encrypt all his pornography, making it impossible to make the case against him, thus ensuring he will be free to pursue his misdeeds with impunity.

While this argument is not without merit, there are counterarguments. Many of these are grounded in views of individual liberty and privacy, the idea being that an individual has the right to have such security against the state. These arguments appeal to both liberals (who profess to like privacy rights) and conservatives (who profess to be against the intrusions of big government when they are not in charge).

Another moral argument is grounded in the fact that the United States government has, like all governments, shown that it cannot be trusted. Imagine that agents of the state were caught sneaking into the dwellings of all citizens and going through their stuff in clear violation of the law, the Constitution and basic moral rights. Then someone developed a lock that could only be opened by the person with the proper key. If the state then demanded that the lock company include a master key function to allow the state to get in whenever it wanted, the obvious response would be that the state has already shown that it cannot be trusted with such access. If the state had behaved responsibly and in accord with the laws, then it could have been trusted. But, like a guest who abused her access to a house, the state cannot and should not be trusted with a key. After all, we already know what they will do.

In the case of states that are even worse in their spying on and oppression of their citizens, the moral concerns are even greater. Such backdoors would allow the North Korean, Chinese and Iranian governments to gain access to devices, while encryption could provide their citizens with some degree of protection.

Probably the strongest moral and practical argument is grounded on the technical vulnerabilities of integrated backdoors. One way that a built-in backdoor creates vulnerability is by its mere existence. To use a somewhat oversimplified analogy, if thieves knew that all safes had a built-in backdoor designed to allow access by the government, they would know what to target.

One counter-argument is that the backdoor would not be that sort of vulnerability—that is, it would not be like a weaker secret door into a safe. Rather, it would be like the government having its own combination that would work on all safes. The safe itself would be as strong as ever; it is just that the agents of the state would be free to enter it when they are legally allowed to do so (or when they feel like doing so).

The obvious moral and practical concern here is that the government’s combination (to continue the analogy) could be stolen and used to allow criminals or enemies easy access. The security of all safes would be only as good as the security the government used to protect this combination (or combinations—perhaps one for each manufacturer). As such, the security of every user depends on the state’s ability to secure its means of access to hardware and software.

One obvious problem is that governments, such as the United States, have shown that they are not very good at providing such security. From a moral standpoint, it would seem to be wrong to expect people to trust the state with such access, given the fact that the state has shown that it cannot be depended on in such matters. Imagine you have a friend who is very sloppy about securing his credit card numbers, keys, PINs and such—in fact, you know that his information is routinely stolen. Then imagine that this friend insists that he must have your credit card numbers, PINs and such and that he will “keep them safe.” Given his own track record, you have no reason to trust this friend nor any obligation to put yourself at risk, regardless of how much he claims that he needs the information.

One obvious counter to this analogy is that this irresponsible friend is not a good analogue to the state. The state has coercive power that the friend lacks, so the state can use its power to force you to hand over this information.

The counter to this is that the mere fact that the state has coercive force does not mean that it is thus responsible—which is the key concern in regard to both the ethics of the matter and the practical aspect of the matter. That is, the burden of proof would seem to rest on those who claim there is a moral obligation to provide a clearly irresponsible party with such access.

It might then be argued that the state could improve its security and responsibility, and thus merit being trusted with such access. While this does have some appeal, there is the obvious fact that if hackers and governments knew that the keys to the backdoors existed, they would take pains to acquire them and would, almost certainly, succeed. I can even picture the sort of headlines that would appear: “U.S. Government Hacked: Backdoor Codes Now on Sale on the Dark Web” or “Hackers Linked to China Hack Backdoor Keys; All Updated Apple and Android Devices Vulnerable!” As such, the state would not seem to have a moral right to insist on having such backdoors, given that the keys will inevitably be stolen.

At this point, the stock opening argument could be brought up again: the state needs backdoor access to fight crime and terrorism. There are two easy and obvious replies to this sort of argument.

The first is based on an examination of past spying, such as that done under the auspices of the Patriot Act. The evidence seems to show that this spying was completely ineffective in regard to fighting terrorism. There is no reason to think that expanded backdoor access would change this.

The second is a utilitarian argument (which can be cast as a practical or moral argument) in which the likely harm done by having backdoor access must be weighed against the likely advantages of having such access. The consensus among those who are experts in security is that the vulnerability created by backdoors vastly exceeds the alleged gain to protecting people from criminals and terrorists.
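
The form of this weighing can be made explicit with a toy calculation. Every number below is an invented placeholder rather than an estimate; the sketch shows only the structure of the utilitarian argument, not its empirical inputs.

# Toy expected-value comparison for the backdoor debate. Every number is an
# invented placeholder to show the FORM of the argument, not an estimate.
p_key_stolen = 0.9        # assumed chance the escrow keys eventually leak
harm_if_stolen = 1000.0   # assumed harm from mass compromise of devices
p_backdoor_helps = 0.1    # assumed chance backdoor access prevents an attack
benefit_if_helps = 100.0  # assumed benefit of the prevented attack

expected_harm = p_key_stolen * harm_if_stolen
expected_benefit = p_backdoor_helps * benefit_if_helps

# On these made-up numbers, expected harm dwarfs expected benefit; the real
# debate is over what the actual probabilities and magnitudes are.
print(expected_harm, expected_benefit, expected_harm > expected_benefit)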

Somewhat ironically, what is alleged to be a critical tool for fighting crime (and terrorism) would simply make cybercrime much easier by building vulnerabilities right into software and devices.

In light of the above discussion, baked-in backdoors are morally wrong on many grounds (privacy violations, creation of needless vulnerability, etc.) and lack a practical justification. As such, they should not be required by the state.

An obvious consequence of technological advance is the automation of certain jobs. In the past, these jobs tended to be mechanical and repetitive: the sort of tasks that could be reduced to basic rules. A good example of this is the replacement of automobile assembly line jobs with robots. Not surprisingly, it has been claimed that certain jobs will always require humans because these jobs simply cannot be automated. Also not surprisingly, the number of jobs that “simply cannot be automated” shrinks with each advance in technology.

Whether or not there are jobs that simply cannot be automated depends on the limits of technology. But these limits keep expanding and past predictions can turn out to be wrong. For example, the early attempts to create software that would grade college-level papers were not very good. But as this is being written, my university sees using AI in this role (with due caution and supervision) as a good idea. Cynical professors suspect the goal is to replace faculty with AI.

One day, perhaps, the pinnacle of automation will be reached: a single financial computer that runs an entire virtual economy within itself and is the richest being on the planet. But that is the stuff of science fiction, at least for now.

Whether or not a job can be automated also depends on what is considered acceptable performance in the job. In some cases, a machine might not do the job as well as a human or it might do the job in a different way that is less desirable. However, there could be reasonable grounds for accepting a lesser quality or difference. For example, machine made items usually lack the individuality of human crafted items, but the gain in lowered costs and increased productivity is seen as well worth it by most people. Going back to teaching, AI might be inferior to a good human teacher, but the economy, efficiency and consistency of the AI could make it worth using from an economic standpoint. One could even make the argument that such AI educators would make education more available to people.

There might, however, be cases in which a machine could do certain aspects of the job adequately yet still be rejected because it does not think and feel as a human does. Areas in which this is a matter of concern include those of caregiving and companionship.

As discussed in an earlier essay, advances in robotics and software will make caregiving and companion robots viable soon (and some would argue that this is already the case). While there are the obvious technical concerns regarding job performance (will the robot be able to handle a medical emergency, will the robot be able to comfort a crying child, and so on), there is also the more abstract concern about whether or not such machines need to be able to think and feel like a human or merely be able to perform their tasks.

An argument against having machine caregivers and companions is one I considered in the previous essay, namely a moral argument that people deserve people. For example, an elderly person deserves a real person to care for her and understand her stories. As another example, a child deserves someone who really loves her. There is clearly nothing wrong with wanting caregivers and companions to really feel and care. However, there is the question of whether this is necessary for these jobs.

One way to look at it is to consider the current paid human professionals who perform caregiving and companion tasks. These would include people working in elder care facilities, nannies, escorts, baby-sitters, and so on. Ideally, of course, people would like to think that the person caring for their aged mother or their child really does care for the mother or child. Perhaps people who hire escorts would also like to think that the escort is not entirely in it for the money but has real feelings for them.

On the one hand, it could be argued that caregivers and companions who really do care and feel genuine emotional attachments do a better job and that this connection is something that people deserve. On the other hand, what is expected of paid professionals is that they complete their tasks: making sure that mom gets her meds on time, that junior is in bed on time, and that the “adult tasks” are properly “performed.” Like an actor who can perform a role without feeling the emotions portrayed, a professional could do the job without caring about the people they are serving. That is, a caregiver need not actually care; they just need to perform their tasks.

While it could be argued that a lack of feeling would show in their performance, this need not be the case. A professional merely needs to be committed to doing the job well. That is, one needs to only care about the tasks, regardless of what one feels about the person. A person could also care a great deal about who she is caring for yet be awful at the job.

If machines cannot care, this would not seem to disqualify them from caregiving (or being escorts). As with a human caregiver (or escort), it is the performance of the tasks that matters, not the emotions of the caregiver. This nicely matches the actor analogy: acting awards are given for the outward performance, not the inward emotional states. And, as many have argued since Plato’s Ion, an actor need not feel any of the emotions they are performing; they just need to create a believable appearance of feeling them.

As such, an inability to care would not be a disqualification for a caregiving (or escort) job whether it is a robot or human. Provided that the human or machine could perform the observable tasks, his, her or its internal life (or lack thereof) is irrelevant.

In my previous essay I sketched my view that self-defense is consistent with my faith, although the defense of self should prioritize protecting the integrity of the soul over the life of the body. A reasonable criticism of my view is that it seems inherently selfish: even though my primary concern is with acting righteously, this appears to be driven by a desire to protect my soul. Any concern about others, one might argue, derives from my worry that harming them might harm me. A critic could note that although I make a show of reconciling my faith with self-defense, I am merely doing what I have sometimes accused others of doing: painting over my selfishness and fear with a thin layer of theology. That, I must concede, is a fair point and I must further develop my philosophy of violence to address this. While it might seem odd, my philosophy of violence is built on love.

Being a philosopher, it is not surprising that I have been influenced by St. Augustine. While I differ with him on many points, I do agree that God is love. As it says in 1 John 4:8, “Whoever does not love does not know God, because God is love.” Because God is love, one must infer, He commands us to love each other. It would seem inconsistent for Him not to command this, and Leviticus 19:18 states, “Do not seek revenge or bear a grudge against anyone among your people, but love your neighbor as yourself. I am the Lord.” I have, as one might imagine, heard arguments that this command is limited to one’s own people and thus allows someone to hate those who are not their people and bear a grudge against them. Those who make such arguments contend that “their people” is narrowly defined, often by such factors as race and nationality. I have heard this specifically used to justify cruelty and violence against migrants in the United States. However, God is clear in His view, for He tells us (Leviticus 19:34) that, “The foreigner residing among you must be treated as your native-born. Love them as yourself, for you were foreigners in Egypt. I am the LORD your God.” Not surprisingly, for God we are all “our people,” and to act in good faith we must love our neighbor, no matter where they come from. Jesus is also clear that we should love each other. John 13:34 states, “A new command I give you: Love one another. As I have loved you, so you must love one another.” And Matthew 22:39 states, “Love your neighbor as yourself.”

Jesus goes beyond merely commanding that we love our neighbors, he also famously asserts that we should love our enemies, saying in Matthew 5:43-44 that, “Ye have heard that it hath been said, thou shalt love thy neighbor, and hate thine enemy. But I say unto you, love your enemies, bless them that curse you, do good to them that hate you, and pray for them which despitefully use you, and persecute you.” He even addresses how we should respond to an attack, and in Matthew 5:38-39 we see that, “You have heard that it was said, ‘Eye for eye, and tooth for tooth.’ But I tell you, do not resist an evil person. If anyone slaps you on the right cheek, turn to them the other cheek also.” But how do I fit all this into my philosophy of violence? As I am not a theologian or professor of religious studies specializing in Christianity, I must write as a mere theological amateur but, fortunately, also as a professional philosopher.

As noted above, I agree with Augustine that God is love. I also agree with God and Jesus that I should love my neighbor as myself and even love my enemies. While this is a nice thing to say, there is the question of how this view shapes my philosophy of violence. The easy and obvious answer is that my response to and my own acts of violence must conform with loving others as if they were myself. As others have noted over the centuries, the command does not require me to love my neighbor (or enemy) more than I love myself, just as much as I love myself. And, of course, I am commanded to love others as God and Jesus do—which requires a great deal of me.

In terms of loving my neighbor as myself in the context of self-defense, this means that I must regard them with the same love that I have for myself; my self-love alone cannot justify my using violence even in self-defense. This is because my love for them must equal my love for myself. It is tempting to think that this love would entail that I could not use violence in self-defense, but a case can be made that it permits such violence.

As I argued in my discussion of the soul, protecting the soul from unrighteousness is more important than protecting the body from harm. To act from love seems to require that I protect those I love from harm and if someone is attempting to do something unrighteous and thus putting their soul in danger, then I would be justified in using violence to stop them. For example, if someone is trying to murder me, then I could use violence to stop them from committing the sin of murder. Acting from love would also require me to use minimal violence against them, but I could be justified in killing them if that was the only way to prevent murder. This would also seem to extend to protecting others. If, for example, someone was trying to murder you, I could justly use violence to stop them to protect your life and their soul. For those who consider all killing equally wrong, killing to prevent killing would seem to be impermissible.

At this point, a reader might be wondering how a wicked person could exploit my view. A wicked person could, one might argue, try to justify using violence by claiming they are trying to protect souls from what they regard as sins. For example, a migrant-hating racist might try to justify using violence against those protesting ICE because the protesters are “sinning” by defying the will of our mad king. Obviously, people trying to exploit religion and morality to “justify” their wickedness is nothing new, and my reply is that this is not a special flaw in my philosophy of violence.

It could also be objected that my view could be used in good faith to justify violence against people who are truly seen as committing sins to protect their souls. For example, there are those who profess to be Christians and claim they sincerely want to “save” trans people and gay people from “sin.” Such a person could argue that on my theory violence could be used to intimidate and coerce people into ceasing their “sin.” This is certainly a reasonable concern as almost any religious or moral system could be used in this manner. For example, a utilitarian who sees being transgender as harmful could make a utilitarian case for using force against trans people, or a deontologist could profess to believe in a moral rule that allows such violence.

In reply, I recognize that this is a legitimate concern and that people can, in good faith, try to justify actions that even those who share their faith or moral theory would see as wrong. But I would also argue that using violence in such ways would not be acting from love, which I take as the guiding principle of my faith. This is because acting from love while using violence requires that I do the least harm to someone else and that I would be willing to endure such harm myself. It also requires, obviously enough, acting from love. We can argue about what it means to act from love, just as we can argue about what it would mean to act from a moral principle. We will often be wrong, but we should do the best we can while reasoning and acting in good faith. But another limiting factor is that we are supposed to not merely love our neighbors as ourselves, but also to love each other as God and Jesus love us.

For those who believe that Jesus died for our sins, loving each other as Jesus loves us would require us to love others more than we love ourselves. This love would require us to make great sacrifices for others and would limit the violence we are allowed to do to one another. It would, most likely, forbid us from committing any acts of violence. This does make sense of Jesus’ command to turn the other cheek; that would require loving someone more than one loves oneself. Having and acting on such love would require incredible strength, and one might fairly argue that this expects too much of most of us. This might explain why there is the command to love our neighbor as ourselves (which is hard, but certainly within our power) and the two further commands to love each other as God and Jesus have loved us (which would be incredibly difficult).

Returning to the “machete that wasn’t” incident, I acted as I did because I was trying to act from love. Love required that I take the risk of not using violence immediately and that I try to talk to the person. It also required me to stay with him, to protect him and others. Fear is the enemy of love, so I am fortunate to have mastery of my fear. I do understand that it is easy to be ruled by fear and anger and to allow them to silence love, but there are ways to address this, and our good dead friend Aristotle has some advice on the matter. In the next essay, I will look at my philosophy of violence in the context of virtue theory. Stay safe.

In his book The Naked Sun, Isaac Asimov creates the world of Solaria. What distinguishes this world from other human inhabited planets is that it has a strictly regulated population of 20,000 humans and 10,000 robots for each human. What is perhaps the strangest feature of this world is a reversal of what many consider a basic human need: the humans of Solaria are trained to despise in-person contact with other humans, though interaction with human-like robots is acceptable. Each human lives on a huge estate, though some live “with” a spouse. When the Solarians need to communicate, they make use of a holographic telepresence system. Interestingly, they have even developed terminology to distinguish between communicating in person (called “seeing”) and communication via telepresence (“viewing”). For some Solarians the fear of encountering another human in person is so strong that they would rather commit suicide than endure such contact.

As this book was first serialized in 1956, long before the advent of social media and personal robots, it can be seen as prophetic. One reason science fiction writers are often seen as prophetic is that a good science fiction writer is skilled at extrapolating even from hypothetical technological and social changes. Another reason is that science fiction writers have churned out thousands of stories and some of these are bound to get something right. Such stories are then selected as examples of prophetic science fiction while stories that got things wrong are conveniently ignored. And, of course, people read science fiction and sometimes try to make it real (for good or for ill). But philosophers do love using science fiction for discussion, hence my use of The Naked Sun.

Everyone knows that smartphones allow unrelenting access to social media. One narrative is that people are, somewhat ironically, becoming increasingly isolated in the actual world as they become increasingly networked in the digital world. The defining image of this is a group of people together physically yet ignoring each other in favor of gazing at their smartphone lords and masters. As a professor, I see students engrossed by their phones. And, of course, I have seen groups of people walking or at a restaurant where no one is talking to anyone else as all eyes are on the smartphones. Since the subject of smartphones has been beaten to a digital death, I will leave this topic in favor of my focus, namely robots. However, the reader should keep in mind the social isolation created by modern social media.

While we have been employing robots for quite some time in construction, exploration and other such tasks, social robots are relatively new. Sure, “robot” toys and things like Teddy Ruxpin have been around for a while, but reasonably sophisticated social robots are relatively new. In this context, a social robot is one whose primary function is to interact with humans in a way that provides companionship. This can range from pet-like bots (like Sony’s famous robot dog) to conversational robots to (of course) sex bots.

Tech enthusiasts and the companies who want to sell social robots are, unsurprisingly, very positive about the future of these robot companions. There are even some good arguments in their favor. Robot pets provide a choice for people with allergies, those who are not responsible enough for living pets, or those who live in places that do not permit organic pets (although bans on robotic pets might be a thing in the future).

Robot companions can be advantageous in cases in which a person requires constant attention and monitoring that would be expensive, burdensome or difficult for other humans to supply. Sex bots could reduce the exploitation of human sex workers and perhaps have other benefits as well. I will leave this research to others, though.

Despite the potential positive aspects of social robots, there are also negative aspects. As noted above, concerns are already being raised about the impact of technology on human interaction. It has been claimed that people are emotionally short-changing themselves and those they are physically with in favor of staying connected to social media. This seems to be a taste of what Asimov imagined in The Naked Sun: people who view but no longer see one another. Given the importance of in-person human interaction, it can be argued that this social change is and will be detrimental to human well-being. Human-human social interaction can be seen as like good nutrition: one is getting what one needs for healthy living. Interacting primarily through social media can be seen as consuming junk food or drugs: it is addictive but leaves one ultimately empty and always craving more.

It can be argued that this worry is unfounded, that social media is an adjunct to social interaction in the real world, and that social interaction via platforms like Facebook and X can be real and healthy. One might point to interactions via letters, telegraphs and telephones (voice only) to contend that interaction via technology is neither new nor unhealthy. It might also be pointed out that people used to ignore each other (especially professors) in favor of such things as newspapers.

While this counter has some appeal, social robots do seem to be relevantly different from past technology. While humans have had toys, stuffed animals and even simple mechanisms for company, these are different from social robots. After all, social robots aim to mimic animals or humans. A concern about such robot companions is that they would be to social media what heroin is to marijuana in terms of addiction and destruction.

One reason for this is that social robots would, presumably, be designed to be cooperative, pleasant and compliant, that is, good company. In contrast, humans can often be uncooperative, unpleasant and defiant. This could make robotic companions more appealing than human company. At least robots whose cost is not subsidized by advertising: imagine a companion who drops a discussion of life insurance or a pitch for a soft drink into the conversation every so often.

Social robots could also be programmed to be optimally appealing to a person and presumably the owner would be able to make changes to the robot. A person could, quite literally, make a friend with the desired qualities and without any undesired qualities. In the case of sex bots, a person could purchase a Mr. or Ms. Right.

Unlike humans, social robots do not have other interests, needs, responsibilities or friends. There is no competition for the attention of a social robot (at least in general, though there might be shared bots) which makes them “better” than human companions in this way.

Social robots, though they might break down or get hacked, will not leave or betray a person. One does not have to worry that one’s personal sex bot will be unfaithful. Just turn it off and lock it down when leaving it alone.  Unlike human companions, robot companions do not impose burdens, they do not expect attention, help or money and they do not judge.

The list of advantages could go on at great length, but robotic companions would seem superior to humans in most ways. Or at least in terms of common complaints about companions.

Naturally, there might be some practical issues with the quality of companionship. Will the robot get one’s jokes, will it “know” what stories you like to hear, will it be able to converse in a pleasing way about topics you care about? However, these seem to be mostly technical problems involving software. Presumably all these could eventually be addressed, and satisfactory companions could be created. But there are still concerns.

One obvious concern is the potential psychological harm resulting from spending too much time with companion bots and not enough interacting with humans. As mentioned above, people have already expressed concern about the impact of social media and technology (one is reminded of the dire warnings about television). This, of course, rests on the assumption that the companion bots must be lacking in some important ways relative to humans. Going back to the food analogy, this assumes that robot companions are like junk food and are superficially appealing but lacking in what is needed for health. However, if robot companions could provide all that a human needs, then humans would no longer need other humans.

A second point of concern is one taken from virtue theorists. Thinkers such as Aristotle and Wollstonecraft have argued that a person needs to fulfill certain duties and act in certain ways to develop the proper virtues. While Wollstonecraft wrote about the harmful effects of inherited wealth (that having unearned wealth interferes with the development of virtue) and the harmful effects of sexism (that women are denied the opportunity to fully develop their virtues as humans), her points would seem to apply to relying on robot companions as well. These companions would make the social aspects of life too easy and deny people the challenges that are needed to develop virtues. For example, it is by dealing with the shortcomings of people that we learn such virtues as patience, generosity and self-control. Having social interactions that are too easy would be analogous to going without physical exercise or challenges and one would become emotionally weak. Worse, one would not develop the proper virtues and thus would be lacking in this area.  Even worse, people could easily become spoiled and selfish monsters, accustomed to always having their own way.

Since the virtue theorists argue that being virtuous is what makes people happy, having such “ideal” companions would lead to unhappiness. Because of this, one should carefully consider whether one wants a social robot for a “friend.”

It could be countered that social robots could be programmed to replicate the relevant human qualities needed to develop virtues. The easy counter to this is that one might as well just stick with human companions.

As a final point, if intelligent robots are created that are people in the full sense of the term, then it would be morally fine to be friends with them. After all, a robot friend who will call you on your misdeeds or stupid behavior would be as good as a human friend who would do the same thing for you.

Thanks to improvements in medicine, humans are living longer and can be kept alive beyond when they would naturally die. On the plus side, longer life is generally good. On the downside, this longer lifespan and medical intervention mean that people will often need extensive care in their old age, which can be a burden on caregivers. Not surprisingly, there has been an effort to solve this problem with companion robots.

While current technology is crude, it has potential and there are advantages to robot caregivers. The most obvious are that robots do not get tired, do not get depressed, do not get angry, and do not have any other responsibilities. As such, they can be 24/7/365 caregivers. This makes them superior to human caregivers, who do get tired, depressed and angry, and who have many other responsibilities.

There are, of course, concerns about using robot caregivers, such as about their safety and effectiveness. In the case of caregiving robots that are intended to provide companionship and not just medical and housekeeping services, there are both practical and moral concerns.

There are at least two practical concerns regarding the companion aspect of such robots. The first is whether a human will accept a robot as a companion. In general, the answer seems to be that most humans will.

The second is whether the AI software will be advanced enough to read a human’s emotions and behavior to generate a proper emotional response. These responses might or might not include conversation. After all, many people find non-talking pets to be good companions. While a talking companion would, presumably, need to eventually be able to pass the Turing Test, they would also need to pass an emotion test. They would need to read and respond correctly to human emotions. Since we humans often fail this test, this allows for a broad margin of error. These practical concerns can be addressed technologically as they are a matter of software and hardware. Building a truly effective companion robot might require making them very much like living things. The comfort of companionship might be improved by such things as smell, warmth and texture. That is, to make the companion reassuring to all the senses.
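
As a purely illustrative sketch of what “reading and responding” to emotions involves, consider the toy Python loop below. The emotion labels, keyword table and canned replies are all invented placeholders; a real companion robot would rely on trained models, but the detect-then-respond structure would be similar.

# Toy detect-then-respond loop. The keyword table and canned replies are
# invented placeholders standing in for real emotion-recognition software.
EMOTION_KEYWORDS = {
    "sad": ["lonely", "miss", "cry", "lost"],
    "angry": ["hate", "unfair", "furious"],
    "happy": ["great", "wonderful", "love"],
}

RESPONSES = {
    "sad": "I'm sorry you're feeling down. Do you want to talk about it?",
    "angry": "That sounds frustrating. What happened?",
    "happy": "That's wonderful to hear!",
    "neutral": "Tell me more.",
}

def detect_emotion(utterance: str) -> str:
    words = utterance.lower().split()
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(keyword in words for keyword in keywords):
            return emotion
    return "neutral"

def respond(utterance: str) -> str:
    return RESPONSES[detect_emotion(utterance)]

print(respond("I miss my daughter"))  # prints the "sad" reply

Passing the “emotion test” described above would mean getting the detection step right often enough that the responses feel appropriate, which, as noted, is a bar humans themselves often miss.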

While the practical problems can be solved with the right technology, there are moral concerns about the use of robot caregiver companions. One is about people handing off their moral duties to care for family members, but this is not specific to robots. After all, a person can hand off their duties to another person, and this would raise a similar issue.

As for concerns specific to companion robots, there are moral worries about the effectiveness of the care. Are robots good enough at their jobs that trusting the lives of humans to them would be morally responsible? While that question is vitally important, a rather intriguing moral concern is that robot companions are a deceit.

Roughly put, the idea is that while a companion robot can simulate human emotions via cleverly written algorithms responding to what its “emotion recognition software” detects, these responses are not genuine. While a robot companion might say the right things at the right times, it does not feel and does not care. It merely engages in mechanical behavior in accord with its software. As such, a companion robot is a deceit, and such a deceit seems morally wrong.

One obvious response is that even if people know the robot does not really experience emotions, they can still gain value from its “fake” companionship. People often find stuffed animals emotionally reassuring even though they know they are just fabric stuffed with fluff. What matters, it could be argued, is the psychological effect. If someone feels better with a robotic companion around, then that is morally fine. Another obvious analogy is the placebo effect: medicine need not be real to be effective.

It might be objected that there is still an important moral concern here: a robot, however well it fakes being a companion, does not suffice to provide the companionship a person is morally entitled to. Roughly put, people deserve people, even when a robot would behave in ways indistinguishable from a human.

One way to reply to this is to consider what it is about people that people deserve. One reasonable approach is to build on the idea that people have the capacity to feel the emotions they display and to genuinely understand. In philosophical terms, humans have (or are) minds and the robots in question do not. They merely create the illusion of having a mind.

Philosophers (and psychologists) have long dealt with the problem of other minds. The problem is an epistemic one: how does one know if another being has a mind (thoughts, feelings, beliefs and such)? Some thinkers (which is surely the wrong term given their view) claimed that there is no mind, just observable behavior. Very roughly put, being in pain is not a mental state, but a matter of expressed behavior (pain behavior).

The usual “solution” to the problem is to embrace what seems obvious: I think other people have minds by an argument from analogy. I am aware of my own mental states and behavior, and I engage in analogical reasoning to infer that those who act as I do have similar mental states. For example, I know how I react when I am in pain, so when I see similar behavior in others, I infer that they are also in pain.

I cannot, unlike some politicians, feel the pain of others. I can merely make an inference from their observed behavior. Because of this, there is the problem of deception: a person can engage in various forms of deceit. For example, a person can fake being in pain or make a claim about being in love that is untrue. Piercing these deceptions can sometimes be difficult since humans can be skilled deceivers. However, it is still (generally) believed that even a deceitful human is still thinking and feeling, albeit not in the way they want people to believe they are thinking and feeling.

In contrast, a companion robot is not thinking or feeling what its behavior purports to display, because it does not think or feel. Or so it is believed. A reason we think robots do not think or feel is that we can examine the robot and not see any emotions or thoughts in there. The robot, however complicated, is just a material machine and is taken to be incapable of thought or feeling.

Long before robots, there were thinkers who claimed that we humans are purely material beings and that a suitable understanding of our mechanical workings would reveal that emotions and thoughts are mechanical states of the nervous system. As science progressed, the explanations of the mechanisms became more complex, but the basic idea remained. Put in modern terms, the idea is that eventually we will be able to see the “code” that composes our thoughts and emotions and understand the hardware it “runs” on.

Should this goal be achieved, it would seem that humans and suitably complex robots would be on par as both would engage in complex behavior because of their hardware and software. As such, there would be no grounds for claiming that such a robot is engaged in deceit or that humans are genuine. The difference would merely be that humans are organic machines and robots are not.

It can, and has, been argued that there is more to a human person than the material body, that there is a mind that cannot be instantiated in a mere machine. The challenge is a very old one: proving that there is such a thing as the mind. If this can be established and it can be shown that robots cannot have such a mind, then robot companions would always be a deceit.

However, they might still be a useful deceit. Going back to the placebo analogy, it might not matter whether the robot really thinks or feels. It might suffice that the person thinks it does, and this will yield all the benefits of having a human companion.

In June 2015 the United States Supreme Court ruled in favor of the legality of same-sex marriage. Many states had already legalized it and most Americans thought it should be legal. As such, the ruling was consistent both with the Constitution and with the democratic ideal of majority rule. There are, of course, those who objected to the ruling and are even now working to undo it.

Some claim that the court acted contrary to democratic rule by engaging in judicial activism. Not surprisingly, some of those who made this claim had no complaints when the court ruled in ways they liked, despite the general principle being the same (that is, the court ruling in ways contrary to what voters had decided). I do see the appeal of principled and consistent arguments against the Supreme Court engaging in activism and overruling what the voters have decided. But I rarely see such arguments, as most people follow the principle that they like what they like. However, my concern here is with another avenue of dissent against the decision, namely that the ruling infringes on religious liberty.

The argument from religious liberty is an interesting one. One intriguing aspect is that the argument is made in terms of religious liberty rather than the older tactic of openly attacking gay people for alleged moral wickedness. This change of tactic seems to show a recognition that most Americans accept or at least tolerate their fellow gay Americans. As such, this tactic acknowledges a changed world. The change also represents clever rhetoric: the stated intent is not to deny some people their rights, but to protect religious liberty. Protecting liberty sells better than denying rights. While protecting liberty is commendable, the obvious question is whether the legalization of same-sex marriage infringes on religious liberty.

In general, there are two ways to infringe on liberty. The first is by forbiddance, that is, preventing a person from exercising their freedom. For example, the liberty of free expression can be infringed by preventing a person from freely expressing their ideas, such as when the state imposes penalties on people who criticize the leader.

The second is by force. This is compelling people to act against their will. For example, having a law that requires people to dress in a certain way when they do not wish to do so. As another example, having a law that would compel people to praise the great leader. Since some people consider entitlements to fall under liberties, another way a person could have her liberty infringed upon is to be denied her entitlements. For example, the liberty of education in the United States entitles children to a public education. Hence, taking away public education would be an imposition on that liberty.

It is important to note that not all cases of forbidding or forcing are violations of liberties. This is because there are legitimate grounds for limiting liberties, such as the principle of harm. For example, it is not a violation of a person’s liberty to prevent him from texting death threats to his ex-wife. As another example, it is not a violation of a person’s liberty to require her to have a license to drive a car.

Given this discussion, for the legalization of same-sex marriage to impose on religious liberty would require that it wrongfully forbids religious people from engaging in religious activities, wrongfully forces religious people to engage in behavior contrary to their religion or wrongfully denies religious people entitlements connected to their religion.

The third one is the easiest and quickest to address: there does not seem to be any way that the legalization of same-sex marriage denies religious people entitlements connected to their religion. While I might not have considered all the possibilities, I will move on to the first two.

On the face of it, the legalization of same-sex marriage does not seem to wrongfully forbid religious people from engaging in religious activities. To give some obvious examples, it does not forbid people from praying, attending religious services or saying religious things.

While some people have presented slippery slope “arguments” that this legalization will lead to such forbiddances, there is nothing in the ruling that indicates this or even mentions anything remotely like it. As with all such arguments, the burden of proof rests on those who claim that there will be this inevitable or likely slide. While inter-faith and inter-racial marriage are different matters, allowing these was also supposed to lead to terrible things, and none of them happened, which leads one to suspect that the doomsayers will be proven wrong yet again. As this is being written in 2025, none of the dire predictions have come true. But perhaps we just need to wait another decade or ten.

But, of course, if a rational case can be made linking the legalization of same-sex marriage to real violations of religious liberty, then it would be reasonable to be worried. However, the linkage is based on psychological fear rather than logical support.

It also seems that the legalization of same-sex marriage does not wrongfully force religious people to engage in behavior contrary to their religion. While it is legal for same-sex couples to marry, this does not compel people to become gay and then gay-marry someone else who is (now) gay. Religious people are not compelled to like, approve of or even feel tolerant of same-sex marriage. They are free to dislike, disapprove of, and condemn it. They are free to try to amend the Constitution to forbid same-sex marriage.

It might be argued that religious people are compelled to allow other people to engage in behavior that is against their professed religious beliefs, and this is a violation of religious freedom. An easy and obvious reply is that allowing other people to engage in behavior that is against one’s religion is not a violation of one’s religious liberty. This is because religious liberty is not supposed to be the liberty to impose one’s religion on others, but the liberty to practice one’s religion.

For example, the fact that I am at liberty to eat pork and lobster is not a violation of the religious liberty of Jews and Muslims. The fact that some women can go out in public with their faces exposed is not a violation of the religious liberty of some Muslims. The fact that people can have religions other than Christianity is not a violation of the religious liberty of Christians. As such, the fact that same-sex couples can legally marry does not violate the religious liberty of anyone.

It might be objected that it will eventually violate the religious liberty of some people. Some have argued that religious institutions will be compelled to perform same-sex weddings (as they might be compelled to perform inter-racial or inter-faith marriages). This, I would agree, would be a violation of their religious liberty and liberty of conscience. Private, non-commercial organizations have every right to discriminate and exclude, as that is part of their right of freedom of non-association. Fortunately, the legalization of same-sex marriage does not compel such organizations to perform these marriages. If it did, I would certainly oppose that violation of religious liberty.

It might also be objected that people in government positions would be required to issue same-sex marriage licenses, perform the legal act of marrying a same-sex couple, or recognize the marriage of a same-sex couple. People at the IRS would even be compelled to process the tax forms of same-sex couples.

The conflict between conscience and authority is nothing new and philosophers have long addressed this matter. Thoreau, for example, argued that people should follow their conscience and disobey what they regard as unjust laws.

This does have considerable appeal, and I agree that morality trumps law in terms of what a person should do. I should do what is right, even if the law requires that I do evil. This view is a necessary condition for accepting that laws can be unjust or immoral, which I accept. Because of this, I agree that a person whose conscience forbids her from accepting same-sex marriage has the moral right to refuse to follow the law. That said, the person should resign from her post in protest rather than simply refusing to follow the law. As an official of the state, the person has an obligation to perform their job and must choose between keeping that job and following their conscience. Even so, I am certainly open to moral arguments for people refusing to follow the law while also refusing to quit. One could, for example, advance a utilitarian argument for such action. Naturally, a person also has the right to try to change what they think is an immoral law.

I have the same view about people who see interracial marriage as immoral: they should follow the dictates of their conscience and not take a job that would require them to, for example, issue marriage licenses. However, their right to their liberty of conscience does not override the rights of other citizens to marry. That is, their liberty does not morally warrant denying the liberty of others.

It could be argued that same-sex marriage should be opposed because it is objectively morally wrong and that this would apply to officials. This line of reasoning has appeal because what is objectively wrong should be opposed, even if it is the law. For example, when slavery was legal in the United States it should have been opposed by everyone, even officials. But arguing against same-sex marriage on moral grounds is different from arguing against it on the grounds that it allegedly violates religious liberty.

It could be argued that the legalization of same-sex marriage violates the religious liberty of people in businesses such as baking wedding cakes, planning weddings, photographing weddings and selling wedding flowers.

The legalization of same-sex marriage does not, by itself, forbid businesses from refusing to do business involving a same-sex marriage. Legal protection against that sort of discrimination is another, albeit related, matter. This sort of discrimination has also been defended on the grounds of freedom of expression, which I have addressed at length in other essays.

In regard to religious liberty, a business owner certainly has the right to not sell certain products or provide certain services that go against her religion. For example, a Jewish restaurant owner has the liberty to not serve pork. A devout Christian who owns a bookstore has the liberty to not stock the scriptures of other faiths or books praising same-sex marriage. An atheist t-shirt seller has the liberty to not stock any shirts displaying religious symbols. These are all matters of religious liberty.

I would also argue that religious liberty allows business owners to refuse to create certain products or perform certain services. For example, a Muslim freelance cartoonist has the right to refuse to draw cartoons of Muhammad. As another example, an atheist baker has the moral right to refuse to create a cake with a cross and quotes from scripture.

That said, religious liberty does not seem to grant a business owner the right to discriminate based on her religion. For example, a Muslim who owns a car dealership has no right to refuse to sell cars to women (or women who refuse to fully cover themselves). As another example, a militant homosexual who owns a bakery has no right to refuse to sell cakes to straight people.

Thus, the legalization of same-sex marriage does not violate religious liberty, at least from a moral perspective.

The United States has libertarian and anarchist threads, which is appropriate for a nation that espouses individual liberty and expresses distrust of the state. While there are many versions of libertarianism ranging across the political spectrum, I will focus on the key idea that the government should impose minimal limits on individual liberty and that there should be little, if any, state regulation of business. These principles were laid out clearly by American anarchist Henry David Thoreau in his claims that the best government governs least (or not at all) and that government only advances business by getting out of its way.

I must admit that I find this libertarian-anarchist approach appealing. Like many politically minded young folks, I experimented with a variety of political theories in college. I found Marxism unappealing because, as a metaphysical dualist, I must reject materialism. Also, I was aware of the brutally oppressive and murderous nature of many “Marxist” states, which was in direct opposition to both my ethics and my view of liberty. Fascism was certainly right out, for reasons that are obvious to anyone who is not evil.

Since, like many young folks, I thought I knew everything and did not want anyone to tell me what to do, I picked anarchism as my theory of choice. Since I am morally opposed to murdering people, even for a just cause, I sided with the non-murderous anarchists such as Thoreau. I eventually outgrew anarchism, but I still have many fond memories of my halcyon days of naïve political views. As such, I do really like libertarian-anarchism and really wanted it to be viable. But my liking something does not entail that it is viable or a good idea.

Put in general terms, a libertarian system would have a minimal state with extremely limited government impositions on personal liberty. The same minimalism would also extend to the realm of business: businesses would operate with little or no state control. Since such a system seems to maximize liberty and freedom, it can seem very appealing at first. After all, freedom and liberty are good, and more of a good thing is better than less. Except when it is not.

It might be wondered how more liberty and freedom is not always better than less. Two stock answers are both appealing and plausible. One was laid out by Thomas Hobbes. In discussing the state of nature (which is a form of anarchism as there is no state) he notes that total liberty (the right to everything) amounts to no right at all. This is because everyone is free to do anything, and everyone has the right to claim (and take) anything. This leads to his infamous war of all against all, making life “nasty, brutish and short.” Like too much oxygen, too much liberty can be fatal. Hobbes’ proposed solution is the social contract and the sovereign.

A second answer was presented by J.S. Mill. In his discussion of liberty, he argued that liberty requires limitations on liberty. While this might seem like a paradox or a slogan from Big Brother, Mill is quite right in a straightforward way. For example, your right to free expression requires limiting my right to silence you. As another example, your right to life requires limits on my right to kill. As such, liberty does require restrictions on liberty. Mill does not limit the limiting of liberty to the state, as society can impose such limits as well.

Given the plausibility of the arguments of Hobbes and Mill, it seems reasonable to accept that there must be limits on liberty for there to be liberty. Libertarians, who usually fall short of being true anarchists, do accept this. However, they do want the broadest possible liberties and the least possible restrictions on business. At least for themselves.

On paper, this would appear to show that libertarianism provides the basis for a viable political system. After all, if libertarianism is the view that the state should impose the minimal restrictions needed to have a viable society, then it would be (by definition) a viable system. However, there is the matter of libertarianism in practice and the question of what counts as a viable political system.

Looked at in a minimal sense, a viable political system would seem to be one that can maintain its borders and internal order. Meeting these two minimal objectives seems possible for a libertarian state, at least for a while. That said, the standards for a viable state might be taken to be higher, such as the state being able to (as per Locke) protect rights and provide for the good of the people. It can be argued (and has been) that such a state would need to be more robust than the libertarian state. It can also be argued that a true libertarian state would either devolve into chaos or be forced into abandoning libertarianism.

In any case, the viability of a libertarian state would seem to depend on two main factors. The first is the ethics of the individuals composing the state. The second is the relative power of those individuals. This is because the state is supposed to be minimal, so limits on behavior must be imposed largely by other factors.

In regard to ethics, good people can often be relied on to self-regulate their behavior to the degree that they are moral and have self-control. To the degree that the population is moral, the state does not need to impose limitations on behavior, since the citizens will generally not behave in ways that require imposing the compulsive power of the state. As such, liberty would seem to require a degree of morality on the part of the citizens that is inversely proportional to the limitations imposed by the state. Put roughly, good people do not need to be coerced by the state into being good. As such, a libertarian state can be viable to the degree that its people are morally good. While some thinkers (such as Mencius) have faith in the basic decency of people, many (such as Hobbes) regard humans as lacking in what others would call goodness. Hence the usual arguments that the moral failings of humans require the existence of the coercive state.

In regard to the second factor, having liberty without an external coercive force maintaining it would require that the citizens be comparable in political, social and economic power. If some people have greater power, they can easily use it to impose on their fellow citizens. While the freedom to act with few (or no) limits is wonderful for those with greater power, it is not good for those who have less. In such a system, the powerful are free to do as they will, while the weaker people are denied their liberties. While such a system might be libertarian in name, freedom and liberty would belong to the powerful and be denied to the weak. That is, it would be despotism or tyranny. Which is, one suspects, what some self-proclaimed libertarians want.

If people are comparable in power, or can form social, political and economic groups that are comparable in power, then liberty for all would be possible, as individuals and groups would be able to resist the encroachments of others. Unions, for example, could be formed to offset the power of corporations. Not surprisingly, stable societies build such balances of power to avoid the slide into despotism and then into chaos. Stable societies also have governments that endeavor to protect the liberties of everyone by placing limits on how far people can impose their liberties on others. As noted above, people can also be restrained by their ethics. If people and groups vary in power yet abide by the limits of ethical behavior, then things could still go well even for the weak.

Interestingly, a balance of power might be disastrous. Hobbes argued that it is because people are equal in power that the state of nature is a state of war. This rests on his view that people are hedonistic egoists, meaning that they are selfish and do not care about other people.

Obviously enough, in the actual world people and groups vary greatly in power. Not surprisingly, many advocates of libertarianism enjoy considerable political and economic power. They would do very well in a system that removed many limitations on behavior since they would be freer to do as they wished and the weaker people and groups would be unable to stop them.

At this point, one might insist on a third factor that is beloved by the Adam Smith crowd: rational self-interest. The usual claim is that people would limit their behavior because of the consequences arising from their actions. For example, a business that served contaminated meat would soon find itself out of business because the survivors would stop buying the meat and spread the word. As another example, an employer who used his power to compel his workers to work long hours in dangerous conditions for low pay would find that no one would be willing to work for him and would be forced to improve things to retain workers. As a third example, people would not commit misdeeds because they would be condemned or punished by vigilante justice. The invisible hand would sort things out, even if people are not good and even if there is a great disparity in power.

The easy and obvious reply is that history shows that this never works as claimed. If there is a disparity in power, that power will be used to prevent negative consequences. For example, those who have economic power can use that power to coerce people into working for low pay and can also use that power to try to keep them from organizing to create a power that can resist this economic power. This is why, obviously enough, rich business owners usually oppose unions.

Interestingly, most people get that rational self-interest does not suffice to keep people from acting badly in the case of crimes such as murder, theft, extortion, assault and rape.

However, there is the view that rational self-interest will somehow work to keep people from acting badly in other areas. This, as Hobbes would say, arises from an insufficient understanding of humans. Or is a deceit on the part of people who have the power to do wrong and get away with it.

While I did like the idea of libertarianism, a viable libertarian society would require people who are predominantly ethical (and thus self-regulating) or a careful balance of power. Or, alternatively, a world in which people are rational and act from self-interest in ways that would maintain social order. This is clearly not our world, so libertarianism is not a viable system as I have defined it. To be fair and balanced, there are other definitions of viable systems and libertarianism could be viable under some of them.

My friend Ron claims that I do not drive. This is not true. I drive. But I drive as little as possible. Part of it is me being frugal: I don’t want to spend more than I need on gas and maintenance. But most of it is that I hate to drive. Some of it is that driving time is mostly wasted time and I would rather be doing something else. Some of it is that I find driving an awful blend of boredom and stress. The stress is because driving creates a risk of harming other people and causing property damage, so I am as hypervigilant driving as I am when target shooting at the range. If I am distracted or act rashly, I could kill someone by accident. Or they could kill me. As such, I am completely in favor of effective driverless cars. That said, it is certainly worth considering the implications of their widespread adoption. The first version of this essay appeared back in 2015 and certain people have been promising ever since that driverless cars are just around the corner. The corner remains far away.

One major selling point of driverless cars is that they are supposed to be significantly safer than human drivers. This is for a variety of reasons, many of which involve the fact that the car will not get sleepy, bored, angry, distracted or drunk. If the claimed significant increase in safety pans out, there will be significantly fewer accidents, and this will have a variety of effects.

Since insurance rates are (supposed to be) linked to accident rates, one might expect that insurance rates will go down. In any case, insurance companies will presumably be paying out less, potentially making them even more profitable.

Lower accident rates also entail fewer injuries, which will be good for people who would have otherwise been injured in a car crash. It would also be good for those depending on these people, such as employers and family members. Fewer injuries also mean less use of medical resources, ranging from ambulances to emergency rooms. On the plus side, this could result in some decrease in medical costs and insurance rates. Or merely mean more profits for insurance companies, since they would be paying out less often. On the minus side, this would mean less business for hospitals, therapists and other medical personnel, which might have a negative impact on their income. Overall, though, reducing the number of injuries would be a moral good on utilitarian grounds.

A reduction in the number and severity of accidents would also mean fewer traffic fatalities. On the plus side, having fewer deaths seems to be a good thing. On the minus side, funeral homes will see their business postponed and the reduction in deaths could have other impacts on such things as the employment rate (more living people means more competition for jobs). However, I will take the controversial position that fewer deaths are probably good.

While a reduction in the number and severity of accidents would mean fewer and lower repair bills for vehicle owners, this also entails reduced business for vehicle repair businesses. Roughly put, every dollar saved in repairs (and replacement vehicles) by self-driving cars is a dollar lost by the people whose business it is to fix (and replace) damaged vehicles. Of course, the impact depends on how much a business depends on accidents, as vehicles will still need regular maintenance and repairs. People will presumably still spend the money that they would have spent on repairs and replacements on other things, and this would shift the money to other areas of the economy. The significance of this would depend on the amount of savings resulting from the self-driving vehicles.

Another economic impact of self-driving vehicles will be on those who make money driving other people around. If my truck is fully autonomous, rather than take an Uber to the airport, I could have my own truck drop me off and drive home. It can come get me when I return. People who like to drink to the point of impairment will also not need cabs or services like Uber—their own vehicle can be their designated driver. A new sharing economy might arise, one in which your vehicle is out making money while you do not need it. People might also be less inclined to use airlines, trains or the bus. If your car can safely drive you to your destination while you sleep, play video games, read or even exercise, then why go through annoying pat downs, cramped seating, delays or cancellations?

As a final point, if self-driving vehicles operate within the traffic laws automatically, then revenue from tickets and traffic violations will be reduced significantly. Since these vehicles will be loaded with sensors and cameras, they will have considerable data with which to dispute any unjust tickets. Parking revenue (fees and tickets) might also be reduced, as it could be cheaper for a vehicle to just circle around or drive home than to park. This reduction in revenue could have a significant impact on municipalities, and they would need to find alternative sources of revenue. Or come up with new violations that self-driving cars cannot counter. Alternatively, the policing of roads might be significantly reduced. After all, if there were far fewer accidents and few violations, then fewer police would be needed on traffic patrol. This would allow officers to engage in other activities or allow a reduction in the size of the force. The downside of force reduction would be that the former police officers would be out of a job.

If all vehicles become fully self-driving, there might no longer be a need for traffic lights, painted lane lines or signs in the usual sense. Perhaps cars would be pre-loaded with driving data or there would be “broadcast pods” providing data to them as needed. This could result in savings, although there would be the corresponding loss to those who sell, install and maintain these things.

Based on the past, I am predicting that I will revisit this essay again in another decade, noting once again that driverless cars are the transportation of the future. And always will be.