In my previous essay I sketched my view that self-defense is consistent with my faith, although the defense of self should prioritize protecting the integrity of the soul over the life of the body. A reasonable criticism of my view is that it seems inherently selfish: even though my primary concern is with acting righteously, this appears to be driven by a desire to protect my soul. Any concern about others, one might argue, derives from my worry that harming them might harm me. A critic could note that although I make a show of reconciling my faith with self-defense, I am merely doing what I have sometimes accused others of doing: painting over my selfishness and fear with a thin layer of theology. That, I must concede, is a fair point, and I must further develop my philosophy of violence to address it. While it might seem odd, my philosophy of violence is built on love.

As a philosopher, I have, not surprisingly, been influenced by St. Augustine. While I differ with him on many points, I do agree that God is love. As it says in 1 John 4:8, “Whoever does not love does not know God, because God is love.” Because God is love, one must infer, He commands us to love each other. It would seem inconsistent for Him not to command this, and Leviticus 19:18 states, “Do not seek revenge or bear a grudge against anyone among your people, but love your neighbor as yourself. I am the Lord.” I have, as one might imagine, heard arguments that this command is limited to one’s own people and thus allows someone to hate those who are not their people and bear a grudge against them. Those who make such arguments contend that “their people” is narrowly defined, often by such factors as race and nationality. I have heard this specifically used to justify cruelty and violence against migrants in the United States. However, God is clear in His view, for He tells us (Leviticus 19:34) that, “The foreigner residing among you must be treated as your native-born. Love them as yourself, for you were foreigners in Egypt. I am the LORD your God.” Not surprisingly, in God’s view we are all His people, and to act in good faith we must love our neighbor, no matter where they come from. Jesus is also clear that we should love each other. John 13:34 states, “A new command I give you: Love one another. As I have loved you, so you must love one another.” And Matthew 22:39 states, “Love your neighbor as yourself.”

Jesus goes beyond merely commanding that we love our neighbors; he also famously asserts that we should love our enemies, saying in Matthew 5:43–44 that, “Ye have heard that it hath been said, thou shalt love thy neighbor, and hate thine enemy. But I say unto you, love your enemies, bless them that curse you, do good to them that hate you, and pray for them which despitefully use you, and persecute you.” He even addresses how we should respond to an attack, and in Matthew 5:38–39 we see that, “You have heard that it was said, ‘Eye for eye, and tooth for tooth.’ But I tell you, do not resist an evil person. If anyone slaps you on the right cheek, turn to them the other cheek also.” But how do I fit all this into my philosophy of violence? As I am not a theologian or professor of religious studies specializing in Christianity, I must write as a mere theological amateur but, fortunately, also as a professional philosopher.

As noted above, I agree with Augustine that God is love. I also agree with God and Jesus that I should love my neighbor as myself and even love my enemies. While this is a nice thing to say, there is the question of how this view shapes my philosophy of violence. The easy and obvious answer is that my response to and my own acts of violence must conform with loving others as if they were myself. As others have noted over the centuries, the command does not require me to love my neighbor (or enemy) more than I love myself, just as much as I love myself. And, of course, I am commanded to love others as God and Jesus do—which requires a great deal of me.

In terms of loving my neighbor as myself in the context of self-defense, this means that I must regard them with the same love that I have for myself; my self-love cannot alone justify me using violence even in self-defense. This is because my love for them must equal my love for myself. It is tempting to think that this love would entail that I could not use violence in self-defense, but a case can be made that such violence can still be justified.

As I argued in my discussion of the soul, protecting the soul from unrighteousness is more important than protecting the body from harm. To act from love seems to require that I protect those I love from harm, and if someone is attempting to do something unrighteous and thus putting their soul in danger, then I would be justified in using violence to stop them. For example, if someone is trying to murder me, then I could use violence to stop them from committing the sin of murder. Acting from love would also require me to use minimal violence against them, but I could be justified in killing them if that were the only way to prevent murder. This would also seem to extend to protecting others. If, for example, someone were trying to murder you, I could justly use violence to stop them, protecting both your life and their soul. For those who consider all killing equally wrong, however, killing to prevent killing would seem to be impermissible.

At this point, a reader might be wondering how a wicked person might exploit my view. A wicked person could, one might argue, try to justify using violence by claiming they are trying to protect souls from what they regard as sins. For example, a migrant-hating racist might try to justify using violence against those protesting ICE because they are “sinning” by defying the will of our mad king. Obviously, people trying to exploit religion and morality to “justify” their wickedness is nothing new, and my reply is that this is not a special flaw in my philosophy of violence.

It could also be objected that my view could be used in good faith to justify violence against people who are truly seen as committing sins to protect their souls. For example, there are those who profess to be Christians and claim they sincerely want to “save” trans people and gay people from “sin.” Such a person could argue that on my theory violence could be used to intimidate and coerce people into ceasing their “sin.” This is certainly a reasonable concern as almost any religious or moral system could be used in this manner. For example, a utilitarian who sees being transgender as harmful could make a utilitarian case for using force against trans people, or a deontologist could profess to believe in a moral rule that allows such violence.

In reply, I recognize that this is a legitimate concern and people can, in good faith, try to justify actions that even those who share their faith or moral theory would see as wrong. But I would also argue that using violence in such ways would not be acting from love, which I take as the guiding principle of my faith. This is because acting from love while using violence requires that I do the least harm to someone else and that I would be willing to endure such harm myself. It also requires, of course, acting from love. We can, obviously enough, argue about what it means to act from love, just as we can argue about what it would mean to act from a moral principle. We will often be wrong, but we should do the best we can while reasoning and acting in good faith. But another limiting factor is that we are supposed to not merely love our neighbors as ourselves, but also to love each other as God and Jesus love us.

For those who believe that Jesus died for our sins, loving each other as Jesus loves us would require us to love others more than we love ourselves. This love would require us to make great sacrifices for others and would limit the violence we are allowed to do to one another. It would, most likely, forbid us from any acts of violence. This does make sense of Jesus’ command to turn the other cheek; that would require loving someone more than one loves oneself. Having and acting on such love would require incredible strength, and one might fairly argue that this expects too much of most of us. This might explain why there is the command to love our neighbor as ourselves (which is hard, but certainly within our power) and the further commands to love each other as God and Jesus have loved us (which would be incredibly difficult).

Returning to the “machete that wasn’t” incident, I acted as I did because I was trying to act from love. Love required that I take the risk of not using violence immediately and that I try to talk to the person. It also required me to stay with him, to protect him and others. Fear is the enemy of love, so I am fortunate to have mastery of my fear. I do understand that it is easy to be ruled by fear and anger and allow them to silence love, but there are ways to address this, and our good dead friend Aristotle has some advice on the matter. In the next essay, I will look at my philosophy of violence in the context of virtue theory. Stay safe.

In his book The Naked Sun, Isaac Asimov creates the world of Solaria. What distinguishes this world from other human-inhabited planets is that it has a strictly regulated population of 20,000 humans and 10,000 robots for each human. What is perhaps the strangest feature of this world is a reversal of what many consider a basic human need: the humans of Solaria are trained to despise in-person contact with other humans, though interaction with human-like robots is acceptable. Each human lives on a huge estate, though some live “with” a spouse. When the Solarians need to communicate, they make use of a holographic telepresence system. Interestingly, they have even developed terminology to distinguish between communicating in person (called “seeing”) and communication via telepresence (“viewing”). For some Solarians the fear of encountering another human in person is so strong that they would rather commit suicide than endure such contact.

As this book was first serialized in 1956, long before the advent of social media and personal robots, it can be seen as prophetic. One reason science fiction writers are often seen as prophetic is that a good science fiction writer is skilled at extrapolating even from hypothetical technological and social changes. Another reason is that science fiction writers have churned out thousands of stories and some of these are bound to get something right. Such stories are then selected as examples of prophetic science fiction while stories that got things wrong are conveniently ignored. And, of course, people read science fiction and sometimes try to make it real (for good or for ill). But philosophers do love using science fiction for discussion, hence my use of The Naked Sun.

Everyone knows that smartphones allow unrelenting access to social media. One narrative is that people are, somewhat ironically, becoming increasingly isolated in the actual world as they become increasingly networked in the digital world. The defining image of this is a group of people together physically yet ignoring each other in favor of gazing at their smartphone lords and masters. As a professor, I see students engrossed by their phones. And, of course, I have seen groups of people walking or at a restaurant where no one is talking to anyone else as all eyes are on the smartphones. Since the subject of smartphones has been beaten to a digital death, I will leave this topic in favor of my focus here, namely social robots. However, the reader should keep in mind the social isolation created by modern social media.

While we have been employing robots for quite some time in construction, exploration and other such tasks, social robots are relatively new. Sure, “robot” toys and things like Teddy Ruxpin have been around for a while, but reasonably sophisticated social robots are relatively new. In this context, a social robot is one whose primary function is to interact with humans in a way that provides companionship. This can range from pet-like bots (like Sony’s famous robot dog) to conversational robots to (of course) sex bots.

Tech enthusiasts and the companies who want to sell social robots are, unsurprisingly, very positive about the future of these robot companions. There are even some good arguments in their favor. Robot pets provide a choice for people with allergies, those who are not responsible enough for living pets, or those who live in places that do not permit organic pets (although bans on robotic pets might be a thing in the future).

Robot companions can be advantageous in cases in which a person requires constant attention and monitoring that would be expensive, burdensome or difficult for other humans to supply. Sex bots could reduce the exploitation of human sex workers and perhaps have other benefits as well. I will leave this research to others, though.

Despite the potential positive aspects of social robots, there are also negative aspects. As noted above, concerns are already being raised about the impact of technology on human interaction. It has been claimed that people are emotionally short-changing themselves and those they are physically with in favor of staying connected to social media. This seems to be a taste of what Asimov imagined in The Naked Sun: people who view but no longer see one another. Given the importance of human interaction in person, it can be argued that this social change is and will be detrimental to human well-being. Human-human social interactions can be seen as like good nutrition: one is getting what one needs for healthy living. Interacting primarily through social media can be seen as consuming junk food or drugs in that it is addictive but leaves one ultimately empty and always craving more.

It can be argued that this worry is unfounded, that social media is an adjunct to social interaction in the real world, and that social interaction via platforms like Facebook and X can be real and healthy social interaction. One might point to interactions via letters, telegraphs and telephones (voice only) to contend that interaction via technology is neither new nor unhealthy. It might also be pointed out that people used to ignore each other (especially professors) in favor of such things as newspapers.

While this counter has some appeal, social robots do seem to be relevantly different from past technology. While humans have had toys, stuffed animals and even simple mechanisms for company, these are different from social robots. After all, social robots aim to mimic animals or humans. A concern about such robot companions is that they would be to social media what heroin is to marijuana in terms of addiction and destruction.

One reason for this is that social robots would, presumably, be designed to be cooperative, pleasant and compliant, that is, good company. In contrast, humans can often be uncooperative, unpleasant and defiant. This could make robotic companions more appealing than human company, at least for robots whose cost is not subsidized by advertising. Imagine a companion who brings up life insurance or pitches a soft drink every so often.

Social robots could also be programmed to be optimally appealing to a person and presumably the owner would be able to make changes to the robot. A person could, quite literally, make a friend with the desired qualities and without any undesired qualities. In the case of sex bots, a person could purchase a Mr. or Ms. Right.

Unlike humans, social robots do not have other interests, needs, responsibilities or friends. There is no competition for the attention of a social robot (at least in general, though there might be shared bots) which makes them “better” than human companions in this way.

Social robots, though they might break down or get hacked, will not leave or betray a person. One does not have to worry that one’s personal sex bot will be unfaithful; just turn it off and lock it down when leaving it alone. Unlike human companions, robot companions do not impose burdens; they do not expect attention, help or money, and they do not judge.

The list of advantages could go on at great length, but the point is that robotic companions would seem superior to humans in most ways, or at least in terms of common complaints about companions.

Naturally, there might be some practical issues with the quality of companionship. Will the robot get your jokes, will it “know” what stories you like to hear, will it be able to converse in a pleasing way about topics you enjoy? However, these seem mostly technical problems involving software. Presumably all these could eventually be addressed, and satisfactory companions could be created. But there are still concerns.

One obvious concern is the potential psychological harm resulting from spending too much time with companion bots and not enough interacting with humans. As mentioned above, people have already expressed concern about the impact of social media and technology (one is reminded of the dire warnings about television). This, of course, rests on the assumption that the companion bots must be lacking in some important ways relative to humans. Going back to the food analogy, this assumes that robot companions are like junk food and are superficially appealing but lacking in what is needed for health. However, if robot companions could provide all that a human needs, then humans would no longer need other humans.

A second point of concern is one taken from virtue theorists. Thinkers such as Aristotle and Wollstonecraft have argued that a person needs to fulfill certain duties and act in certain ways to develop the proper virtues. While Wollstonecraft wrote about the harmful effects of inherited wealth (that having unearned wealth interferes with the development of virtue) and the harmful effects of sexism (that women are denied the opportunity to fully develop their virtues as humans), her points would seem to apply to relying on robot companions as well. These companions would make the social aspects of life too easy and deny people the challenges that are needed to develop virtues. For example, it is by dealing with the shortcomings of people that we learn such virtues as patience, generosity and self-control. Having social interactions that are too easy would be analogous to going without physical exercise or challenges and one would become emotionally weak. Worse, one would not develop the proper virtues and thus would be lacking in this area.  Even worse, people could easily become spoiled and selfish monsters, accustomed to always having their own way.

Since the virtue theorists argue that being virtuous is what makes people happy, having such “ideal” companions would lead to unhappiness. Because of this, one should carefully consider whether one wants a social robot for a “friend.”

It could be countered that social robots could be programmed to replicate the relevant human qualities needed to develop virtues. The easy counter to this is that one might as well just stick with human companions.

As a final point, if intelligent robots are created that are people in the full sense of the term, then it would be morally fine to be friends with them. After all, a robot friend who will call you on your misdeeds or stupid behavior would be as good as a human friend who would do the same thing for you.