In my previous essays on sexbots I focused on versions that are mere objects. If a sexbot is merely an object, then the morality of having sex with it is the same as having sex with any other object (such as a vibrator or sex doll). As such, a human could do anything to such a sexbot without the sexbot being wronged. This is because such sexbots lack the moral status needed to be wronged. The sexbots of the near future will, barring any sudden and unexpected breakthroughs in AI, still be objects. However, science fiction includes intelligent, human-like robots (androids). Intelligent beings, even artificial ones, would seem likely to be people. In terms of sorting out when a robot should be treated as a person, one test is the Cartesian test. Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind is the ability to use true language. This notion was explicitly applied to machines by Alan Turing in his famous Turing test. The idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the test.

Crudely put, the idea is that if something talks, then it is reasonable to regard it as a person. Descartes was careful to distinguish between what would be mere automated responses and actual talking:


How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.


While Descartes does not deeply explore the moral distinctions between beings that talk (which have minds on his view) and those that merely make noises, it does seem reasonable to take a being that talks as a person and grant it the appropriate moral status. This provides a means to judge whether an advanced sexbot is a person: if the sexbot talks, it is a person. If it is a mere automaton of the sort Descartes envisioned, then it is a thing and would lack moral status.

Having sex with a sexbot that can pass the Cartesian test would seem morally equivalent to having sex with a human person. As such, whether the sexbot freely consented would be morally important. If intelligent robots were constructed as sex toys, this would be the moral equivalent of enslaving humans for the sex trade (which is done). If such sexbots were mistreated, this would be morally on par with mistreating a human person.

It might be argued that an intelligent robot would not be morally on par with a human since it would still be a thing. However, aside from the fact that the robot would be a manufactured being and a human is (at least for now) a natural being, there would seem to be no relevant difference between them. The intelligence of the robot would seem to be what is important, not its physical composition. That is, it is not whether one is made of silicon or carbon that matters.

It might be argued that passing the Cartesian/Turing Test would not prove that a robot is self-aware, so it would still be reasonable to hold that it is not a person. That is, it would merely seem to be a person while only acting like one. While this is worth considering, the same sort of argument can be made about humans. Humans (sometimes) behave in an intelligent manner, but there is no way to determine whether another human is actually self-aware. This is the problem of other minds: I can see your behavior but must infer that you are self-aware based on an analogy to myself. Hence, I do not know that you are aware since I am not you. And, unlike Bill Clinton, I cannot feel your pain. From your perspective, the same is true about me: unless you are Bill Clinton, you cannot feel my pain. As such, if a robot acted in an intelligent manner, it would have to be classified as a person on these grounds. To fail to do so would be a mere prejudice in favor of the organic over the electronic.

In reply, it might be noted that some people believe other people should be used as objects. Those who would use a human as a mere thing would see nothing wrong with using an intelligent robot in the same way.

The obvious response to this is to reverse the situation: no sane person would wish to be treated as a mere thing, and hence such people cannot consistently accept using others in that manner. The other obvious reply is that such people are evil.

Those with religious inclinations would probably bring up the matter of the soul. But the easy reply is that we will have as much evidence that robots have souls as we now do for humans having souls. This is to say, no evidence at all.

One of the ironies of sexbots (or companionbots) is that the ideal is to make a product that is as human as possible. As such, to the degree that the ideal is reached, the “product” would be immoral to sell or own. This is a general problem for artificial intelligences: they are intended to be owned by people to perform usually onerous tasks, but to the degree they are intelligent, they would be slaves. And enslavement is wrong.

It could be countered that it is better that evil humans abuse sexbots rather than other humans. However, it is not clear that would be a lesser evil—it would just be an evil against a synthetic person rather than an organic person.

As a rule, any technology that can be used for sex will be used for sex. Even if it shouldn’t. In accord with this rule, researchers and engineers have been improving sexbot technology. By science-fiction standards, current sexbots are crude and are probably best described as sex dolls rather than sexbots. But it is wise to keep ethics ahead of the technology, and a utilitarian approach to this matter is appealing.

On the face of it, sexbots could be seen as nothing new; current models are a modest upgrade of sex dolls that have been around for quite some time. Sexbots are, of course, more sophisticated than the infamous blow-up sex dolls, but the idea is the same: the sexbot is an object that a person has sex with.

That said, one thing that makes sexbots morally interesting is that they are often designed to mimic humans not just in physical form (which is what sex dolls do) but also in mind. For example, the 2010 Roxxxy sexbot’s main feature is its personality (or, more accurately, personalities). As a fictional example, the sexbots in Almost Human do not merely provide sex—they also provide human-like companionship. However, such person-like sexbots are still science fiction, and so human-mimicking sexbots can be seen as something potentially new under the ethical sun.

An obvious moral concern is that human-mimicking sexbots could have negative consequences for humans, be they men or women. Not surprisingly, many of these concerns are analogous to existing moral concerns about pornography.

Pornography, so the stock arguments go, can have strong negative consequences. One is that it teaches men to see women as mere sexual objects. This can, it is claimed, influence men to treat women poorly and can affect how women see themselves. Another point of concern is the addictive nature of pornography, as people can become obsessed with it to their detriment.

Human-mimicking sexbots would seem to have the potential to be more harmful than pornography. After all, while watching pornography allows a person to see other people treated as mere sexual objects, a sexbot would allow a person to use a human-mimicking object sexually. This might have a stronger conditioning effect on the person using the object, perhaps habituating them to see people as mere sexual objects and increasing the chances they will mistreat people. If so, selling or using a sexbot would be morally wrong.

People might become obsessed with their sexbots, as some do with pornography. Then again, people might simply “conduct their business” with their sexbots and get on with life. If so, sexbots might be an improvement over pornography.  After all, while a guy could spend hours watching pornography, he would presumably not last very long with his sexbot.

Another concern raised about some types of pornography is that they encourage harmful sexual views and behavior. For example, violent pornography is believed to influence people to become more inclined to violence. As another example, child pornography is supposed to have an especially pernicious influence. Naturally, there is the concern about causation here: do people seek such porn because they are already that sort of person or does the porn influence them to become that sort of person? I will not endeavor to answer this here.

Since sexbots are objects, a person can do whatever they wish to their sexbot—hit it, burn it, and “torture” it and so on. Presumably there will also be specialty markets catering to unusual interests, such as those of pedophiles and necrophiliacs. If pornography that caters to these “tastes” can be harmful, then presumably being actively involved in such activities with a human-mimicking sexbot would be even more harmful. The person might be, in effect, practicing for the real thing. So, it would seem that selling or using sexbots, especially those designed for harmful “interests” would be immoral.

Not surprisingly, these arguments are also like those used against violent video games. Violent video games are supposed to influence people so that they are more likely to engage in violence. So, just as some have proposed restrictions on virtual violence, perhaps there should be strict restrictions on sexbots.

When it comes to video games, one plausible counter is that while violent video games might have a negative impact on some people, they allow most people to harmlessly enjoy virtual violence. This seems analogous to sports and non-video games: they allow people to engage in conflict and competition in safer and less destructive ways. For example, a person can indulge her love of conflict and conquest by playing Risk or Starcraft II after she works out her desire for violence by sparring a few rounds in the ring.

Turning back to sexbots, while they might influence some people badly, they might also provide a means by which people could indulge desires that would be wrong, harmful and destructive to indulge with another person. So, for example, a person who likes to engage in sexual torture could satisfy her desires on a human-mimicking sexbot rather than an actual human. The critical issue here is whether indulging in such virtual vice with a sexbot would harmlessly dissipate these desires or would fuel them, making a person more likely to inflict them on people. If sexbots did allow people who would otherwise harm other people to vent their “needs” harmlessly on machines, then that would seem good for society. However, if using sexbots would simply push them towards doing such things for real and with unwilling victims, then that would be bad. This, then, is a key part of addressing the ethical concerns about sexbots and something that should be duly considered before mass production begins.

Many years ago, an episode of the sci-fi buddy-cop show Almost Human on sexbots inspired me to revisit the ethics of sexbots. While the advanced, human-like models of the show are still fictional, the technological foundations needed for sexbots do exist, as companies are manufacturing humanoid robots. As such, it seems well worth considering, once again, the ethical issues involving sexbots, real and fictional.

At this time, sexbots are mere objects—while usually made to look like humans, they do not have the qualities that would make them even person-like. As such, ethical concerns about these sexbots do not involve concerns about wrongs done to the objects—presumably they cannot be wronged. But by using Kant’s discussion of ethics and animals, it is possible to build a moral view of even basic sexbots that are indisputably objects.

In his ethical theory Kant is clear that animals are means rather than ends and are mere objects. Rational beings, in contrast, are ends. For Kant, this distinction rests on the fact that rational beings can (as he sees it) choose to follow the moral law. Animals, lacking reason, cannot do this. Since animals are means and not ends, Kant claims we have no direct duties to animals. Despite being living beings, they are also just among the “objects of our inclinations” that derive value from the value we give them. Sexbots would, obviously, qualify as paradigm “objects of our inclinations.”

While it might seem odd, Kant argues that we should treat animals well. However, he does so while also trying to avoid giving animals any moral status of their own. Here is how he does it (or tries to do it).

While Kant is not willing to accept that we have direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing something would obligate us to that human, then an animal doing a similar thing would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in its old age.

While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (because, according to Kant, the dog is not rational), so, as Kant sees it, the dog cannot be wronged. Why, then, would it be wrong to shoot an old dog that has become a burden?

Kant’s answer seems consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will probably be damaged. Since, as Kant sees it, humans have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.

Kant discusses how people develop cruelty—they often begin with animals and then work up to harming human beings. As I point out to my students, Kant seems to have anticipated the psychological devolution of serial killers.

Kant goes beyond merely enjoining us to not be cruel to animals and encourages us to be kind to them. He even praises Leibniz for being gentle in his handling of a worm he found. Of course, he encourages this because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are practice for us: how we treat them habituates us in how we treat human beings.

Current sexbots obviously lack any meaningful moral status of their own. They do not feel or think—they are mere machines that might be made to look like humans. As such, they lack all qualities that might give them a moral status of their own.

Oddly enough, sexbots could be taken as being comparable to animals, at least as Kant sees them. After all, for him animals are mere objects and have no moral status of their own. Likewise for sexbots. Of course, the same is also true of sticks and stones. Yet Kant would never argue that we should treat stones well, and not just because he is very dead. This might also apply to sexbots. That is, perhaps it makes no sense to talk about good or bad relative to such objects. Thus, a key issue is whether sexbots are more like animals or more like stones—at least in terms of the matter at hand.

If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how such treatment affects the behavior of the person involved. So, for example, if being cruel to a real dog could damage a person’s humanity, then he should (as Kant sees it) not be cruel to the dog.  This should also extend to sexbots. For example, if engaging in certain activities with a sexbot would damage a person’s humanity, then he should not act in that way. If engaging in certain behavior with a sexbot would make a person more inclined to be kind to other rational beings, then the person should engage in that behavior. It is also worth considering that perhaps people should not engage in any behavior with sexbots—that having sex of any kind with a bot would be damaging to the person’s humanity.

Interestingly enough (or boringly enough), this sort of argument is often employed to argue against people watching pornography. The gist of such arguments is that viewing pornography can condition people (typically men) to behave badly in real life or at least have a negative impact on their character. If pornography can have this effect, then it seems reasonable to be concerned about the potential impact of sexbots on people. After all, pornography casts a person in a passive role viewing other people acting as sexual objects, while a sexbot allows a person to have sex with an actual sexual object.

Some ages get cool names, such as the Iron Age or the Gilded Age. Others have less awesome names. An excellent example of the latter is the designation of our time as the Awkward Age. Since philosophers are often willing to cash in on trends, it is not surprising that there arose a philosophy of awkwardness.

Various arguments have been advanced in support of the claim that this is the Awkward Age. Not surprisingly, one was built on the existence of so many TV shows and movies centered on awkwardness. There is a certain appeal to this sort of argument, and the idea that art expresses the temper, spirit, and social conditions of its age is an old one. I recall this approach to art from an art history class I took as an undergraduate. For example, the massive works of the ancient Egyptians are supposed to reveal their views of the afterlife, just as the harmony of the Greek works was supposed to reveal the soul of ancient Greece.

Wilde, in his dialogue “The New Aesthetics,” considers this. Wilde takes the view that “Art never expresses anything but itself.” Naturally enough, Wilde provides an account of why people think art is about the ages. His explanation is best put by Carly Simon: “You’re so vain, you probably think this song is about you.” Less lyrically, the idea is that vanity causes people to think that the art of their time is about them. Since the people of today were not around in the way back times of old, they cannot say that past art was about them—so they claim the art of the past was about the people of the past. This does have the virtue of consistency.

While Wilde does not offer a decisive argument, his view does have a certain appeal. It is also worth considering that it is problematic to draw an inference about the character of an age from whatever TV shows or movies happen to be in vogue with a certain circle (there are, after all, many shows and movies that are not focused on awkwardness). While it is reasonable to draw some conclusions about that specific circle, leaping beyond it to the general population and the entire age would be quite a jump. There are many non-awkward shows and movies that could be presented as contenders for defining the age. It seems sensible to conclude that it is vanity on the part of the members of such a circle to regard what they like as defining the age. It could also be seen as a hasty generalization—people infer that what they regard as defining must also apply to the general population.

A second, somewhat stronger, sort of argument for this being the Awkward Age is based on claims about extensive social changes. To use an oversimplified example, consider the case of gender in the United States. The old social norms were presented in terms of two roughly defined genders and sets of rules about interaction. Or so the older folks say to the kids of today. Such rules included that the man asked the woman out on the date and paid for everything. Or so the older folk say. Now, or so the argument goes, the norms are in disarray or have been dissolved. Sticking with gender, the fact that Facebook recognized over 50 genders complicates matters. Going with the dating rules once again, it is no longer clear who is supposed to do the asking and the paying. And, of course, this strikes some as a problem that will doom civilization.

In terms of how this connects to awkwardness, the idea is that when people do not have established social norms and rules to follow, ignorance and error can easily lead to awkward moments. For example, there could be an awkward moment on a date when the check arrives as the two people try to sort out who pays: Dick might be worried that he will offend Jane if he pays and Jane might be expecting Dick to pick up the tab—or she might think that each should pay their own tab. Or perhaps Jane is a vampire and plans to kill Dick, albeit awkwardly.

To use an analogy, consider playing a new and challenging video game. When a person first plays, she will be trying to figure out how the game works, and this will usually result in many failures. By analogy, when society changes, it is like being in a new game and one does not know the rules. Just as a person can look for guides to a new game online (like YouTube videos on how to beat tough fights in video games), people can try to find guides to behavior. However, new social conditions mean that such guides are not yet available or, if they are, they might be unclear or conflict with each other. For example, a person who is new to contemporary dating might try to muddle through on her own or try to do some research—most likely finding contradictory guides to correct dating behavior. And also running into bad advice and grifters galore.

Eventually, of course, the norms and rules will be worked out—as has happened in the past. Or, as we get older, we will pretend we worked out the norms and rules. Then we will complain about the youth. This indicates a point worth considering—today is obviously not the first time that society has undergone change, thus creating opportunities for awkwardness. As Wilde noted, our vanity contributes to the erroneous belief that we are special in this regard. That said, it could be contended that people today are reacting to social change in a way that is different and awkward. That is, this is truly the Age of Awkwardness. My own view is that this is one of many times of awkwardness—what has changed is the ability and willingness to broadcast awkward events. Plus we now have AI.

While terraforming and abortion are both subjects of moral debate, they would seem to have little else in common. However, some moral arguments used to justify abortion can be used to justify terraforming.

Briefly put, terraforming is the process of making a planet more earthlike. While this is still mostly science fiction, serious consideration has been given to how Mars, for example, might be changed to make it more compatible with terrestrial life. While there are some moral concerns with terraforming dead worlds, the major moral worries involve planets that already have life—or, at the very least, potential for the emergence of life. If a world needs to be terraformed for human habitation, such terraforming would almost certainly be harmful or even fatal for the indigenous life. While it can be argued that there might be cases in which terraforming benefits the local life, I will focus on terraforming that exterminates the local life. This could be called terminal terraforming.

One way to look at terminal terraforming is as analogous to abortion. As will be shown, there are some important differences between the two—but for now I will focus on the moral similarities.

One stock argument in favor of the moral acceptability of abortion is the status argument. While these arguments take various forms, the gist is that the termination of a pregnancy is morally acceptable on the grounds that the woman has a superior moral status to the aborted entity (readers are free to use whichever term they prefer—I try to use neutral terms to avoid begging the question). This argument is very similar to that used by St. Thomas Aquinas and St. Augustine to morally justify killing plants and animals for food. Roughly put, they argued that humans are superior to animals, so it is acceptable for us to harm them when we need to.

This argument can justify terminal terraforming: if the indigenous life has less moral status than the terraforming species, then it could be argued that the terraforming is morally acceptable. The status argument has many variations. One common version uses the notion of rights—the rights of the woman outweigh the rights (if any) of the aborted entity. This is because the woman has a superior moral status. This argument is also commonly used to justify killing animals for food or sport—while they (might) have some rights, the rights of humans trump those of animals.

In the case of terraforming, a similar appeal to rights could be used to justify terminal terraforming. For example, if humans need to expand to a world that has only single-celled life, then the rights of humans would outweigh the rights of those creatures.

Another version uses utilitarianism: the interests, happiness and unhappiness of the woman are weighed against the interests, happiness and unhappiness of the aborted entity. Those favoring this argument note that the interests, happiness and unhappiness of the woman far outweigh those of the aborted entity—usually because it lacks the capacities of an adult. Not surprisingly, this sort of argument is also used to justify the killing of animals. For example, it is often argued that the happiness people get from eating meat outweighs the unhappiness of the animals they consume.

As with the other status arguments, this can be used to justify terraforming. As with all utilitarian arguments, one must weigh the happiness and unhappiness of the involved parties. If the life on the planet to be terraformed has lesser capacities than humans in regard to happiness and unhappiness (such as a world whose highest form of life is the alien equivalent of algae), then it would be morally acceptable for humans to terraform that world. Or so it could be argued.

The status argument is sometimes itself supported by an argument focusing on the difference between actuality and potentiality. While the entity to be aborted is a potential person (on some views), it is not an actual person. Since the woman is an actual person, she has the higher moral status. The philosophical discussions of the potential versus the actual are rather old and are a matter of metaphysics. However, the argument can be made without a journey into the metaphysical realm simply by using the intuitive notions of potentiality and actuality. For example, an actual masterpiece of painting has higher worth than the blank canvas and unused paint that constitute a potential masterpiece. This sort of argument can also be used to justify terraforming on worlds whose lifeforms are not (yet) people and, obviously enough, on worlds that merely have the potential of producing life.

While the analogy between the two has merit, there are obvious ways to try to break the comparison. One obvious point is that in the case of abortion, the woman is the “owner” of the body where the aborted entity used to live. It is this relation that is often used to morally warrant abortion and to provide a moral distinction between a woman choosing to have an abortion and someone else who kills the product of conception (again, I’m using neutral terms to avoid begging the question).

When humans arrive to terraform a world that already has life, the life that lives there already “owns” the world and hence humans cannot claim that special relation that would justify choosing to kill. Instead, the situation would be more like killing the life within another person and this would presumably change the ethics of the situation.

Another important difference is that while abortion (typically) kills just one entity, terraforming would (typically) wipe out entire species. As such, terraforming of this sort would be analogous to aborting all pregnancies and exterminating humans—as opposed to the termination of some pregnancies. This moral concern is, obviously enough, the same as the concern about human-caused extinction here on earth. While people are concerned about the death of individual entities, there is the view that the extermination of a species is something morally worse than the death of all the individuals (that is, the wrong of extinction is not merely the sum of the wrongs of all the individual deaths).

These considerations show that the analogy does have obvious problems. That said, there still seems to be a core moral concern that connects abortion and terminal terraforming: what (if anything) morally justifies killing on the grounds of (alleged) superior moral status?

My core aesthetic principle is that if I can do something, then it is not art. While this is (mostly) intended as humorous, it is well founded—I have no artistic talent. Despite this, or perhaps because of this, I taught Aesthetics for over two decades.

While teaching this class, I became very interested in two questions. The first was whether a person without any artistic talent could master the technical aspects of an art. The second was whether a person without any artistic talent could develop whatever it is that is needed to create a work of genius. Or, at a much lower level, a work of true art.

While the usual philosophical approach would be to speculate and debate, I engaged in philosophical blasphemy and undertook an empirical investigation. I would see if I could teach myself to draw. I would then see if I could teach myself to create art. I began this experiment in August of 2012 and employed the powers of obsession that have served me so well in running. It turns out they also work for drawing—I have persisted in drawing, even when I had to scratch out sketches on scraps of paper using a broken pencil. Yes, I am like that.

While this experiment has just one subject (me), I have shown that it is possible for a person with no artistic talent to develop the technical skills of drawing. I have trained myself to become what I call a graphite technician. My skill is such that people say, “I like your drawings because I can tell who they are of.” That is, I have enough skill to create recognizable imitations. I refuse to accept any claims that I am an artist, because of the principle mentioned above. Fortunately, I also have an argument to back up this claim.

When I started my experiment, I demonstrated my lack of drawing ability to my students and asked them why my bad drawing of a capybara was not art. They pointed out the obvious—it did not look much like a capybara because it was so badly drawn. When asked if it would be art if I could draw better, they generally agreed. I then asked about just photocopying the picture I used as the basis for my capybara drawing. They pointed out the obvious—that would not be art, just a copy. This experiment began before the arrival of AI image generators, otherwise I might not have even bothered with the experiment.

One reason a photocopy would not be art is that it is a mere mechanical reproduction. When I draw a person well enough for others to recognize the subject, I am exhibiting a technical skill—I can re-create their appearance on paper using a pencil.  However, technical skill alone does not make the results art. After all, this technical skill can be exceeded by a camera or photocopier. Just as being able to scan and print a photo of a person does not make a person an artist, being able to create a reasonable facsimile of a person using a pencil and paper does not make a person an artist—just a graphite technician.

Why this is so can be shown by considering why a mechanical copy is not art: there is nothing in the copy that is not in the original (laying aside duplication defects). As such, the more exact the copy of the original, the less room there is for whatever it is that makes a work art. So, as I get better at creating drawings that look like what I am drawing, I get closer to being a human photocopier. I do not get closer to being an artist.

This sort of argument would seem to suggest that photography cannot be art—after all, the photographer is just a camera technician. One might note that an unaltered photograph merely captures an image of what is there. One counter to this is that a photographer (as opposed to a camera technician) adds something to the photograph (I do not mean digital or other manipulation). This something extra seems to be their perspective—they select what they will capture. So, what makes the work art is not that it duplicates reality but that the photographer has added that something extra. This something extra is what makes the photograph art and distinguishes it from mere picture taking. Or so photographers tell me.

It could be countered that what I am doing is art. Going back to the time of the ancient Greeks, art was taken as a form of imitation and, in general, the better the imitation, the better the art. Of course, Plato was critical of art on this ground—he regarded it as a corrupting imitation of an imitation.

Jumping ahead to the modern era, thinkers like d’Alembert still regarded fine art as an imitation, typically an imitation of nature aimed at producing pleasure. However, his theory of art does leave an opening for a graphite technician like myself to claim the beret of the artist. d’Alembert defined “art” as “any system of knowledge reducible to positive and invariable rules independent of caprice or opinion.” What I have done, like many before me, is learn the rules of drawing—geometry, shading, perspective and so on. As such, I can (by his definition) be said to be an artist.

Fortunately for my claim that I am not an artist, d’Alembert distinguishes between the fine arts and the mechanical arts. The mechanical arts involve rules that can be reduced to “purely mechanical operations.” In contrast, d’Alembert notes that while the “useful liberal arts have fixed rules any can transmit, but the laws of Fine Arts are almost exclusively from genius.”  What I am doing, as a graphite technician, is following rules. And, as d’Alembert claimed, “rules concerning arts are only the mechanical part…”

What I am missing, at least on d’Alembert’s theory, is genius. On my own view, I am missing the mysterious something extra. While I do not have a developed theory of “the extra”, I have a vague idea about what it is in the case of drawing. As I developed my technical skills, I got better at imitating what I saw and could cause people to recognize what I was imitating. However, an artist who draws goes beyond showing people what they can already see in the original. The artist can see in the original what others cannot and then enable them to see it in her drawing. All I can do is create drawings where people can see what they can already see. Hence, I am a graphite technician and not an artist. I do not claim this to be a proper theory of art—but it points vaguely in the direction of such a theory.

That said, the experiment continues. I intend to see if it is possible to learn how to add that something extra or if, as some claim, it is simply something a person has or does not have. As of this writing on March 19, 2026, I still lack that something extra. I am persisting in the face of AI image generators, although my own failure at creating art might provide some insight into why AI generated images are not art. AI has, however, changed one thing about my drawing. When I was good enough to create images people could recognize, I would do birthday drawings of people and post them on Facebook—the responses were generally favorable, and some people really appreciated the effort. The arrival of AI image generators changed this: people now assume images are AI generated and, of course, the drawings ceased to be valued. After all, someone can create a much better image in seconds using AI. I’ll write more about this in the future.

The drawing pictured is of my husky, Isis, who died in 2016. This is the only drawing I have saved; I compost my drawings.

 

Doubling down occurs when a person is confronted with evidence against a belief and their belief, rather than being weakened, is strengthened. A plausible explanation of doubling down rests on Leon Festinger’s classic theory of cognitive dissonance. When a person has a belief that is threatened by evidence, she has two main choices. The first is to adjust her belief in accord with the evidence. If the evidence is plausible and strongly supports the inference that the belief is false, then it is rational to reject the old belief. If the evidence is not plausible or does not strongly support the inference that the belief is false, then it is rational to stick with the threatened belief on the grounds that the threat is not much of a threat.

Assessment of what is plausible evidence can be problematic. In general terms, assessing evidence involves considering how it matches one’s own observations, one’s background information, and credible sources. This assessment can push the matter back: the evidence for the evidence also needs to be assessed, which fuels classic skeptical arguments about the impossibility of knowledge. Every belief must be assessed, which leads to an infinite regress, thus making knowing whether a belief is true impossible. Naturally, retreating into skepticism will not help when a person is responding to evidence against a beloved belief (unless the beloved belief is a skeptical one)—the person wants her beloved belief to be true. As such, someone defending a beloved belief needs to accept that there is some support for the belief—even if the basis is faith or revelation.

In terms of assessing the reasoning, the assessment is objective when the logic is deductive. Deductive logic is such that if an argument is doing what it is supposed to do (be valid), then if the premises are true, the conclusion must be true. Deductive arguments can be assessed by such things as truth tables, Venn diagrams and proofs; thus, the reasoning is objectively good or bad. Inductive reasoning is different. While the premises of an inductive argument are supposed to support the conclusion, inductive arguments are such that true premises only make (at best) the conclusion likely to be true. Inductive arguments vary in strength and while there are standards for assessing them, reasonable people can disagree about the strength of an inductive argument. People can also embrace skepticism here, specifically the problem of induction: even when an inductive argument has all true premises and the reasoning is as good as inductive reasoning gets, the conclusion could still be false. The obvious problem with trying to defend a beloved belief with the problem of induction is that it also cuts against the beloved belief—while any inductive argument against the belief could have a false conclusion, so could any inductive argument for it. As such, a person who wants to hold to a beloved belief in a way that is justified would seem to need to accept argumentation. Naturally, a person can embrace other ways of justifying beliefs—the challenge is showing that these ways should be accepted. This would seem, ironically, to require argumentation.

A second option is to reject the evidence without honestly assessing it and rationally considering the logic of the arguments. If a belief is very important to a person, perhaps even central to her identity, then the cost of giving up the belief would be very high. If the person thinks (or feels) that the evidence and reasoning cannot be engaged fairly without risking the belief, then the person can reject the evidence and reasoning using various techniques of self-deception and bad logic (fallacies serve well here).

 This rejection has less psychological cost than engaging the evidence and reasoning but is not always consequence free. Since the person probably has some awareness of their self-deception, it needs to be psychologically “justified”, and this results in a strengthening of the commitment to the belief. There are many cognitive biases that help here, such as confirmation bias (seeking, interpreting, and remembering information to confirm existing beliefs) and other forms of motivated reasoning. These can be hard to defend against, since they derange the very mechanisms that are needed to avoid them.

One interesting way people “defend” beliefs is by categorizing the evidence against the beliefs and opposing arguments as unjust attacks, which strengthens their resolve in the face of perceived hostility. After all, people fight harder when they believe they are under attack. Some people even infer they must be right because they are being criticized. As they see it, if they were not right, people would not be trying to show that they are in error. One variation of this is when a person claims they must be right because everyone disagrees with them.

People also, as John Locke argued in his work on enthusiasm, take the strength of their feelings about a belief as evidence for its truth. When people are challenged, they often feel angry and this makes them feel even more strongly. Hence, when they “check” on the truth of the belief using the measure of feeling, they feel even stronger that it is true. However, how they feel about it (as Locke argued) is no indication of its truth. Or falsity.

As a closing point, one intriguing rhetorical tactic is to accuse a person who disagrees with you of being the one who is doubling down. This accusation, after all, comes with the insinuation that the person is irrationally holding to a false belief. A reasonable defense is to show that evidence and arguments are being used to support a belief. The unreasonable counter is to employ the very tactics of doubling down and refuse to accept such a response. That said, it is worth considering that one person’s double down is often another person’s considered belief. Or, as it might be put, I support my beliefs with facts and logic while my opponents double down.

While asteroid mining is still science fiction, companies are already preparing to mine the sky. While space mining sounds awesome, lawyers are murdering the awesomeness with legalese. Long ago, President Obama signed the U.S. Commercial Space Launch Competitiveness Act, which seemed to make asteroid mining legal. The key part of the law is that “Any asteroid resources obtained in outer space are the property of the entity that obtained them, which shall be entitled to all property rights to them, consistent with applicable federal law and existing international obligations.” More concisely, the law makes it so that asteroid mining by U.S. citizens would not violate U.S. law.

While this would seem to open the legal doors to asteroid mining, there are still legal barriers, although the law is obviously make-believe and requires that people either are willing to follow it or the people with guns are willing to shoot people for not following it. Various space treaties, such as the Outer Space Treaty of 1967, do not give states sovereign rights in space. As such, there is no legal foundation for a state to confer space property rights to its citizens based on its sovereignty. However, the treaties do not seem to forbid private ownership in space—as such, any other nation could pass a similar law that allows its citizens to own property in space without violating the laws of that nation. Obviously enough, satellites are owned by private companies and this could set a precedent for owning asteroids, depending on how clever the lawyers are.

One concern is that if several nations pass such laws and people start mining asteroids, then conflict over valuable space resources will be all but inevitable. In some ways this will be a repeat of the past: the more technologically advanced nations engaged in a struggle to acquire resources in an area where they lack sovereignty. These past conflicts tended to escalate into wars, which is something that must be considered in the final frontier.

One way to try to avoid war over asteroids is new treaties governing space mining. This is, obviously enough, a matter that will be handled by space lawyers, governments, and corporations. Unless, of course, AI kills us all first. Then they can sort out asteroid mining.

While the legal aspects of space ownership are interesting, the moral aspects of ownership are also of concern. While it might be believed that property rights in space are entirely new, this is not the case. While the setting is different, the matter of space property matches the state of nature scenarios envisioned by thinkers like Hobbes and Locke. To be specific, there is an abundance of resources and an absence of authority. As it now stands, while no one can hear you scream in space, there is also no one who can arrest you for space piracy as long as you stay in space.

Using the state of nature model, it can be claimed that there are currently no rightful owners of the asteroids, or it could be claimed that we are all the rightful owners (the asteroids are the common property of all of humanity). 

If there are currently no rightful owners, then the asteroids are there for the taking: an asteroid belongs to whoever can take and hold it. This is on par with Hobbes’ state of nature—practical ownership is a matter of possession. As Hobbes saw it, everyone has the right to all things, but this is effectively a right to nothing—other than what a person can defend from others. As Hobbes noted, in such a scenario profit is the measure of right and who is right is to be settled by the sword.

While this is practical, brutal and realistic, it is a bit problematic in that it would, as Hobbes also noted, lead to war. His solution, which would presumably work as well in space as on earth, would be to have sovereignty in space. This would shift the war of all against all in space (of the sort that is common in science fiction about asteroid mining) to a war of nations in space (which is also common in science fiction). The war could, of course, be a cold one fought economically and technologically rather than a hot one fought with mass drivers and lasers.

If asteroids are regarded as the common property of humanity, then Locke’s approach could be taken. As Locke saw it, God gave everything to humans in common, but people must acquire things from the common property to make use of it. Locke gives a terrestrial example of how a person needs to make an apple her own before she can benefit from it. In the case of space, a person would need to make an asteroid her own to benefit from the materials it contains.

Locke sketched out a basic labor theory of ownership—whatever a person mixes her labor with becomes her property. As such, if asteroid miners located an asteroid and started mining it, then the asteroid would belong to them.  This does have some appeal: before the miners start extracting the minerals from the asteroid, it is just a rock drifting in space. Now it is a productive mine, improved from its natural state by the labor of the miners. If mining is profitable, then the miners would have a clear incentive to grab as many asteroids as they can, which leads to the moral problem of the limits of ownership.

Locke does set limits on what people can take in his proviso: those who take from the common resources must leave as much and as good for others. When describing this to my students, I always use an analogy to a party: since the food is for everyone, everyone has a right to the food. However, taking it all or taking the very best would be wrong (and rude). While this proviso is ignored on earth, the asteroids could provide us with a fresh start in terms of dividing up the common property of humanity. After all, no one has any special right to claim the asteroids—so we all have equal good claims to the resources they contain.

As with earth resources, some will contend that there is no obligation to leave as much and as good for others in space. Instead, those who get there first will contend that ownership should be on the principle that whoever grabs it first and can keep it is the “rightful” owner. Unless, of course, someone grabs it from them; then they would presumably see that as a cruel injustice.

Those who take this view would probably argue that those who get their equipment into space would have done the work (or put up the money) and (as argued above) would be entitled to all they can grab and use or sell. Other people are free to grab what they can, if they have access to the resources needed to mine the asteroids. Naturally, the folks who lack the resources to compete will remain poor—their poverty will, in fact, disqualify them from owning any of the space resources much in the way poverty effectively disqualifies people on earth from owning earth resources.

While the selfish approach will be appealing to those who can grab the asteroids, arguments can be made for sharing them. One reason is that those who will mine the asteroids did not create the means to do so from nothing. Reaching the asteroids will be the result of centuries of human civilization that made such technology possible. As such, there would seem to be a general debt owed to human civilization and paying this off would involve also contributing to the general good of humanity. Naturally, this line of reasoning can be countered by arguing that the successful miners will benefit humanity when their profits “trickle down” from space. Sadly, as on earth, gravity does not seem to affect money in terms of trickling it down. It always seems to go upwards.

Another way to argue for sharing the resources is to use an analogy to a buffet line. Suppose I am first in line at a buffet. This does not give me the right to devour everything I can with no regard for the people behind me. It also does not give me the right to grab whatever I cannot eat myself to sell it to those who had the misfortune to be behind me in line. As such, these resources should be treated in a similar manner, namely fairly and with some concern for those who are behind the first people in line.

Naturally, these arguments for sharing can be countered by the usual arguments in favor of selfishness. While it is tempting to think that the vastness of space will overcome selfishness (that is, there will be so much that people will realize that not sharing would be absurd and petty), this seems unlikely—the more there is, the greater the disparity is between those who have and those who have not. On this pessimistic view we already have all the moral and legal tools we need for space—it is just a matter of changing the wording a bit to include “space.”

In the previous essay on threat assessment, I looked at the influence of availability heuristics and fallacies related to errors in reasoning about statistics and probability. This essay continues the discussion by exploring the influence of fear and anger on threat assessment.

A rational assessment of a threat involves properly considering how likely it is that a threat will occur and, if it occurs, how severe the consequences might be. As might be suspected, the influence of fear and anger can cause people to engage in poor threat assessment that overestimates the likelihood or severity of a threat.

One starting point for anger and fear is the stereotype. Roughly put, a stereotype is an uncritical generalization about a group. While stereotypes are generally thought of as being negative (that is, attributing undesirable traits such as laziness or greed), there are also positive stereotypes. They are not positive in that the stereotyping itself is good. Rather, the positive stereotype attributes desirable qualities, such as being good at math or skilled at making money. While it makes sense to think that stereotypes that provide a foundation for fear would be negative, they often include a mix of negative and positive qualities. For example, a feared group might be cast as stupid and weak, yet somehow also incredibly cunning and dangerous.

Stereotyping leads to mistakes similar to those that arise from hasty generalization, in that reasoning about a threat based on stereotypes will often result in errors. The defense against a stereotype is to seriously inquire into whether the stereotype is true.

Stereotyping is useful for demonizing. Demonizing, in this context, involves unfairly portraying a group as evil and dangerous. This can be seen as a specialized form of hyperbole in that it exaggerates the evil of the group and the danger it represents. Demonizing is often combined with scapegoating—blaming a person or group for problems they are not responsible for. A person can demonize on their own or be subject to the demonizing rhetoric of others.

Demonizing presents a clear threat to rational threat assessment. If a group is demonized successfully, it will be (by definition) seen as more evil and more dangerous than it really is. As such, both the assessment of the probability and the severity of the threat will be distorted. For example, the demonization of Muslims by various politicians and pundits distorts threat assessments.

The defense against demonizing is like the defense against stereotypes—a serious inquiry into whether the claims are true. It is worth noting that what might seem to be demonizing might be an accurate description. This is because demonizing is, like hyperbole, exaggerating the evil of and danger presented by a group. If the description is true, then it would not be demonizing. Put informally, describing a group as evil and dangerous need not be demonizing. For example, descriptions of Isis as evil and dangerous were generally accurate. As are descriptions of evil and dangerous billionaires.  

While stereotyping and demonizing are rhetorical devices, there are also fallacies that distort threat assessment. Not surprisingly, one is scare tactics (also known as appeal to fear). This fallacy involves substituting something intended to create fear in the target in place of evidence for a claim. While scare tactics can be used in other ways, it can be used to distort threat assessment. One aspect of its distortion is the use of fear—when people are afraid, they tend to overestimate the probability and severity of threats. Scare tactics is also used to feed fear—one fear can be used to get people to accept a claim that makes them even more afraid.

One thing that is especially worrisome about scare tactics in the context of terrorism is that in addition to making people afraid, it is also routinely used to “justify” encroachments on rights, massive spending, and the abandonment of moral values. While courage is an excellent defense against this fallacy, asking two important questions also helps. The first is to ask, “should I be afraid?” and the second is to ask, “even if I am afraid, is the claim actually true?” For example, scare tactics has been used to “support” the claim that refugees should not be allowed into the United States. In the face of this tactic, one should inquire whether or not there are grounds to be afraid of refugees and also inquire into whether or not an appeal to fear justifies banning refugees.

It is worth noting that just because something is scary or makes people afraid it does not follow that it cannot serve as legitimate evidence in a good argument. For example, the possibility of a fatal head injury from a motorcycle accident is scary but is also a good reason to wear a helmet. The challenge is sorting out “judgments” based merely on fear and judgments that involve good reasoning about scary things.

While fear makes people behave irrationally, so does anger. While anger is an emotion and not a fallacy, it does provide the fuel for the appeal to anger fallacy. This fallacy occurs when something that is intended to create anger is substituted in place of evidence for a claim. For example, a demagogue might work up a crowd’s anger at illegal migrants to get them to accept absurd claims about building a wall along a massive border.

Like scare tactics, the use of an appeal to anger distorts threat assessment. One aspect is that when people are angry, they tend to reason poorly about the likelihood and severity of a threat. For example, a crowd that is enraged against illegal migrants might greatly overestimate the likelihood that the migrants are “taking their jobs” and the extent to which they are “destroying America.” Another aspect is that the appeal to anger, in the context of public policy, is often used to “justify” policies that encroach on rights and do other harms. For example, when people are angry about a mass shooting, proposals follow to limit gun rights that had no relevance to the incident in question. As another example, the anger at illegal migrants is often used to “justify” policies that will harm the United States. As a third example, appeals to anger are often used to justify policies that would be ineffective at addressing terrorism and would do far more harm than good.

It is important to keep in mind that if a claim makes a person angry, it does not follow that the claim cannot be evidence for a conclusion. For example, a person who learns that her husband is having an affair with an underage girl would probably be very angry. But this would also serve as good evidence for the conclusion that she should report him to the police and divorce him. As another example, the fact that illegal migrants are here illegally and knowingly employed by businesses because they can be more easily exploited than American workers can make someone mad, but this can also serve as a premise in a good argument in favor of enforcing (or changing) the laws.

One defense against appeal to anger is good anger management skills. Another is to seriously inquire into whether there are grounds to be angry and whether any evidence is offered for the claim. If all that is offered is an appeal to anger, then there is no reason to accept the claim based on the appeal.

The rational assessment of threats is important for practical and moral reasons. Since society has limited resources, rationally using them requires considering the probability of threats rationally—otherwise resources are being misspent. There is also the concern about the harm of creating fear and anger that are unfounded. In addition to the psychological harm to individuals that arise from living in fear and anger, there is also the damage stereotyping, demonizing, scare tactics and appeal to anger do to society. While anger and fear can unify people, they most often unify by dividing—pitting us against them. I urge people to think through threats rather than giving in to the seductive demons of fear and anger.

When engaged in rational threat assessment, there are two main factors that need to be considered. The first is the probability of the threat. The second is the severity of the threat. These two can be combined into one sweeping question: “how likely is it that this will happen and, if it does, how bad will it be?”

Making rational decisions about dangers involves considering both factors. For example, consider the risks of going to a crowded area such as a movie theater or school. There is a high probability of being exposed to the cold virus, but it is (for most people) not a severe threat. There is a low probability that there will be a mass shooting on my campus, but it is a high severity threat.
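The combination of the two factors can be sketched as a rough expected-harm calculation. The numbers below are invented purely for illustration (the probabilities and the 0–100 severity scale are my assumptions, not data from any source):

```python
# Rough expected-harm sketch: risk = probability x severity.
# All numbers are invented for illustration only.

def expected_harm(probability_per_year, severity):
    """Combine likelihood and severity into a single rough score."""
    return probability_per_year * severity

# High-probability, low-severity threat: catching a cold in a crowd.
cold = expected_harm(probability_per_year=0.9, severity=2)

# Low-probability, high-severity threat: a mass shooting on campus.
shooting = expected_harm(probability_per_year=0.00001, severity=100)

print(f"cold: {cold}, shooting: {shooting}")
# Note that the common-but-mild threat can dominate the expected score
# even though the rare threat is far worse when it does occur.
```

The point of the sketch is only that rational assessment must weigh both factors together; feelings tend to track severity alone.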

Our survival as a species seems to have been despite our poor skills at rational threat assessment. To be specific, the worry people feel about a threat generally does not match up with the probability of the threat occurring. People seem somewhat better at assessing severity, though we often get this wrong.

One excellent example of poor threat assessment is the fear Americans have about terrorism. Between 1975 and 2025, 3,577 Americans died as the result of terrorism, which accounted for 0.35% of all murders in the US in that time. If you are in the United States now, your odds of being killed in such an attack are about 1 in 4 million per year. This includes all forms of terrorism, although you would now be statistically most likely to be killed by right-wing terrorists.
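The "about 1 in 4 million per year" figure can be checked with back-of-the-envelope arithmetic. The average US population over the period used below (roughly 275 million) is my own assumption, not a number from the text:

```python
# Back-of-the-envelope check of the "1 in 4 million per year" figure.
deaths = 3577                  # US terrorism deaths, 1975-2025 (from the text)
years = 50
avg_population = 275_000_000   # assumed rough average US population over the period

deaths_per_year = deaths / years            # about 71.5 per year
odds = avg_population / deaths_per_year     # one death per this many people, per year

print(f"about 1 in {odds:,.0f} per year")
```

Under that assumption the result comes out close to one in four million, consistent with the figure above.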

While being killed by terrorists in the United States is unlikely, some people are terrified by the possibility (which is, of course, the goal of terrorism). Given that an American is more likely to be killed while driving than by a terrorist, it might be wondered why people are so bad at threat assessment. The answer, in terms of feeling fear vastly out of proportion to probability, involves a cognitive bias and some classic fallacies.

People (probably) follow general rules when they estimate probabilities and the ones we use unconsciously are called heuristics. While the right way to estimate probability is to use statistical methods, people often fall victim to the availability heuristic. This is when a person unconsciously assigns a probability based on how often they think of something. While something that occurs often is likely to be thought of often, thinking of something more often does not make it more likely to occur.

After an act of terrorism, people think about terrorism more often and tend to unconsciously believe that the chance of terror attacks occurring is higher than it really is. To use a non-terrorist example, when people hear about a shark attack, they tend to think that the chances of it occurring are high—even though the probability is low (driving to the beach is much more likely to kill you). The defense against this bias is to find reliable statistical data and use that as the basis for inferences about threats—that is, think it through rather than trying to feel through it. This is, of course, difficult: people tend to regard their feelings, however unwarranted, as the best evidence—despite usually being the worst evidence.

People are also misled about probability by fallacies. One is the spotlight fallacy. The spotlight fallacy is committed when a person uncritically assumes that all (or many) members or cases of a certain type are like those that receive the most attention or coverage in the media. After an incident involving terrorists who are Muslim, media attention will focus on that fact, often leading people who are poor at reasoning to infer that most Muslims are terrorists. This is the exact sort of mistake that would occur if it were inferred that most veterans are terrorists because the media covered a terrorist who was a military veteran. If people believe that, for example, most Muslims are terrorists, then they will make incorrect inferences about the probability of a terrorist attack by Muslims in the United States. This is distinct from someone simply lying about, for example, Muslims and claiming they are terrorists because the person is a bigot or wants to exploit the fear they create.

Anecdotal evidence is another fallacy that contributes to poor inferences about the probability of a threat. This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy also occurs when someone rejects reasonable statistical data supporting a claim in favor of one example or a small number of examples that go against the claim. This fallacy is like hasty generalization and a similar sort of error is committed, namely drawing an inference based on a sample that is inadequate in size relative to the conclusion. The main difference between hasty generalization and anecdotal evidence is that the fallacy of anecdotal evidence involves using a story (anecdote) as the sample. Out in the wild, it can be difficult to tell whether a fallacy is a hasty generalization or anecdotal evidence; fortunately, what matters is recognizing that a fallacy is a fallacy even if it is not clear which one it is.

People fall victim to this fallacy because stories and anecdotes usually have more emotional and psychological impact than statistical data. This leads people to infer that what is true in an anecdote must be true of the whole population, or that an anecdote justifies rejecting statistical evidence. Not surprisingly, people most often commit this fallacy because they want to believe that what is true in the anecdote is true for the whole population.

In the case of terrorism, people use both anecdotal evidence and hasty generalization: they point to a few examples of terrorism or tell a story about a specific incident, and then draw an unwarranted conclusion about the probability of a terrorist attack occurring in the United States. For example, people point out that terrorists have masqueraded as refugees and infer that refugees in general present a major threat to the United States. Or they might tell the story of one attacker in San Bernardino who arrived in the United States on a K-1 ("fiancé") visa and draw unwarranted conclusions about the danger of the entire visa system.

One last fallacy is misleading vividness. This occurs when a very small number of particularly dramatic events are taken to outweigh statistical evidence. This sort of “reasoning” is fallacious because the mere fact that an event is exceptionally vivid or dramatic does not make the event more likely to occur, especially in the face of statistical evidence to the contrary.

People often accept this sort of “reasoning” because particularly vivid or dramatic cases usually make a very strong impression on the mind. For example, mass shootings are vivid and awful, so it is hardly surprising that people often feel they are very much in danger from such attacks. Another way to look at this fallacy in the context of threats is that a person conflates the severity of a threat with its probability. That is, the worse the harm, the more likely a person feels that it will occur. But the vividness of a harm has no connection to the probability it will occur.
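The conflation described above can be untangled by keeping severity and probability as separate inputs and, where a comparison is wanted, combining them as expected harm (probability times severity). The numbers here are hypothetical, chosen only to show that a vivid, severe threat can still carry less expected harm than a mundane, likely one:

```python
# Severity and probability are independent axes; misleading vividness
# conflates them. Hypothetical numbers, purely for illustration.
def expected_harm(probability: float, severity: float) -> float:
    """Expected harm = probability of the event times harm if it occurs."""
    return probability * severity

vivid_rare = expected_harm(probability=0.000001, severity=1_000_000)
mundane_common = expected_harm(probability=0.01, severity=1_000)

print(vivid_rare < mundane_common)  # the mundane threat dominates
```

This does not settle how much weight severity should get, only that feeling a harm is likely because it is dreadful is an error: the two quantities vary independently.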

That said, considering the possibility of something dramatic or vivid occurring is not always fallacious. For example, a person might decide never to go skydiving because hitting the ground after a parachute failure would be a particularly dramatic death. If he knows that, statistically, the chances of such an accident are very low but considers even a small risk unacceptable, then he is not committing this fallacy. This becomes a matter of value judgment: how much risk a person is willing to tolerate relative to the severity of the potential harm.

The defense against these fallacies is to use a proper statistical analysis as the basis for inferences about probability. As noted above, there is still the psychological problem: people tend to act on the basis of how they feel rather than what the facts show.

Such rational assessment of threats matters for both practical and moral reasons, and terrorism is no exception. Since society has limited resources, using them well requires assessing the probability of threats rationally; otherwise resources are misspent. For example, spending billions to counter an unlikely threat while spending little on major causes of harm would be irrational (if the goal is to protect people from harm). There is also the harm of creating unfounded fear: in addition to the psychological harm to individuals, there is the damage to the social fabric. While creating unwarranted fear is useful for grifters, pundits, and politicians, it is bad for the rest of us, and thinking things through is a way to protect yourself from needless fear and from those who wish to exploit it.