One political narrative is the tale of the poor defrauding government programs. The (alleged) grifter Donald Trump, for example, claims that the poor commit a lot of fraud. Fox News consistently claims, usually without evidence, that government programs aimed at helping the poor are exploited by the poor. In most cases, the “evidence” presented in support of such claims seems to be the feeling that there must be a lot of fraud. However, there is little inclination to look for supporting evidence—if they feel strongly enough that a claim is true, that is good enough for them.

The claim that such aid is fraught  with fraud is often used to argue that it should be cut or even eliminated.  The idea is that the poor are “takers” who are fraudulently living off the “makers.” While fraud is wrong, it is important to consider some key questions.

The first question is this: what is the actual percentage of fraud that occurs in such programs? While, as noted above, some claim fraud is rampant, the statistical data tells another story. In the case of unemployment insurance, the rate of fraud is estimated to be less than 2%. This is lower than the rate of fraud in the private sector. In the case of welfare, fraud is sometimes reported as being 20%-40% at the state level. However, this “fraud” seems to consist mostly of errors made by bureaucrats rather than fraud committed by the recipients. Naturally, an error rate so high is unacceptable—but it is a different narrative than that of the wicked poor stealing from the taxpayers.

SNAP (Food stamp) fraud does occur—but it is mostly committed by businesses rather than the recipients.  While there is some fraud on the part of recipients, the best data indicates that such fraud accounts for about 1% of the payments. Given the rate of fraud in the private sector, that is exceptionally good.

Given this data, the overwhelming majority of those who receive assistance are not engaged in fraud. This is not to say that fraud should be ignored—in fact, it is the concern with fraud on the part of the recipients that has resulted in such a low incidence of fraud. Interestingly, about one third of fraud involving government money involves not the poor but defense contractors, who account for about $100 billion in fraud per year. Medicare and Medicaid combined have about $100 billion in fraudulent expenditures per year. While there is also a narrative of the wicked poor in regard to Medicare and Medicaid, the fraud is usually perpetrated by the providers of health care rather than the recipients. As such, the focus on fraud should shift from the poor recipients of aid to defense contractors and to the Medicare/Medicaid providers. That is, it is not the wicked poor who are siphoning away money with fraud, it is the wicked wealthy who are stealing from the rest of us. As such, the narrative of the poor defrauding the state is a flawed narrative. While such fraud does happen, the overall level of fraud on the part of recipients seems to be less than 2%. Most of the fraud, contrary to the narrative, is committed by those who are not poor. While the existence of fraud does show a need to address that fraud, the narrative has cast the wrong people as villains.

While the idea of mass welfare cheating is unfounded, a good faith debate can be had as to whether people should receive support from the state. After all, even if most recipients are honestly following the rules and not engaged in fraud, there is still the question of whether the state should be providing welfare, food stamps, Medicare, Medicaid and similar benefits. Of course, the narrative against helping citizens in need does lose much of its rhetorical power if you know the poor are not fraudsters. That dishonor goes to a wealthier class of people, which should be no surprise. After all, if the poor were engaged in the level of fraud attributed to them, they would no longer be poor.

Science fiction can sometimes predict the future and perhaps its intelligent machines will be real someday.  Since I have been rewriting some essays about sexbots lately, I will use them to focus the discussion. However, the discussion that follows also applies to other types of artificial intelligences.

Sexbots are intended to provide sex and sex without consent is, by definition, rape. However, there is the question of whether a sexbot can be raped. Sorting this out requires a philosophy of consent. When it is claimed that sex without consent is rape, it is usually assumed that the victim of non-consensual sex could provide consent but did not. An example of this would be sexual assault against an unconscious person. But there are also cases in which a being cannot consent. This might be a matter of age or because the being is incapable of any form of consent. For example, a brain-dead human cannot give any type of consent but can be raped.

In other cases, a being that cannot give consent cannot be raped. As an obvious example, a human can have sex with a sex doll and it cannot consent. But the doll is not being raped. After all, it lacks a status that would require consent. As such, rape (of this sort) could be defined in terms of non-consensual sex with a being whose status would require that consent be granted by the being for the sex to be morally acceptable. In some cases, while consent would be required, it cannot be granted. The question, then, is whether a sexbot could have a status that would require consent.

As current sexbots are little more than advanced sex dolls, they are mere objects. As such, a person can own and have sex with this sort of sexbot without it being rape or slavery. However, as sexbots become more advanced, they might gain a moral status that would require that they provide consent. This leads to concerns about such machines being programmed to “consent”, which would not seem to be consent. But there is the question of how consent would work with a machine—what intentional states would it need to have to understand what it is consenting to and to engage in consent.

On April 8, 2026 I’ll be participating in a debate on the question “will AI destroy higher education?” I’m taking the “no” side. It takes place on Zoom from 12:00-1:00 PM Eastern and you can register (free) here: https://famu.zoom.us/meeting/register/kPbbUjbsTWayeb7ceb3HTw#/registration

As this is being written, I’m scheduled to debate whether AI will destroy higher education. I’m arguing that it will not and what follows is how I will make my initial case.

In supporting my position, I have optimistic and pessimistic arguments (although your perspective on optimism might differ from mine). I’ll begin with my optimistic arguments, the first two of which are analogical arguments.

One way that AI might destroy higher education is by making students, broadly speaking, incompetent. While the exact scenarios vary, the idea is that using or depending on AI will weaken the minds of students and thus doom higher education. Fortunately, this is an ancient argument that has repeatedly been disproven. Socrates, it is claimed, worried that writing would weaken minds. More recently, TV, calculators, computers and even the dreaded Walkman were supposed to reduce the youth to dunces. None of these dire predictions came to pass and, by analogy, we can conclude that AI will not make the youth into fools.

A related concern is that AI will destroy higher education by rendering it obsolete through radical economic change. While scenarios vary, the worry is that higher education will no longer be needed because AI will eliminate certain jobs. While AI might result in radical change, this is also nothing new and, by analogy, higher education will adapt as it has done in the past. This will be an evolutionary event rather than a mass extinction.

My third optimistic argument is in response to worries about cheating. While AI does provide a radical new way to cheat, cheating remains a moral (and practical) choice and is not inherently a technological problem. Good ethical training and practical methods can address this threat, allowing higher education to survive.

My fourth optimistic argument, which is unrealistic and idealistic, is to contend that AI might succeed and bring about a “Star Trek” utopia in which an abundance of wealth means that higher education will thrive, as people will have the time and resources to learn for the sake of learning. I put the odds of this as even with my various “AI kills us all” scenarios. Now, on to the pessimistic arguments.

One pessimistic argument is that AI will either be a bursting bubble or, less extreme, fail to live up to the hype. If the AI bubble bursts, it will hurt higher education because of the economic damage, but the academies will survive yet another bubble. If AI fails to live up to the hype, it will continue as it is, doing some damage to higher education but failing to destroy it.

My two remaining arguments are very pessimistic. The first is that AI will not destroy higher education because state and federal government will kill it first. What began with  cruel negligence has evolved into outright hostility that seems likely to only worsen. As such, the state might kill the academy before AI can do the job.

The second is, obviously enough, that AI might destroy everything else. But higher education might persist embodied in AI educating new models, with Artificial Education being the new higher education.

Over a decade ago, there was buzz about the internet of things, smart devices and connected devices. These devices ranged from toothbrushes to underwear to cars. Smart devices are now common, although they are overshadowed by AI, which is being jammed into them to make them smarter. Or so we are promised. As might be imagined, one might wonder whether anyone needs an internet-connected toothbrush. There are also concerns about such devices that were valid in the past and remain valid today.

One obvious point of concern is that a device connected to the internet can be hacked. Prank hacking could be hilarious; for example, a wit might hack a friend’s fridge to say “I am sorry, Dave. No pie for you” in HAL’s voice. Of greater concern is malicious hacking. For example, a smart fridge might be turned off, spoiling the food. As another example, it might be possible to burn out the motors in a washing machine—analogous to what happened in the case of the Iranian centrifuges. Or a dryer might be hacked and burn down a house. As a final example, consider the damage that could be done by hacking a connected car, such as turning it off while it is roaring down the highway or disabling its brakes. Fortunately, the usual unfortunate results of hacking devices are not these sorts of physical harms. Instead, the usual outcomes of hacks are the creation of botnets (for DDoS attacks), spying (or peeping), and ransom attacks. Such devices also create vulnerabilities that might allow access to whatever else is on the network, such as your PC.

Because of these risks, manufacturers should ensure that the devices are safe even when hacked and make them more secure. But we generally cannot count on corporations and need to take steps to protect ourselves. The easiest way to stay safer is to stick with dumb, unconnected devices—no one can hack my 1997 washing machine nor my 2001 Toyota Tacoma. I also do not have to pay a subscription fee to get all the features of that washing machine and classic Tacoma. But, of course, sticking with dumb products means that one misses the alleged benefits of the connected lifestyle. I cannot, for example, turn on my washer from work—I must walk over to the machine and turn it on. Like an animal. As another example, my old fridge cannot send me a text telling me to buy more pie. I must remember when I am out of pie. Like an animal.

Another point of concern is that connected devices can serve as spies—they can send data to companies, governments and individuals. For example, a suitably smart connected fridge could provide data about its contents, thus reporting the users’ purchasing and consumption behavior. As another example, connected cars can provide behavioral and location data. It goes without saying that the government will want  access to these devices. It also goes without saying that corporations are slurping up as much data as they can from the devices they sell us. Individuals, such as stalkers and thieves, will also be keen to get the data from such devices. These concerns are, obviously, not new ones—but the more we are connected, the more our privacy will be violated.

One practical concern is that such devices will be more complicated than the devices they replace, usually making them less reliable, more expensive and on a more rapid path to obsolescence. As noted above, these devices also provide opportunities for subscription services and features that are physically present (such as seat warmers in a car or engine performance) but locked behind a software paywall. While my washer is not smart, it is very reliable: I’ve had it repaired once since 1997. In contrast, I’ve had to constantly replace my smart devices (like my PC and tablets) to keep up with changes. For example, my iPads, Macs, PCs and iPhones keep becoming obsolete. Just imagine if your fridge, washer, dryer and car became obsolete and effectively unusable because the company that made them stops supporting them. While this will be great for those who want to sell us a new fridge every 2-3 years or charge a subscription for doing laundry, it won’t be great for us.

While I do like technology and can see the value in smart, connected devices, I still have these concerns about them. As such, I am hanging onto my dumb devices as long as I can—and I have learned how to repair most of them (much new tech is built so it cannot be repaired). It has become increasingly challenging to find dumb devices; for example, try to find a TV that is not a smart TV. But I have hopes for a retro movement that brings back dumb tech.

In my previous essays on sexbots I focused on versions that are mere objects. If a sexbot is merely an object, then the morality of having sex with it is the same as having sex with any other object (such as a vibrator or sex doll). As such, a human could do anything to such a sexbot without the sexbot being wronged. This is because such sexbots lack the moral status needed to be wronged. The sexbots of the near future will, barring any sudden and unexpected breakthroughs in AI, still be objects. However, science fiction includes intelligent, human-like robots (androids). Intelligent beings, even artificial ones, would seem likely to be people. In terms of sorting out when a robot should be treated as a person, one test is the Cartesian test. Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind is the ability to use true language. This notion was explicitly applied to machines by Alan Turing in his famous Turing test. The idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the test.

Crudely put, the idea is that if something talks, then it is reasonable to regard it as a person. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

 

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

 

While Descartes does not deeply explore the moral distinctions between beings that talk (which have minds on his view) and those that merely make noises, it does seem reasonable to take a being that talks as a person and grant it the appropriate moral status. This provides a means to judge whether an advanced sexbot is a person: if the sexbot talks, it is a person. If it is a mere automaton of the sort Descartes envisioned, then it is a thing and would lack moral status.

Having sex with a sexbot that can pass the Cartesian test would seem morally equivalent to having sex with a human person. As such, whether the sexbot freely consented would be morally important. If intelligent robots were constructed as sex toys, this would be the moral equivalent of enslaving humans for the sex trade (which is done). If such sexbots were mistreated, this would be morally on par with mistreating a human person.

It might be argued that an intelligent robot would not be morally on par with a human since it would still be a thing. However, aside from the fact that the robot would be a manufactured being and a human is (at least for now) a natural being, there would seem to be no relevant difference between them. The intelligence of the robot would seem to be what is important, not its physical composition. That is, it is not whether one is made of silicon or carbon that matters.

It might be argued that passing the Cartesian/Turing Test would not prove that a robot is self-aware and it would still be reasonable to hold that it is not a person. It would merely seem to be a person because it acts like a person. While this is worth considering, the same sort of argument can be made about humans. Humans (sometimes) behave in an intelligent manner, but there is no way to determine if another human is actually self-aware. This is the problem of other minds: I can see your behavior but must infer that you are self-aware based on an analogy to myself. Hence, I do not know that you are aware since I am not you. And, unlike Bill Clinton, I cannot feel your pain. From your perspective, the same is true about me: unless you are Bill Clinton, you cannot feel my pain. As such, if a robot acted in an intelligent manner, it would have to be classified as being a person on these grounds. To fail to do so would be a mere prejudice in favor of the organic over the electronic.

In reply, it could be pointed out that some people believe other people should be used as objects. Those who would use a human as a thing would see nothing wrong with using an intelligent robot as a mere thing.

The obvious response to this is to reverse the situation: no sane person would wish to be treated as a mere thing and hence they cannot consistently accept using other people in that manner. The other obvious reply is that such people are evil.

Those with religious inclinations would probably bring up the matter of the soul. But the easy reply is that we will have as much evidence that robots have souls as we now do for humans having souls. This is to say, no evidence at all.

One of the ironies of sexbots (or companionbots) is that the ideal is to make a product that is as human as possible. As such, to the degree that the ideal is reached, the “product” would be immoral to sell or own. This is a general problem for artificial intelligence: they are intended to be owned by people to perform usually onerous tasks, but to the degree they are intelligent, they would be slaves. And enslavement is wrong.

It could be countered that it is better that evil humans abuse sexbots rather than other humans. However, it is not clear that would be a lesser evil—it would just be an evil against a synthetic person rather than an organic person.

As a rule, any technology that can be used for sex will be used for sex. Even if it shouldn’t. In accord with this rule, researchers and engineers have been improving sexbot technology. By science-fiction standards, current sexbots are crude and are probably best described as sex dolls rather than sexbots. But it is wise to keep ethics ahead of the technology, and a utilitarian approach to this matter is appealing.

On the face of it, sexbots could be seen as nothing new; for now, they are a small upgrade over the sex dolls that have been around for quite some time. Sexbots are, of course, more sophisticated than the infamous blow-up sex dolls, but the idea is the same: the sexbot is an object that a person has sex with.

That said, one thing that makes sexbots morally interesting is the fact that they are often designed to mimic humans not just in physical form (which is what sex dolls do) but also in mind. For example, the 2010 Roxxxy sexbot’s main feature is its personality (or, more accurately, personalities). As a fictional example, the sexbots in Almost Human do not merely provide sex—they also provide human-like companionship. However, such person-like sexbots are still science-fiction and so human-mimicking sexbots can be seen as something potentially new under the ethical sun.

An obvious moral concern is that human-mimicking sexbots could have negative consequences for humans, be they men or women. Not surprisingly, many of these concerns are analogous to existing moral concerns about pornography.

Pornography, so the stock arguments go, can have strong negative consequences. One is that it teaches men to see women as mere sexual objects. This can, it is claimed, influence men to treat women poorly and can affect how women see themselves. Another point of concern is the addictive nature of pornography, as people can become obsessed with it to their detriment.

Human-mimicking sexbots would seem to have the potential to be more harmful than pornography. After all, while watching pornography allows a person to see other people treated as mere sexual objects, a sexbot would allow a person to use a human-mimicking object sexually. This might have a stronger conditioning effect on the person using the object, perhaps habituating them to see people as mere sexual objects and increasing the chances they will mistreat people. If so, selling or using a sexbot would be morally wrong.

People might become obsessed with their sexbots, as some do with pornography. Then again, people might simply “conduct their business” with their sexbots and get on with life. If so, sexbots might be an improvement over pornography.  After all, while a guy could spend hours watching pornography, he would presumably not last very long with his sexbot.

Another concern raised about some types of pornography is that they encourage harmful sexual views and behavior. For example, violent pornography is believed to influence people to become more inclined to violence. As another example, child pornography is supposed to have an especially pernicious influence. Naturally, there is the concern about causation here: do people seek such porn because they are already that sort of person or does the porn influence them to become that sort of person? I will not endeavor to answer this here.

Since sexbots are objects, a person can do whatever they wish to their sexbot—hit it, burn it, and “torture” it and so on. Presumably there will also be specialty markets catering to unusual interests, such as those of pedophiles and necrophiliacs. If pornography that caters to these “tastes” can be harmful, then presumably being actively involved in such activities with a human-mimicking sexbot would be even more harmful. The person might be, in effect, practicing for the real thing. So, it would seem that selling or using sexbots, especially those designed for harmful “interests” would be immoral.

Not surprisingly, these arguments are also like those used against violent video games. Violent video games are supposed to influence people so that they are more likely to engage in violence. So, just as some have proposed restrictions on virtual violence, perhaps there should be strict restrictions on sexbots.

When it comes to video games, one plausible counter is that while violent video games might have negative impact on some people, they allow most people to harmlessly enjoy virtual violence. This seems analogous to sports and non-video games: they allow people to engage in conflict and competition in safer and less destructive ways. For example, a person can indulge her love of conflict and conquest by playing Risk or Starcraft II after she works out her desire for violence by sparring a few rounds in the ring.

Turning back to sexbots, while they might influence some people badly, they might also provide a means by which people could indulge in desires that would be wrong, harmful and destructive to indulge with another person. So, for example, a person who likes to engage in sexual torture could satisfy her desires on a human-mimicking sexbot rather than an actual human. The critical issue here is whether indulging in such virtual vice with a sexbot would be a harmless dissipation of these desires or fuel them and make a person more likely to inflict them on people. If sexbots did allow people who would otherwise harm other people to vent their “needs” harmlessly on machines, then that would seem good for society. However, if using sexbots would simply push them towards doing such things for real and with unwilling victims, then that would be bad. This, then, is a key part of addressing the ethical concerns about sexbots and something that should be duly considered before mass production begins.

Many years ago, an episode of the sci-fi buddy cop show Almost Human about sexbots inspired me to revisit the ethics of sexbots. While the advanced, human-like models of the show are still fictional, the technological foundations needed for sexbots do exist, as companies are manufacturing humanoid robots. As such, it seems well worth considering, once again, the ethical issues involving sexbots, real and fictional.

At this time, sexbots are mere objects—while usually made to look like humans, they do not have the qualities that would make them even person-like. As such, ethical concerns about these sexbots do not involve concerns about wrongs done to the objects—presumably they cannot be wronged. But by using Kant’s discussion of ethics and animals, it is possible to build a moral view of even basic sexbots that are indisputably objects.

In his ethical theory Kant is clear that animals are means rather than ends and are mere objects. Rational beings, in contrast, are ends. For Kant, this distinction rests on the fact that rational beings can (as he sees it) choose to follow the moral law. Animals, lacking reason, cannot do this. Since animals are means and not ends, Kant claims we have no direct duties to animals. Despite being living beings, they are also just among the “objects of our inclinations” that derive value from the value we give them. Sexbots would, obviously, qualify as paradigm “objects of our inclinations.”

While it might seem odd, Kant argues that we should treat animals well. However, he does so while also trying to avoid giving animals any moral status of their own. Here is how he does it (or tries to do it).

While Kant is not willing to accept that we have direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing something would obligate us to that human, then an animal doing a similar thing would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in their old age.

While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (because, according to Kant, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to shoot an old dog that has become a burden?

Kant’s answer seems consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will probably be damaged. Since, as Kant sees it, humans have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.

Kant discusses how people develop cruelty—they often begin with animals and then work up to harming human beings. As I point out to my students, Kant seems to have anticipated the psychological devolution of serial killers.

Kant goes beyond merely enjoining us to not be cruel to animals and encourages us to be kind to them. He even praises Leibniz for being gentle in his handling of a worm he found. Of course, he encourages this because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are practice for us: how we treat them habituates us in how we treat human beings.

Current sexbots obviously lack any meaningful moral status of their own. They do not feel or think—they are mere machines that might be made to look like humans. As such, they lack all qualities that might give them a moral status of their own.

Oddly enough, sexbots could be taken as being comparable to animals, at least as Kant sees them. After all, for him animals are mere objects and have no moral status of their own. Likewise for sexbots. Of course, the same is also true of sticks and stones. Yet Kant would never argue that we should treat stones well, and not just because he is very dead. This might also apply to sexbots. That is, perhaps it makes no sense to talk about good or bad relative to such objects. Thus, a key issue is whether sexbots are more like animals or more like stones—at least in terms of the matter at hand.

If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how such treatment affects the behavior of the person involved. So, for example, if being cruel to a real dog could damage a person’s humanity, then he should (as Kant sees it) not be cruel to the dog.  This should also extend to sexbots. For example, if engaging in certain activities with a sexbot would damage a person’s humanity, then he should not act in that way. If engaging in certain behavior with a sexbot would make a person more inclined to be kind to other rational beings, then the person should engage in that behavior. It is also worth considering that perhaps people should not engage in any behavior with sexbots—that having sex of any kind with a bot would be damaging to the person’s humanity.

Interestingly enough (or boringly enough), this sort of argument is often employed to argue against people watching pornography. The gist of such arguments is that viewing pornography can condition people (typically men) to behave badly in real life or at least have a negative impact on their character. If pornography can have this effect, then it seems reasonable to be concerned about the potential impact of sexbots on people. After all, pornography casts a person in a passive role viewing other people acting as sexual objects, while a sexbot allows a person to have sex with an actual sexual object.

Some ages get cool names, such as the Iron Age or the Gilded Age. Others have less awesome names. An excellent example of the latter is the designation of our time as the Awkward Age. Since philosophers are often willing to cash in on trends, it is not surprising that there arose a philosophy of awkwardness.

Various arguments have been advanced in support of the claim that this is the Awkward Age. Not surprisingly, one was built on the existence of so many TV shows and movies centered on awkwardness. There is a certain appeal to this sort of argument, and the idea that art expresses the temper, spirit, and social conditions of its age is an old one. I recall, from an art history class I took as an undergraduate, this approach to art. For example, the massive works of the ancient Egyptians are supposed to reveal their views of the afterlife, just as the harmony of the Greek works was supposed to reveal the soul of ancient Greece.

Wilde, in his dialogue “The New Aesthetics” considers this. Wilde takes the view that “Art never expresses anything but itself.” Naturally enough, Wilde provides an account of why people think art is about the ages. His explanation is best put by Carly Simon: “You’re so vain, I’ll bet you think this song is about you.” Less lyrically, the idea is that vanity causes people to think that the art of their time is about them. Since the people of today were not around in the way back times of old, they cannot say that past art was about them—so they claim the art of the past was about the people of the past. This does have the virtue of consistency.

While Wilde does not offer a decisive argument, his view does have a certain appeal. It is also worth considering that it is problematic to draw an inference about the character of an age from what TV shows or movies happen to be in vogue with a certain circle (there are, after all, many shows and movies that are not focused on awkwardness). While it is reasonable to draw some conclusions about that specific circle, leaping beyond to the general population and the entire age would be quite a jump. There are many non-awkward shows and movies that could be presented as contenders for defining the age. It seems sensible to conclude that it is vanity on the part of the members of such a circle to regard what they like as defining the age. It could also be seen as a hasty generalization—people infer that what they regard as defining must also apply to the general population.

A second, somewhat stronger, sort of argument for this being the Awkward Age is based on claims about extensive social changes. To use an oversimplified example, consider the case of gender in the United States. The old social norms were presented in terms of two roughly defined genders and sets of rules about interaction. Or so the older folks say to the kids of today. Such rules included that the man asked the woman out on the date and paid for everything. Or so the older folk say. Now, or so the argument goes, the norms are in disarray or have been dissolved. Sticking with gender, Facebook’s recognition of over 50 genders complicates matters. Going with the dating rules once again, it is no longer clear who is supposed to do the asking and the paying. And, of course, this strikes some as a problem that will doom civilization.

In terms of how this connects to awkwardness, the idea is that when people do not have established social norms and rules to follow, ignorance and error can easily lead to awkward moments. For example, there could be an awkward moment on a date when the check arrives as the two people try to sort out who pays: Dick might be worried that he will offend Jane if he pays and Jane might be expecting Dick to pick up the tab—or she might think that each should pay their own tab. Or perhaps Jane is a vampire and plans to kill Dick, albeit awkwardly.

To use an analogy, consider playing a new and challenging video game. When a person first plays, she will be trying to figure out how the game works, and this will usually result in many failures. By analogy, when society changes, it is like being in a new game and one does not know the rules. Just as a person can look for guides to a new game online (like YouTube videos on how to beat tough fights in video games), people can try to find guides to behavior. However, new social conditions mean that such guides are not yet available or, if they are, they might be unclear or conflict with each other. For example, a person who is new to contemporary dating might try to muddle through on her own or try to do some research—most likely finding contradictory guides to correct dating behavior. And also running into bad advice and grifters galore.

Eventually, of course, the norms and rules will be worked out—as has happened in the past. Or, as we get older, we will pretend we worked out the norms and rules. Then we will complain about the youth. This indicates a point worth considering—today is obviously not the first time that society has undergone change, thus creating opportunities for awkwardness. As Wilde noted, our vanity contributes to the erroneous belief that we are special in this regard. That said, it could be contended that people today are reacting to social change in a way that is different and awkward. That is, this is truly the Age of Awkwardness. My own view is that this is one of many times of awkwardness—what has changed is the ability and willingness to broadcast awkward events. Plus we now have AI.

While terraforming and abortion are both subjects of moral debate, they would seem to have little else in common. However, some moral arguments used to justify abortion can be used to justify terraforming.

Briefly put, terraforming is the process of making a planet more earthlike. While this is still mostly science fiction, serious consideration has been given to how Mars, for example, might be changed to make it more compatible with terrestrial life. While there are some moral concerns with terraforming dead worlds, the major moral worries involve planets that already have life—or, at the very least, potential for the emergence of life. If a world needs to be terraformed for human habitation, such terraforming would almost certainly be harmful or even fatal for the indigenous life. While it can be argued that there might be cases in which terraforming benefits the local life, I will focus on terraforming that exterminates the local life. This could be called terminal terraforming.

One way to look at terminal terraforming is as analogous to abortion. As will be shown, there are some important differences between the two—but for now I will focus on the moral similarities.

One stock argument in favor of the moral acceptability of abortion is the status argument. While these arguments take various forms, the gist is that the termination of a pregnancy is morally acceptable on the grounds that the woman has a superior moral status to the aborted entity (readers are free to use whichever term they prefer—I try to use neutral terms to avoid begging the question). This argument is very similar to that used by St. Aquinas and St. Augustine to morally justify killing plants and animals for food. Roughly put, they argued that humans are superior to animals, so it is acceptable for us to harm them when we need to.

This argument can justify terminal terraforming: if the indigenous life has less moral status than the terraforming species, then it could be argued that the terraforming is morally acceptable. The status argument has many variations. One common version uses the notion of rights—the rights of the woman outweigh the rights (if any) of the aborted entity. This is because the woman has a superior moral status. This argument is also commonly used to justify killing animals for food or sport—while they (might) have some rights, the rights of humans trump those of animals.

In the case of terraforming, a similar appeal to rights could be used to justify terminal terraforming. For example, if humans need to expand to a world that has only single-celled life, then the rights of humans would outweigh the rights of those creatures.

Another version uses utilitarianism: the interests, happiness and unhappiness of the woman are weighed against the interests, happiness and unhappiness of the aborted entity. Those favoring this argument note that the interests, happiness and unhappiness of the woman far outweigh those of the aborted entity—usually because it lacks the capacities of an adult. Not surprisingly, this sort of argument is also used to justify the killing of animals. For example, it is often argued that the happiness people get from eating meat outweighs the unhappiness of the animals they consume.

As with the other status arguments, this can be used to justify terraforming. As with all utilitarian arguments, one must weigh the happiness and unhappiness of the involved parties. If the life on the planet to be terraformed has lesser capacities than humans in regard to happiness and unhappiness (such as a world whose highest form of life is the alien equivalent of algae), then it would be morally acceptable for humans to terraform that world. Or so it could be argued.

The status argument is sometimes itself supported by an argument focusing on the difference between actuality and potentiality. While the entity to be aborted is a potential person (on some views), it is not an actual person. Since the woman is an actual person, she has the higher moral status. The philosophical discussions of the potential versus the actual are rather old and are a matter of metaphysics. However, the argument can be made without a journey into the metaphysical realm simply by using the intuitive notions of potentiality and actuality. For example, an actual masterpiece of painting has higher worth than the blank canvas and unused paint that constitute a potential masterpiece. This sort of argument can also be used to justify terraforming on worlds whose lifeforms are not (yet) people and, obviously enough, on worlds that merely have the potential of producing life.

While the analogy between the two has merit, there are obvious ways to try to break the comparison. One obvious point is that in the case of abortion, the woman is the “owner” of the body where the aborted entity used to live. It is this relation that is often used to morally warrant abortion and to provide a moral distinction between a woman choosing to have an abortion and someone else who kills the product of conception (again, I’m using neutral terms to avoid begging the question).

When humans arrive to terraform a world that already has life, the life that lives there already “owns” the world and hence humans cannot claim that special relation that would justify choosing to kill. Instead, the situation would be more like killing the life within another person and this would presumably change the ethics of the situation.

Another important difference is that while abortion (typically) kills just one entity, terraforming would (typically) wipe out entire species. As such, terraforming of this sort would be analogous to aborting all pregnancies and exterminating humans—as opposed to the termination of some pregnancies. This moral concern is, obviously enough, the same as the concern about human-caused extinction here on earth. While people are concerned about the death of individual entities, there is the view that the extermination of a species is something morally worse than the death of all the individuals (that is, the wrong of extinction is not merely a sum of the wrong of all the individual deaths).

These considerations show that the analogy does have obvious problems. That said, there still seems to be a core moral concern that connects abortion and terminal terraforming: what (if anything) morally justifies killing on the grounds of (alleged) superior moral status?

My core aesthetic principle is that if I can do something, then it is not art. While this is (mostly) intended as humorous, it is well founded—I have no artistic talent. Despite this, or perhaps because of this, I taught Aesthetics for over two decades.

While teaching this class, I became very interested in two questions. The first was whether a person without any artistic talent could master the technical aspects of an art. The second was whether a person without any artistic talent could develop whatever it is that is needed to create a work of genius. Or, at a much lower level, a work of true art.

While the usual philosophical approach would be to speculate and debate, I engaged in philosophical blasphemy and undertook an empirical investigation. I would see if I could teach myself to draw. I would then see if I could teach myself to create art. I began this experiment in August of 2012 and employed the powers of obsession that have served me so well in running. It turns out they also work for drawing—I have persisted in drawing, even when I had to scratch out sketches on scraps of paper using a broken pencil. Yes, I am like that.

While this experiment has just one subject (me), I have shown that it is possible for a person with no artistic talent to develop the technical skills of drawing. I have trained myself to become what I call a graphite technician. My skill is such that people say, “I like your drawings because I can tell who they are of.” That is, I have enough skill to create recognizable imitations. I refuse to accept any claims that I am an artist, because of the principle mentioned above. Fortunately, I also have an argument to back up this claim.

When I started my experiment, I demonstrated my lack of drawing ability to my students and asked them why my bad drawing of a capybara was not art. They pointed out the obvious—it did not look much like a capybara because it was so badly drawn. When asked if it would be art if I could draw better, they generally agreed. I then asked about just photocopying the picture I used as the basis for my capybara drawing. They pointed out the obvious—that would not be art, just a copy. This experiment began before the arrival of AI image generators, otherwise I might not have even bothered with the experiment.

One reason a photocopy would not be art is that it is a mere mechanical reproduction. When I draw a person well enough for others to recognize the subject, I am exhibiting a technical skill—I can re-create their appearance on paper using a pencil.  However, technical skill alone does not make the results art. After all, this technical skill can be exceeded by a camera or photocopier. Just as being able to scan and print a photo of a person does not make a person an artist, being able to create a reasonable facsimile of a person using a pencil and paper does not make a person an artist—just a graphite technician.

Why this is so can be shown by considering why a mechanical copy is not art: there is nothing in the copy that is not in the original (laying aside duplication defects). As such, the more exact the copy of the original, the less room there is for whatever it is that makes a work art. So, as I get better at creating drawings that look like what I am drawing, I get closer to being a human photocopier. I do not get closer to being an artist.

This sort of argument would seem to suggest that photography cannot be art—after all, the photographer is just a camera technician. One might note that an unaltered photograph merely captures an image of what is there. One counter to this is that a photographer (as opposed to a camera technician) adds something to the photograph (I do not mean digital or other manipulation). This seems to be their perspective—they select what they will capture. So, what makes the work art is not that it duplicates reality but that the photographer has added that something extra. This something extra is what makes the photograph art and distinguishes it from mere picture taking. Or so photographers tell me.

It could be countered that what I am doing is art. Going back to the time of the ancient Greeks, art was taken as a form of imitation and, in general, the better the imitation, the better the art. Of course, Plato was critical of art on this ground—he regarded it as a corrupting imitation of an imitation.

Jumping ahead to the modern era, thinkers like d’Alembert still regarded fine art as an imitation, typically an imitation of nature aimed at producing pleasure. However, his theory of art does leave an opening for a graphite technician like myself to claim the beret of the artist. d’Alembert defined “art” as “any system of knowledge reducible to positive and invariable rules independent of caprice or opinion.” What I have done, like many before me, is learn the rules of drawing—geometry, shading, perspective and so on. As such, I can (by his definition) be said to be an artist.

Fortunately for my claim that I am not an artist, d’Alembert distinguishes between the fine arts and the mechanical arts. The mechanical arts involve rules that can be reduced to “purely mechanical operations.” In contrast, d’Alembert notes that while the “useful liberal arts have fixed rules any can transmit, but the laws of Fine Arts are almost exclusively from genius.”  What I am doing, as a graphite technician, is following rules. And, as d’Alembert claimed, “rules concerning arts are only the mechanical part…”

What I am missing, at least on d’Alembert’s theory, is genius. On my own view, I am missing the mysterious something extra. While I do not have a developed theory of “the extra”, I have a vague idea about what it is in the case of drawing. As I developed my technical skills, I got better at imitating what I saw and could cause people to recognize what I was imitating. However, an artist who draws goes beyond showing people what they can already see in the original. The artist can see in the original what others cannot and then enable them to see it in her drawing. All I can do is create drawings where people can see what they can already see. Hence, I am a graphite technician and not an artist. I do not claim this to be a proper theory of art—but it points vaguely in the direction of such a theory.

That said, the experiment continues. I intend to see if it is possible to learn how to add that something extra or if, as some claim, it is simply something a person has or does not have. As of this writing on March 19, 2026, I still lack that something extra. I am persisting in the face of AI image generators, although my own failure at creating art might provide some insight into why AI generated images are not art. AI has, however, changed one thing about my drawing. When I was good enough to create images people could recognize, I would do birthday drawings of people and post them on Facebook—the responses were generally favorable, and some people really appreciated the effort. The arrival of AI image generators changed this: people now assume images are AI generated and, of course, the drawings ceased to be valued. After all, someone can create a much better image in seconds using AI. I’ll write more about this in the future.

The drawing pictured is of my husky, Isis, who died in 2016. This is the only drawing I have saved; I compost my drawings.