A basic moral challenge is sorting out how people should be treated. This is often formulated in terms of obligations to others, and the usual question is “what, if anything, do we owe other people?” While some would like to exclude economics from ethics, the burden of proof rests on those claiming the realm of money deserves exemption from morality. While such a case might be attempted, I will assume that economic matters fall under morality. But there are many approaches to morality.

While I use virtue theory as my personal ethics, I find aspects of Kant’s ethical theory appealing, so let us see what Kant’s theory might entail for economic justice. In terms of how we should treat others, Kant takes as foundational that “rational nature exists as an end in itself.”

Kant supports his view by asserting that “a man necessarily conceives his own existence as such” and this applies to all rational beings. A rational being sees itself as being an end, rather than a thing to be used as a means to an end. In my own case, I see myself as a person who is an end and not as a thing that exists to serve the ends of others. But some other people might see me differently.

Of course, the fact that I see myself as an end would not seem to require that I extend this to other rational beings (that is, other people). After all, I could see myself as an end and regard others as means to my ends—to be used for my profit as, for example, underpaid workers.

However, Kant claims that I must regard other rational beings as ends as well. The reason is straightforward and is based on an appeal to consistency: if I am an end rather than a means because I am a rational being, then consistency requires I accept that other rational beings are ends. After all, if being a rational being makes me an end, it would do the same for others. Naturally, it could be argued that there is a relevant difference between myself and other rational beings that would warrant me treating them as means and not as ends. People have, obviously enough, long endeavored to justify treating other people as things. Slavery in America provides an example of this, as do many modern economic practices. However, there seems to be no principled way to insist on my own status as an end while denying the same to other rational beings. Which, one might suspect, is why some people wish to claim that other people are not rational beings. Or are otherwise inferior in some way that makes them suitable as means.

From his view of rational nature, Kant derives his practical imperative: “so act as to treat humanity, whether in thine own person or in that of any other, in every case as an end withal, never as means only.” This imperative does not mean that I must never treat a person as a means—that is allowed, provided I do not treat the person as a means only. So, for example, I would be morally forbidden from using people as mere means of revenue. I would, however, not be forbidden from having someone ring up my purchases at the grocery store—provided I treated the person as a person and not a mere means. One obvious challenge is sorting out what it is to treat a person as an end as opposed to just a means to an end. Some cases are obvious, such as enslaving another person. Other cases are more complex, such as hiring a person as a worker.

Many economic relationships seem to clearly violate Kant’s imperative in that they treat people as mere means and not at all as ends. To use an obvious example, if an employer treats her employees merely as means to profit and does not treat them as ends in themselves, then she is acting immorally by Kant’s standard. After all, being an employee does not rob a person of personhood.

One obvious reply is to question my starting assumption, namely that economics is not exempt from ethics. It could be argued that the relationship between employer and employee is purely economic and only economic considerations matter. That is, the workers are to be regarded as means to profit and treated in accord with this—even if doing so means treating them as things rather than people. The challenge is to show that the economic realm grants a special exemption to ethics. Of course, if it does this, then the exemption would be a general one. So, for example, people who decided to take money from the rich at gunpoint would be exempt from ethics as well. After all, if everyone is a means in economics, then the rich are just as much a means as employees and if economic coercion against people is acceptable, then so too is coercion via firearms. As always, the challenge the rich face in ethics is justifying their economic misdeeds while simultaneously condemning similar actions by the poor.

Another reply is to contend that might makes right. That is, the employer has the power and owes nothing to the employees beyond what they can force him to provide. This would make economics like the state of nature—where, as Hobbes said, “profit is the measure of right.” Of course, this leads to the same problem as the previous reply: if economics is a matter of might making right, then workers have the same right to use might against employers and the poor to use it against the rich.

One reason sometimes given for expanding health care coverage is that people with health insurance are less likely to use the emergency room for treatment. This is because someone with health insurance will be more likely to use primary care and less likely to need emergency room treatment. It also makes sense that a person with insurance would get more preventative care and thus be less likely to need a trip to the emergency room.

On the face of it, reducing emergency room visits would be good. One reason is that emergency room care is expensive and reducing it would save money. Another reason is that the emergency room should be for emergencies—reducing the number of visits can help free up resources and reduce waiting times.

So, it would seem that extending insurance coverage would reduce emergency room visits, which is good. However, extending insurance might actually increase emergency room visits. In one seemingly credible study, insurance coverage resulted in more emergency room visits.

One obvious explanation is that the insured would be more likely to use medical services for the same reason that insured motorists are more likely to use the service of mechanics: they are more likely to be able to afford to pay the bills.

On the face of it, this does not seem bad. After all, if people can afford to go to the emergency room because they have insurance, that is better than having people suffer because they lack the means to pay. However, what is most interesting about the study is that the expansion of Medicaid coverage increased emergency room visits for treatments more suitable for a primary care environment. The increase in emergency room use was substantial—about 40%—and the study was large enough for this to be statistically significant.

Because of this, it is worth considering the impact of expanding coverage on emergency rooms. Especially if it is argued that expanding coverage would reduce costs by reducing emergency room visits.

One possibility is that the results of the Medicaid study would hold true in general, so that expansions of health care coverage would result in more emergency room visits. If an expansion of coverage results in a significant increase in emergency room visits, it would not help reduce health care costs, since people would be going to the more expensive emergency room rather than seeking primary care.

But an insurance expansion might not cause significantly more non-necessary emergency room visits. One reason is that private insurance companies seem to try to deter emergency room visits by imposing higher out-of-pocket costs on patients. In contrast, Medicaid did not impose this higher cost. Thus, those with private insurance would tend to have a financial incentive to avoid the emergency room while those on Medicaid would not, unless there was an increased cost for the Medicaid patient. While it might seem wrong to impose a penalty for going to the emergency room, one method of channeling patients towards non-emergency treatment is to impose a financial penalty for emergency room visits for services that could be provided by primary care facilities. One moral concern with imposing such penalties is that some forms of care are only available through emergency rooms. For example, when I had to get rabies shots in 2023, the only option was the emergency room. But it could be replied that such treatments are unusual, hence the penalty would not affect many people.

People might use emergency rooms instead of primary care because they do not know their options. If so, then better educating people about their medical options would make them more likely to choose options other than the emergency room when they do not need emergency services. Given that the emergency room is stressful and involves a long wait (especially for non-emergencies), people would probably elect primary care when they know they have that option. This is not to say education will be a cure-all, but it is likely to help reduce unnecessary emergency room visits. Which is certainly a worthwhile objective.

While running through Florida State University way back in December 2013, I noticed that the campus had been plastered with signs announcing that on January 1, 2014, the entire campus would be tobacco free. I was impressed by the extent of the plastering—there were plastic signs adhered to the sidewalks and many surfaces to ensure that all knew of the decree. Naturally, one of the people I saw placing the signs was smoking while doing so.

While running sometimes causes flashbacks, those signs flashed me back to my freshman English class at Marietta College. In one essay, I argued for anti-smoking proposals, including some that were draconian. Apparently possessing the power of prophecy, I argued for area bans on smoking. My motivation was somewhat selfish: I hate the smell of tobacco smoke, and it causes my eyelids to swell and makes it hard to breathe. As such, like a properly political person, I thought it good and just to recast the world according to my desires and beliefs.

I thought the paper was well argued and rational. However, the professor (an avowed liberal) assigned it a grade of 0.62. She also put a frowning face on it. And she called me a fascist. Interestingly, almost everything I proposed in the paper has come to pass (the campus-wide ban being the latest). On the one hand, I do feel vindicated—if only because of my prophetic powers. On the other hand, I wobbled a bit between anarchism and authoritarianism in those days and that paper was clearly written during an authoritarian swing. Back in 2014 I reconsidered the ethics of the smoking ban and now that Florida campuses have been smoke-free for 12 years, I have decided to revisit the issue.

While there are various ways to warrant area bans on certain behavior, three common justifications are that the behavior is unpleasant, offensive or harmful (or some combination of the three). One approach is to ban behavior based on its impact on the rights of others. That is, if the behavior is unpleasant, offensive or harmful to others, it violates their right to not be exposed to such behavior.

While I have no desire to observe behavior that is unpleasant, I do not have a right to not be exposed to the merely unpleasant. After all, what is unpleasant is subjective and area bans on the merely unpleasant would result in absurdity. For example, I would find someone wearing a vomit green sweater with neon pink goats unpleasant to view, but an area ban on unpleasant fashion would be absurd. The merely unpleasant does not impose enough on others to warrant banning it (provided that the unpleasantness does not cross over into harm). So, the mere fact that many people find smoking unpleasant does not warrant an area ban on it.

While I have no desire to be exposed to behavior I find offensive, I do not have a right to not be exposed to what is merely offensive. Even the very offensive. While what is offensive might be less subjective than the unpleasant, it still is subjective. As such, as with the merely unpleasant, an area ban on merely offensive behavior would lead to absurdity. For example, if the neon goats on the sweater mentioned above spelled out the words “philosophers are goat f@ckers”, I would find the sweater unpleasant and offensive. However, the merely offensive does not impose enough on my rights to warrant imposing on the rights of the offender. Naturally, offensive behavior can cross over into a violation of rights and that would warrant imposing on the offender. For example, if the sweater wearer insisted on following me and screaming “goat f@cker” at me while I am trying to teach, then that would go from being merely offensive to harassment. Thus, the fact that many people find smoking offensive would not warrant an area ban on smoking. Interestingly, it would also not warrant bans on public nudity, at least those based on something being offensive.

Like most people, I have no desire to be harmed by the behavior of others and I think I have a right to not be harmed (although there are cases in which I can be justly harmed). For those who prefer not to talk of rights, I am also fine with the idea that it would just be wrong to harm me (at least in most cases). As such, it should be no surprise that I think area bans on behavior that harms others are acceptable. The obvious moral grounds would be Mill’s argument about liberty: what concerns only me and does not harm others is my own business and not their business. But actions that harm others become the business of those that are harmed.

While the basic idea that it is acceptable to limit behavior that harms others is appealing, one challenge is sorting out the sorts of harms that warrant imposing on others. Going back to offensive behavior, it could be claimed that offensive behavior does cause harm. For example, someone might believe that his children would be harmed if they saw an unmarried couple kissing in public and thus claim that this should be banned from all public areas. As another example, a person might contend that seeing people catching fish would damage her emotionally because of the suffering of the fish and thus fishing should be banned from public areas. While these two examples might seem a bit silly, there are grey areas between the offensive and the clearly harmful.

Fortunately, the situation with smoking is clear cut. Tobacco smoke is physically harmful to those who breathe it in (whether they are smoking or not). As such, when someone is smoking in a public area, they are imposing an unchosen health risk on everyone else in the area of effect. Since the area is public, smokers have no right to do this. To use an analogy, while a person has a right to wear the “goat f@cker” sweater mentioned above, they do not have a right to wear one that also constantly sprays poison. To use a less silly analogy, a person in a public area does not have the right to spit on the people around her. While others could avoid this by staying away from her, she has no right to “control” the space around her with something that can harm others (spit can transmit disease). As such, it is morally acceptable to impose an area ban on smoking.

But behavior that does not harm others should not be subject to such bans. For example, drinking alcohol in public. Provided that the person is not engaging in otherwise harmful behavior, there seems to be no compelling moral reason to impose such a ban. After all, drinking a beer near people in public causes them no harm. Likewise, campus dress codes also lack a moral justification—provided that the attire does not inflict harm (like the imaginary poison spraying goat sweater). Merely being offensive or even distracting does not seem enough to warrant an area ban on moral grounds.

Pundits and politicians on the right consistently demonize the poor. For example, Fox News seems to delight in a narrative of the wicked poor destroying America. It is worth considering why the poor are demonized.

One ironic foundation for this is religion. While Jesus regards the poor as blessed and warns of the dangers of idolatry, there is a version of Christianity that sees poverty as a sign of damnation and wealth as an indicator of salvation. As some have pointed out, this view is a perversion of Christianity. Not surprisingly, some people have been criticized by pundits for heeding what Jesus said.

Another reason is that demonizing the poor allows pundits and politicians to redirect anger so that the have-less are angry at the have-nots, rather than those who have almost everything. This is classic scapegoating: the wicked poor are blamed for many of the woes besetting America. The irony is that the poor and powerless are cast as a threat to the rich and powerful.

The approach taken towards the poor follows a classic model used throughout history that involves presenting two distinct narratives about the target of hatred. The first is to create a narrative which presents them as subhuman, wicked, inferior and defective. In the case of the poor, the narrative is that they are stupid, lazy, drug users, criminals, frauds, mockers and so on. This narrative is used to create contempt and hatred and thus to dehumanize the poor. This makes it much easier to get people to think it is morally permissible (even laudable) to treat the poor poorly.

The second narrative is to cast the poor as incredibly dangerous. While they have been cast as inferior by the first narrative, the second presents them as a dire threat. The narrative is that the wicked poor are destroying America by being “takers” from the “makers.” One obvious challenge is crafting a plausible narrative in which the poor and seemingly powerless can somehow destroy the rich and powerful. One solution has been to claim that another group, such as the Democrats or the Jews, is both very powerful (and thus able to destroy America) yet somehow in service to the poor.

On the face of it, a little reflection should expose the absurdity of this narrative. The poor are obviously poor and lack power. After all, if they had power, they would not remain poor. As such, the idea that the poor and powerless have the power to destroy America is absurd. True, the poor could rise up in arms and engage in class warfare in the literal sense of the term—but that is not likely to happen. While the idea that the poor are being served by a wicked group, such as the Democrats, is advanced to “solve” this problem, the wicked group must also be cast as being inferior to the “true” Americans—yet also a powerful threat. This creates another absurdity that its adherents must ignore.

At this point, one might bring up “bread and circuses”—the idea that the poor destroyed the Roman Empire by forcing the rulers to provide them with bread and circuses until the empire fell apart.

There are two obvious replies to this. The first is that even if Rome was wrecked by spending on bread and circuses, it was the leaders who decided to use that approach to appease the masses. If this narrative were true, it would entail that the wealthy and powerful decided to bankrupt the state to stay in power by appeasing the many. Second, the poor who wanted bread and circuses were a symptom rather than the disease. It was not so much that the poor were destroying the empire; rather, it was the destruction of the empire that was increasing the number of poor people.

The same could be said about the United States: while the income gap in the United States is extreme and poverty is high, it is not the poor who are causing the decline of America. Rather, poverty is the result of the decline of the United States. As such, demonizing the poor and blaming them for its woes is like blaming the fever for the disease.

Ironically, demonizing and blaming the poor serves to distract people from the real causes of our woes, such as the deranged financial system, systematic inequality, a rigged market and a political system that is beholden to the 1%. It is, however, a testament to the power of rhetoric that so many seem to accept the absurd idea that the poor and powerless are somehow the victimizers rather than the victims of the rich and powerful.

One political narrative is the tale of the poor defrauding government programs. The (alleged) grifter Donald Trump, for example, claims that the poor commit a lot of fraud. Fox News consistently claims, usually without evidence, that government programs aimed at helping the poor are exploited by the poor. In most cases, the “evidence” presented in support of such claims seems to be that they feel there must be a lot of fraud. However, there is little inclination to look for supporting evidence—if they feel strongly enough that a claim is true, that is good enough for them.

The claim that such aid is fraught with fraud is often used to argue that it should be cut or even eliminated. The idea is that the poor are “takers” who are fraudulently living off the “makers.” While fraud is wrong, it is important to consider some key questions.

The first question is this: what is the actual percentage of fraud that occurs in such programs? While, as noted above, some claim fraud is rampant, the statistical data tells another story. In the case of unemployment insurance, the rate of fraud is estimated to be less than 2%. This is lower than the rate of fraud in the private sector. In the case of welfare, fraud is sometimes reported as being 20%-40% at the state level. However, the “fraud” seems to be mostly errors by bureaucrats rather than fraud committed by the recipients. Naturally, an error rate that high is unacceptable—but it is a different narrative than that of the wicked poor stealing from the taxpayers.

SNAP (food stamp) fraud does occur—but it is mostly committed by businesses rather than the recipients. While there is some fraud on the part of recipients, the best data indicates that such fraud accounts for about 1% of the payments. Given the rate of fraud in the private sector, that is exceptionally good.

Given this data, the overwhelming majority of those who receive assistance are not engaged in fraud. This is not to say that fraud should be ignored—in fact, it is the concern with fraud on the part of recipients that has resulted in such low incidences of fraud. Interestingly, about one third of fraud involving government money involves not the poor but defense contractors, who account for about $100 billion in fraud per year. Medicare and Medicaid combined have about $100 billion in fraudulent expenditures per year. While there is also a narrative of the wicked poor in regard to Medicare and Medicaid, the fraud is usually perpetrated by the providers of health care rather than the recipients. As such, the focus on fraud should shift from the poor recipients of aid to defense contractors and health care providers. It is not the wicked poor who are siphoning away money with fraud; it is the wicked wealthy who are stealing from the rest of us. The narrative of the poor defrauding the state is thus a flawed one: while such fraud does happen, it accounts for less than 2% of payments, and most fraud, contrary to the narrative, is committed by those who are not poor. The existence of fraud does show a need to address it, but the narrative has cast the wrong people as villains.

While the idea of mass welfare cheating is unfounded, a good faith debate can be had as to whether people should receive support from the state. After all, even if most recipients are honestly following the rules and not engaged in fraud, there is still the question of whether the state should be providing welfare, food stamps, Medicare, Medicaid and similar benefits. Of course, the narrative against helping citizens in need does lose much of its rhetorical power if you know the poor are not fraudsters. That dishonor goes to a wealthier class of people, which should be no surprise. After all, if the poor were engaged in the level of fraud attributed to them, they would no longer be poor.

Science fiction can sometimes predict the future and perhaps its intelligent machines will be real someday. Since I have been rewriting some essays about sexbots lately, I will use them to focus the discussion. However, the discussion that follows also applies to other types of artificial intelligences.

Sexbots are intended to provide sex, and sex without consent is, by definition, rape. However, there is the question of whether a sexbot can be raped. Sorting this out requires a philosophy of consent. When it is claimed that sex without consent is rape, it is usually assumed that the victim of non-consensual sex could provide consent but did not. An example of this would be sexual assault against an unconscious person. But there are also cases in which a being cannot consent. This might be a matter of age or because the being is incapable of any form of consent. For example, a brain-dead human cannot give any type of consent but can be raped.

In other cases, a being that cannot give consent cannot be raped. As an obvious example, a human can have sex with a sex doll and it cannot consent. But the doll is not being raped. After all, it lacks a status that would require consent. As such, rape (of this sort) could be defined in terms of non-consensual sex with a being whose status would require that consent be granted by the being for the sex to be morally acceptable. In some cases, while consent would be required, it cannot be granted. So the question would be whether a sexbot could have a status that would require consent.

As current sexbots are little more than advanced sex dolls, they are mere objects. As such, a person can own and have sex with this sort of sexbot without it being rape or slavery. However, as sexbots become more advanced, they might gain a moral status that would require that they provide consent. This leads to concerns about such machines being programmed to “consent,” which would not seem to be genuine consent. There is also the question of how consent would work with a machine: what intentional states would it need to have to understand what it is consenting to and to engage in consenting?

https://famu.zoom.us/meeting/register/kPbbUjbsTWayeb7ceb3HTw#/registration

On April 8, 2026, I’ll be participating in a debate on the question “will AI destroy higher education?” I’m taking the “no” side. It takes place on Zoom from 12:00-1:00 PM Eastern and you can register for free at the link above.

As this is being written, I’m scheduled to debate whether AI will destroy higher education. I’m arguing that it will not and what follows is how I will make my initial case.

In supporting my position, I have optimistic and pessimistic arguments (although your perspective on what counts as optimism might differ from mine). I’ll begin with my optimistic arguments, the first two of which are analogical arguments.

One way that AI might destroy higher education is by making students, broadly speaking, incompetent. While the exact scenarios vary, the idea is that using or depending on AI will weaken the minds of students and thus doom higher education. Fortunately, this is an ancient argument that has repeatedly been disproven. Socrates, it is claimed, worried that writing would weaken minds. More recently, TV, calculators, computers and even the dreaded Walkman were supposed to reduce the youth to dunces. None of these dire predictions came to pass and, by analogy, we can conclude that AI will not make the youth into fools.

A related concern is that AI will destroy higher education by rendering it obsolete through radical economic change. While scenarios vary, the worry is that higher education will no longer be needed because AI will eliminate certain jobs. While AI might result in radical change, this is also nothing new and higher education will adapt, by analogy, as it has done in the past. This will be an evolutionary event rather than a mass extinction.

My third optimistic argument is in response to worries about cheating. While AI does provide a radical new way to cheat, cheating remains a moral (and practical) choice and is not inherently a technological problem. Good ethical training and practical methods can address this threat, allowing higher education to survive.

My fourth optimistic argument, which is unrealistic and idealistic, is to contend that AI might succeed and bring about a “Star Trek” utopia in which an abundance of wealth means that higher education will thrive, as people will have the time and resources to learn for the sake of learning. I put the odds of this as even with my various “AI kills us all” scenarios. Now, on to the pessimistic arguments.

One pessimistic argument is that AI will either be a bursting bubble or, less extreme, fail to live up to the hype. If the AI bubble bursts, it will hurt higher education because of the economic damage, but the academies will survive yet another bubble. If AI fails to live up to the hype, it will continue as it is, doing some damage to higher education but failing to destroy it.

My two remaining arguments are very pessimistic. The first is that AI will not destroy higher education because state and federal government will kill it first. What began with cruel negligence has evolved into outright hostility that seems likely to only worsen. As such, the state might kill the academy before AI can do the job.

The second is, obviously enough, that AI might destroy everything else. But higher education might persist, embodied in AIs educating new models, with Artificial Education being the new higher education.

Over a decade ago, there was buzz about the internet of things, smart devices and connected devices. These devices ranged from toothbrushes to underwear to cars. Now, smart devices are common, although overshadowed by AI, which is being jammed into them to make them smarter. Or so we are promised. As might be imagined, one might wonder whether anyone needs an internet-connected toothbrush. There are also concerns about such devices that were valid in the past and are still valid today.

One obvious point of concern is that a device connected to the internet can be hacked. Prank hacking could be hilarious; for example, a wit might hack a friend’s fridge to say “I am sorry, Dave. No pie for you” in HAL’s voice. Of greater concern is malicious hacking. For example, a smart fridge might be turned off, spoiling the food. As another example, it might be possible to burn out the motors in a washing machine—analogous to what happened in the case of the Iranian centrifuges. Or a dryer might be hacked to burn down a house. As a final example, consider the damage that could be done by hacking a connected car, such as turning it off while it is roaring down the highway or disabling its brakes. Fortunately, the usual unfortunate results of hacking such devices are not these sorts of physical harms. Instead, the usual outcomes of hacks are the creation of botnets for DDoS attacks, spying (or peeping), and ransom attacks. Such devices also create vulnerabilities that might allow access to whatever else is on the network, such as your PC.

Because of these risks, manufacturers should make such devices more secure and ensure that they are safe even when hacked. But we generally cannot count on corporations, so we need to take steps to protect ourselves. The easiest way to stay safer is to stick with dumb, unconnected devices—no one can hack my 1997 washing machine or my 2001 Toyota Tacoma. I also do not have to pay a subscription fee to get all the features of that washing machine and classic Tacoma. But, of course, sticking with dumb products means missing the alleged benefits of the connected lifestyle. I cannot, for example, turn on my washer from work—I must walk over to the machine and turn it on. Like an animal. As another example, my old fridge cannot send me a text telling me to buy more pie. I must remember when I am out of pie. Like an animal.

Another point of concern is that connected devices can serve as spies, sending data to companies, governments, and individuals. For example, a suitably smart connected fridge could provide data about its contents, thus reporting its users’ purchasing and consumption behavior. As another example, connected cars can provide behavioral and location data. It goes without saying that governments will want access to these devices. It also goes without saying that corporations are slurping up as much data as they can from the devices they sell us. Individuals, such as stalkers and thieves, will also be keen to get the data from such devices. These concerns are obviously not new ones, but the more we are connected, the more our privacy will be violated.

One practical concern is that such devices are more complicated than the devices they replace, which usually makes them less reliable, more expensive, and on a faster path to obsolescence. As noted above, these devices also provide opportunities for subscription services: features that are physically present (such as seat warmers in a car or full engine performance) can be locked behind a software paywall. While my washer is not smart, it is very reliable: I have had it repaired once since 1997. In contrast, I have had to constantly replace my smart devices (like my PCs and tablets) to keep up with changes. For example, my iPads, Macs, PCs, and iPhones keep becoming obsolete. Just imagine if your fridge, washer, dryer, and car became obsolete and effectively unusable because the company that made them stopped supporting them. While this would be great for those who want to sell us a new fridge every two or three years or charge a subscription for doing laundry, it would not be great for us.

While I do like technology and can see the value in smart, connected devices, I still have these concerns about them. As such, I am hanging onto my dumb devices as long as I can, and I have learned how to repair most of them (much new tech is built so it cannot be repaired). It has become increasingly challenging to find dumb devices; for example, try to find a TV that is not a smart TV. But I have hopes for a retro movement that brings back dumb tech.

In my previous essays on sexbots I focused on versions that are mere objects. If a sexbot is merely an object, then the morality of having sex with it is the same as having sex with any other object (such as a vibrator or sex doll). As such, a human could do anything to such a sexbot without wronging it, because such sexbots lack the moral status needed to be wronged. The sexbots of the near future will, barring any sudden and unexpected breakthroughs in AI, still be objects. However, science fiction includes intelligent, human-like robots (androids), and intelligent beings, even artificial ones, would seem likely to be people. In terms of sorting out when a robot should be treated as a person, one test is the Cartesian test. Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind is the ability to use true language. Alan Turing explicitly applied this notion to machines in his famous Turing test: if a person cannot distinguish between a human and a computer by engaging in a natural-language conversation via text, then the computer passes the test.

Crudely put, the idea is that if something talks, then it is reasonable to regard it as a person. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

While Descartes does not deeply explore the moral distinction between beings that talk (which have minds, on his view) and those that merely make noises, it seems reasonable to take a being that talks as a person and grant it the appropriate moral status. This provides a means to judge whether an advanced sexbot is a person: if the sexbot talks, it is a person. If it is a mere automaton of the sort Descartes envisioned, then it is a thing and lacks moral status.

Having sex with a sexbot that can pass the Cartesian test would seem morally equivalent to having sex with a human person. As such, whether the sexbot freely consented would be morally important. If intelligent robots were constructed as sex toys, this would be the moral equivalent of enslaving humans for the sex trade (which is done). If such sexbots were mistreated, this would be morally on par with mistreating a human person.

It might be argued that an intelligent robot would not be morally on par with a human since it would still be a thing. However, aside from the fact that the robot would be a manufactured being and a human is (at least for now) a natural being, there would seem to be no relevant difference between them. The intelligence of the robot would seem to be what is important, not its physical composition. That is, it is not whether one is made of silicon or carbon that matters.

It might be argued that passing the Cartesian/Turing test would not prove that a robot is self-aware, so it would still be reasonable to hold that it is not a person: it would merely seem to be a person while only acting like one. While this is worth considering, the same sort of argument can be made about humans. Humans (sometimes) behave in an intelligent manner, but there is no way to determine whether another human is actually self-aware. This is the problem of other minds: I can see your behavior but must infer that you are self-aware based on an analogy to myself. Hence, I do not know that you are aware, since I am not you. And, unlike Bill Clinton, I cannot feel your pain. From your perspective, the same is true about me: unless you are Bill Clinton, you cannot feel my pain. As such, if a robot acted in an intelligent manner, it would have to be classified as a person on these grounds. To fail to do so would be a mere prejudice in favor of the organic over the electronic.

In reply, it might be pointed out that some people believe other people should be used as objects. Those who would use a human as a thing would see nothing wrong with using an intelligent robot as a mere thing.

The obvious response is to reverse the situation: no sane person would wish to be treated as a mere thing, and hence they cannot consistently accept using other people in that manner. The other obvious reply is that such people are evil.

Those with religious inclinations would probably bring up the matter of the soul. But the easy reply is that we will have as much evidence that robots have souls as we now have for humans having souls. This is to say, no evidence at all.

One of the ironies of sexbots (or companionbots) is that the ideal is to make the product as human as possible. As such, to the degree that the ideal is reached, the “product” would be immoral to sell or own. This is a general problem for artificial intelligence: such machines are intended to be owned by people and to do tasks that are usually onerous, but to the degree they are intelligent, they would be slaves. And enslavement is wrong.

It could be countered that it is better that evil humans abuse sexbots rather than other humans. However, it is not clear that would be a lesser evil—it would just be an evil against a synthetic person rather than an organic person.

As a rule, any technology that can be used for sex will be used for sex. Even if it shouldn’t. In accord with this rule, researchers and engineers have been improving sexbot technology. By science-fiction standards, current sexbots are crude and are probably best described as sex dolls rather than sexbots. But it is wise to keep ethics ahead of the technology, and a utilitarian approach to this matter is appealing.

On the face of it, sexbots could be seen as nothing new: they are a modest upgrade of the sex dolls that have been around for quite some time. Sexbots are, of course, more sophisticated than the infamous blow-up sex dolls, but the idea is the same: the sexbot is an object that a person has sex with.

That said, one thing that makes sexbots morally interesting is that they are often designed to mimic humans not just in physical form (which is what sex dolls do) but also in mind. For example, the main feature of the 2010 Roxxxy sexbot is its personality (or, more accurately, personalities). As a fictional example, the sexbots in Almost Human do not merely provide sex; they also provide human-like companionship. However, such person-like sexbots are still science fiction, so human-mimicking sexbots can be seen as something potentially new under the ethical sun.

An obvious moral concern is that human-mimicking sexbots could have negative consequences for humans, be they men or women. Not surprisingly, many of these concerns are analogous to existing moral concerns about pornography.

Pornography, so the stock arguments go, can have strong negative consequences. One is that it teaches men to see women as mere sexual objects. This can, it is claimed, influence men to treat women poorly and can affect how women see themselves. Another point of concern is the addictive nature of pornography, as people can become obsessed with it to their detriment.

Human-mimicking sexbots would seem to have the potential to be more harmful than pornography. After all, while watching pornography allows a person to see other people treated as mere sexual objects, a sexbot would allow a person to use a human-mimicking object sexually. This might have a stronger conditioning effect on the person using the object, perhaps habituating them to see people as mere sexual objects and increasing the chances they will mistreat people. If so, selling or using a sexbot would be morally wrong.

People might become obsessed with their sexbots, as some do with pornography. Then again, people might simply “conduct their business” with their sexbots and get on with life. If so, sexbots might be an improvement over pornography.  After all, while a guy could spend hours watching pornography, he would presumably not last very long with his sexbot.

Another concern raised about some types of pornography is that they encourage harmful sexual views and behavior. For example, violent pornography is believed to influence people to become more inclined to violence. As another example, child pornography is supposed to have an especially pernicious influence. Naturally, there is the concern about causation here: do people seek such porn because they are already that sort of person or does the porn influence them to become that sort of person? I will not endeavor to answer this here.

Since sexbots are objects, a person can do whatever they wish to their sexbot: hit it, burn it, “torture” it, and so on. Presumably there will also be specialty markets catering to unusual interests, such as those of pedophiles and necrophiliacs. If pornography that caters to these “tastes” can be harmful, then presumably being actively involved in such activities with a human-mimicking sexbot would be even more harmful. The person might be, in effect, practicing for the real thing. So, it would seem that selling or using sexbots, especially those designed for harmful “interests,” would be immoral.

Not surprisingly, these arguments are analogous to those used against violent video games, which are supposed to influence people so that they are more likely to engage in violence. So, just as some have proposed restrictions on virtual violence, perhaps there should be strict restrictions on sexbots.

When it comes to video games, one plausible counter is that while violent video games might have negative impact on some people, they allow most people to harmlessly enjoy virtual violence. This seems analogous to sports and non-video games: they allow people to engage in conflict and competition in safer and less destructive ways. For example, a person can indulge her love of conflict and conquest by playing Risk or Starcraft II after she works out her desire for violence by sparring a few rounds in the ring.

Turning back to sexbots, while they might influence some people badly, they might also provide a means by which people could indulge in desires that would be wrong, harmful, and destructive to indulge with another person. So, for example, a person who likes to engage in sexual torture could satisfy her desires on a human-mimicking sexbot rather than an actual human. The critical issue here is whether indulging in such virtual vice with a sexbot would be a harmless dissipation of these desires or would instead fuel them, making a person more likely to inflict them on people. If sexbots allowed people who would otherwise harm other people to vent their “needs” harmlessly on machines, then that would seem good for society. However, if using sexbots would simply push them towards doing such things for real, with unwilling victims, then that would be bad. This, then, is a key part of addressing the ethical concerns about sexbots and something that should be duly considered before mass production begins.