The philosophical problem of other minds is an epistemic challenge: while I know I have a mind, how do I know whether other beings have minds as well? The practical problem of knowing whether another person’s words match what they are thinking also falls under this challenge. For example, if someone says they love you, how do you know whether they feel that professed love?

Descartes, in his discussion of whether animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language.

His idea is that if something talks, then it is reasonable to see it as a thinking being. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

 

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

 

This Cartesian approach was explicitly applied to machines by Alan Turing in his Turing test. The idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test.

Not surprisingly, technological advances have resulted in computers that can engage in behavior that appears to involve using language in ways that might pass the test. IBM’s Watson won at Jeopardy in 2011 and then upped its game by engaging in debate about violence and video games. Since Watson, billions have been poured into AI, and some claim that AI models can pass the Turing test.

Long ago, in response to Watson, I jokingly suggested a new test to Patrick Lin: the trolling test. In this context, a troll is someone “who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a forum, chat room, or blog) with the deliberate intent of provoking readers into an emotional response or of otherwise disrupting normal on-topic discussion.”

While trolls are apparently awful people (a hateful blend of Machiavellianism, narcissism, sadism and psychopathy) and trolling is certainly undesirable behavior, the trolling test does seem worth considering.

In the abstract, the test would work like the Turing test but would involve a human troll and a computer attempting to troll. The challenge would be for the computer troll to successfully pass as a human troll.

Obviously enough, a computer can easily be programmed to post random provocative comments from a database. However, the real meat (or silicon) of the challenge comes from the computer being able to engage in (ironically) relevant trolling. That is, the computer would need to engage the other commentators in true trolling.

As a controlled test, the trolling computer (“mechatroll”) would “read” and analyze a selected blog post. The post would then be commented on by human participants—some engaging in normal discussion and some engaging in trolling. The mechatroll would then endeavor to troll the human participants (and, for bonus points, to troll the trolls) by analyzing the comments and creating appropriately trollish comments.

Another option is to have an actual live field test. A specific blog site would be selected that is frequented by human trolls and non-trolls. The mechatroll would then endeavor to engage in trolling on that site by analyzing the posts and comments.

In either test scenario, if the mechatroll were able to troll in a way indistinguishable from the human trolls, then it would pass the trolling test.
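To make the pass/fail criterion concrete, here is a minimal sketch in Python of how the judgment might be scored (the function names and the small margin above chance are my invented assumptions, not a standard protocol): mix mechatroll and human-troll comments from the same thread, have a judge guess which are machine-made, and count the mechatroll as passing if the judge cannot beat chance.

```python
import random

def trolling_test(comments, judge, margin=0.05):
    """Score a hypothetical trolling test.

    comments: list of (text, is_machine) pairs mixing human-troll and
    mechatroll comments from the same discussion thread.
    judge: a function taking a comment's text and returning True if the
    judge believes a machine wrote it.
    """
    random.shuffle(comments)  # the judge must not see the comments' origins
    correct = sum(judge(text) == is_machine for text, is_machine in comments)
    accuracy = correct / len(comments)
    # The mechatroll "passes" when the judge does no better than coin-flipping,
    # i.e. its trolling is indistinguishable from the human trolls'.
    return accuracy <= 0.5 + margin

# Example: a naive judge who assumes all-caps rage is machine-made.
sample = [("U R ALL SHEEP!!!", True), ("Nice post. Wrong, but nice.", False)]
print(trolling_test(sample, lambda text: text.isupper()))
```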

While “stupid mechatrolling”, such as just posting random hateful and irrelevant comments, is easy, true mechatrolling would be difficult. After all, the mechatroll would need to analyze the original posts and comments to determine the subjects and the direction of the discussion. The mechatroll would then need to make comments that are trollishly relevant, which would require generating comments indistinguishable from those of a narcissistic, Machiavellian, psychopathic, and sadistic human.

Years ago, I thought that creating a mechatroll might be an interesting project because modeling such behavior could provide useful insights into human trolls and the traits that make them trolls. As for a practical application, such a system could have been developed into a troll filter to help control the troll population. I’m confident that current LLMs could engage in trolling with the proper prompts, although they would lack the true soul of the troll.

 

A Philosopher’s Blog is Now on Substack!

You can subscribe and read for free.

https://aphilosophersblog.substack.com/

Before the Trump regime, the United States military expressed interest in developing robots capable of moral reasoning and provided grant money to support such research. Other nations are no doubt also interested.

The notion of instilling robots with ethics is a common theme in science fiction, the most famous example being Asimov’s Three Laws. The classic Forbidden Planet provides an early movie example of robotic ethics: Robby the Robot has an electro-mechanical seizure if he is ordered to harm a human being (or an id-monster created by the mind of his creator, Dr. Morbius). In contrast, the killer machines of science fiction (like Saberhagen’s Berserkers) tend to be free of moral constraints.

While there are various reasons to imbue robots with ethics (or at least pretend to do so), one is public relations. Thanks to science fiction dating at least back to Frankenstein, people worry about our creations getting out of control. As such, a promise that our killbots will be governed by ethics might reassure the public. Another reason is to make the public relations gimmick a reality—to place behavioral restraints on killbots so they will conform to the rules of war (and human morality). Presumably the military will also address the science fiction theme of the ethical killbot who refuses to kill on moral grounds. But considering the ethics of war endorsed by the Trump regime, they are probably not interested in ethical war machines.

While science fiction features ethical robots, the authors (like philosophers) are vague about how robot ethics works. In the case of intelligent robots, their ethics might work the way ours does—which is a mystery debated by philosophers and scientists to this day. While AI has improved thanks to massive processing power, it does not have human-like ethical capacity, so the current practical challenge is to develop ethics for the autonomous or semi-autonomous robots we can build now.

While creating ethics for robots might seem daunting, the limitations of current robot technology mean that robot ethics is a matter of programming these machines to operate in specific ways defined by whatever ethical system is used. One way to look at programming such robots with ethics is that they are being programmed with safety features. To use a simple example, suppose that I see shooting unarmed people as immoral. To make my killbot operate according to that ethical view, it would be programmed to recognize armed humans and have some code saying, in effect, “if unarmedhuman = true, then firetokill = false” or, in plain English: if the human is unarmed, do not shoot them. Getting the robot to reliably recognize weapons would be a programming feat, likely with people dying in the process.
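Expanding that one-line rule into a hedged sketch in Python (the names are invented, and the hard weapon-recognition step is simply assumed), the “safety feature” character of such programming is plain: the machine follows a branch; it does not make a choice.

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_human: bool
    is_armed: bool  # would be set by a (hypothetical) weapon-recognition system

def may_fire(target: Target) -> bool:
    """The essay's example rule: if the human is unarmed, do not shoot."""
    if target.is_human and not target.is_armed:
        return False  # "if unarmedhuman = true, then firetokill = false"
    return True

print(may_fire(Target(is_human=True, is_armed=False)))  # False: hold fire
```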

While a suitably programmed robot would act in a way that seemed ethical, the robot would not be engaged in ethical behavior. After all, it is merely a more complex version of an automatic door. A supermarket door, though it opens for you, is not polite. The shredder that catches your tie and chokes you is not evil. Likewise, the killbot that does not shoot you because its cameras show you are unarmed is not ethical, and the killbot that chops you into chunks is not unethical. Following Kant, since the killbot’s programming is imposed and the killbot lacks the freedom to choose, it is not engaged in ethical (or unethical) behavior, though the complexity of its behavior might make it seem so.

To be fair to killbots, perhaps humans are not ethical or unethical under these requirements—we could just be meat-bots operating under the illusion of ethics. Also, it is sensible to focus on the practical aspect of the matter: if you are targeted by a killbot, your concern is not whether it is an autonomous moral agent or merely a machine—your main worry is whether it will kill you. As such, the general practical problem is getting our killbots to behave in accord with our ethical values. Or, in the case of the Trump regime, a lack of ethics.

Achieving this goal involves three steps. The first is determining which ethical values we wish to impose on our killbots. Since this is a practical matter and not an exercise in philosophical inquiry, this will involve using the accepted ethics (and laws) governing warfare rather than trying to determine what is truly good (if anything). The second step is translating the ethics into behavioral terms. For example, the moral principle that makes killing civilians wrong would be translated into behavioral sets of allowed and forbidden behavior relative to civilians. This would require creating a definition of civilian that would allow recognition using the sensors of the robot. As another example, the moral principle that surrender should be accepted would require defining surrender behavior in a way the robot could recognize. The third step would be coding that behavior in whatever programming language is used for the robot in question. For example, the robot would need to be programmed to engage in surrender-accepting behavior. Naturally, the programmers or those typing the prompts into an AI program would need to worry about clever combatants trying to “deceive” the killbot to take advantage of its programming (like pretending to surrender to get close enough to destroy the killbot).
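As a minimal sketch of steps two and three in Python (the predicates are invented stand-ins for genuinely hard sensor-processing problems), consider the surrender example: the moral rule is first translated into behavioral criteria a robot could check, then coded as allowed and forbidden behavior.

```python
def looks_like_civilian(entity: dict) -> bool:
    # Step two: "civilian" translated into (hypothetical) sensor-recognizable terms.
    return not entity.get("uniformed", False) and not entity.get("armed", False)

def shows_surrender_behavior(entity: dict) -> bool:
    # Step two again: "surrender" defined behaviorally, e.g. hands raised and no
    # weapon. Combatants could fake exactly these cues to "deceive" the robot.
    return entity.get("hands_raised", False) and not entity.get("armed", False)

def engagement_allowed(entity: dict) -> bool:
    # Step three: the behavioral definitions coded as forbidden/allowed behavior.
    if looks_like_civilian(entity) or shows_surrender_behavior(entity):
        return False
    return True

# A uniformed combatant who has dropped his weapon and raised his hands:
print(engagement_allowed({"uniformed": True, "armed": False, "hands_raised": True}))  # False: accept surrender
```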

Since these robots would be following programmed rules, they would seem to be controlled by deontological ethics—that is, ethics based on following rules. Thus, they would be (with due apologies to Asimov) the Robots of Deon.

A practical question is whether the “ethical” programming would allow for overrides or reprogramming. Since the robot’s “ethics” would just be behavior-governing code, it could be changed, and it is easy to imagine ethics settings in which a commander could selectively (or not so selectively) turn off behavioral limitations, as the sketch below illustrates. And, of course, killbots could simply be programmed without such ethics (or programmed to be “evil”).
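Continuing the earlier sketch (same invented names), the override worry is easy to make concrete: if the “ethics” is just code, disabling it is one flag.

```python
ETHICS_ENABLED = True  # hypothetical configuration flag a commander could flip

def engagement_allowed_with_override(entity: dict) -> bool:
    if not ETHICS_ENABLED:
        return True  # the "ethics" is just code; disabling it is one line
    return engagement_allowed(entity)  # the rule set from the sketch above
```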

One impact of this research will be that some people will get to live the science-fiction dream of teaching robots to be good. That way the robots might feel a little bad when they kill us all.

 

 


When a new technology emerges, it is often claimed that it is outpacing ethics and law. Because of the nature of law in the United States, it is easy for technology to outpace it, especially given the average age of members of Congress. However, it is difficult for technology to outpace ethics.

One reason is that any minimally adequate ethical theory will have the quality of expandability. That is, the theory can be applied to what is new, be that technology, circumstances, or something else. An ethical theory that lacked expandability would quickly become useless and would not be much of a theory.

It is, however, worth considering that a new technology could “break” an ethical theory in that the theory could not expand to cover the technology. However, this would seem to show that the theory was inadequate rather than showing the technology outpaced ethics.

Another reason technology would have a hard time outpacing ethics is that an ethical argument by analogy can (probably) be applied to new technology. That is, if the technology is like something that exists and has been discussed in ethics, this ethical discussion can be applied to the new technology. This is analogous to using ethical analogies to apply ethics to different specific situations, such as an act of cheating in a relationship.

Naturally, if a new technology were absolutely unlike anything else in human experience (even fiction), then the method of analogy would fail absolutely. However, it seems unlikely that such a technology could emerge. But I like science fiction (and fantasy) and am willing to entertain the possibility of an absolutely new technology. While it would seem that existing ethics could handle such a technology, perhaps something absolutely new would break all existing ethical theories, showing they are all inadequate.

While a single example does not provide much in the way of proof, it can be used to illustrate. As such, I will use the matter of personal drones to illustrate how ethics is not outpaced by technology.

While remote-controlled and automated devices have been around a long time, the expansion of technology created something new for ethics: drones, driverless cars, AI, Facebook, and so on. However, drone ethics is easy. By this I do not mean that ethics is easy; it is just that applying ethics to new technology (such as drones) is not as hard as some might claim. Naturally, doing ethics is hard—but this applies to very old problems (the ethics of war) and very “new” problems (the ethics of killer robots in war).

Getting back to the example, a personal drone tends to be much smaller, lower priced, and easier to use than a government-operated drone. In many ways, these drones are slightly advanced versions of the remote-control planes long regarded as expensive toys. The drones of this sort that most concern people are those that have cameras and can hover—perhaps outside a bedroom window.

Two areas of concern are safety and privacy. In terms of safety, the worry is that drones can collide with people (or vehicles, such as manned aircraft) and injure them. Ethically, this falls under doing harm to people, be it with a knife, gun, or drone. While a drone flies about, the ethics that have been used to handle flying model aircraft, cars, etc. can be applied here. So, this aspect of drones did not outpace ethics.

Privacy can also be handled. Simplifying things for the sake of a brief discussion, a drone allows a person to (potentially) violate privacy in the usual two “visual” modes. One is to intrude into private property to violate a person’s privacy. In the case of the “old” way, a person can put a ladder against a person’s house and climb up to peek through a window. In the “new” way, a person can fly a drone up to the window and peek in using a camera. While the person is not physically present in the case of the drone, their “agent” is present and is trespassing. Whether a person is using a ladder or a drone to gain access to the window does not change the ethics of the situation.

A second way is to peek into private space from public space. In the case of the old way, a person could, for example, stand on the public sidewalk and look into other people’s windows or yards. In the “new” way, a person can deploy his agent (the drone) in public space to do the same sort of thing.

One potential difference between the two situations is that a drone can fly and thus can get viewing angles that a person on the ground (or even on a ladder) could not. For example, a drone might be in the airspace far above a person’s backyard, sending images of someone sunbathing in the nude behind her very tall fence on her very large estate. However, this is not a new situation—paparazzi have used helicopters to get shots of celebrities, and the ethics are the same. As such, ethics has not been outpaced by drones in this regard. This is not to say that the matter is solved (people are still debating the ethics of this sort of “spying”), but to say that it is not a case where technology has outpaced ethics.

What is mainly different about drones is that they are now affordable and easy to use—so whereas only certain people could afford to hire a helicopter to get photos of celebrities, camera-equipped drones are now easily within reach of hobbyists. So, it is not that the low-priced drone provides new capabilities; it is that it puts these capabilities in the hands of the many.

 

 


Science fiction can sometimes predict the future and perhaps its intelligent machines will be real someday.  Since I have been rewriting some essays about sexbots lately, I will use them to focus the discussion. However, the discussion that follows also applies to other types of artificial intelligences.

Sexbots are intended to provide sex, and sex without consent is, by definition, rape. However, there is the question of whether a sexbot can be raped. Sorting this out requires a philosophy of consent. When it is claimed that sex without consent is rape, it is usually assumed that the victim of non-consensual sex could have provided consent but did not. An example of this would be sexual assault against an unconscious person. But there are also cases in which a being cannot consent. This might be a matter of age, or because the being is incapable of any form of consent. For example, a brain-dead human cannot give any type of consent but can be raped.

In other cases, a being that cannot give consent cannot be raped. As an obvious example, a human can have sex with a sex doll and it cannot consent. But the doll is not being raped. After all, it lacks a status that would require consent. As such, rape (of this sort) could be defined in terms of non-consensual sex with a being whose status would require that consent be granted by the being for the sex to be morally acceptable. In some cases, while consent would be required, it cannot be granted. The question, then, is whether a sexbot could have a status that would require consent.

As current sexbots are little more than advanced sex dolls, they are mere objects. As such, a person can own and have sex with this sort of sexbot without it being rape or slavery. However, as sexbots become more advanced, they might gain a moral status that would require that they provide consent. This leads to concerns about such machines being programmed to “consent”, which would not seem to be consent. But there is the question of how consent would work with a machine—what intentional states would it need to have to understand what it is consenting to and to engage in consent.

Over a decade ago, there was buzz about the internet of things, smart devices, and connected devices. These devices ranged from toothbrushes to underwear to cars. Smart devices are now common, although overshadowed by AI, which is being jammed into them to make them smarter. Or so we are promised. As might be imagined, one might wonder whether anyone needs an internet-connected toothbrush. There are also concerns about such devices that were valid in the past and remain valid today.

One obvious point of concern is that a device connected to the internet can be hacked. Prank hacking could be hilarious; for example, a wit might hack a friend’s fridge to say “I am sorry Dave. No pie for you” in HAL’s voice. Of greater concern is malicious hacking. For example, a smart fridge might be turned off, spoiling the food. As another example, it might be possible to burn out the motors in a washing machine—analogous to what happened to the Iranian centrifuges. Or a dryer might be hacked to burn down a house. As a final example, consider the damage that could be done by hacking a connected car, such as turning it off while it is roaring down the highway or disabling its brakes. Fortunately, the usual unfortunate results of hacking such devices are not these sorts of physical harms. Instead, the usual outcomes are the creation of botnets for DDoS attacks, spying (or peeping), and ransom attacks. Such devices also create vulnerabilities that might allow access to whatever else is on the network, such as your PC.

Because of these risks, manufacturers should ensure that the devices are safe even when hacked and make them more secure. But we generally cannot count on corporations and need to take steps to protect ourselves. The easiest way to stay safe is to stick with dumb, unconnected devices—no one can hack my 1997 washing machine or my 2001 Toyota Tacoma. I also do not have to pay a subscription fee to get all the features of that washing machine and classic Tacoma. But, of course, sticking with dumb products means that one misses the alleged benefits of the connected lifestyle. I cannot, for example, turn on my washer from work—I must walk over to the machine and turn it on. Like an animal. As another example, my old fridge cannot send me a text telling me to buy more pie. I must remember when I am out of pie. Like an animal.

Another point of concern is that connected devices can serve as spies—they can send data to companies, governments and individuals. For example, a suitably smart connected fridge could provide data about its contents, thus reporting the users’ purchasing and consumption behavior. As another example, connected cars can provide behavioral and location data. It goes without saying that the government will want access to these devices. It also goes without saying that corporations are slurping up as much data as they can from the devices they sell us. Individuals, such as stalkers and thieves, will also be keen to get the data from such devices. These concerns are, obviously, not new ones—but the more we are connected, the more our privacy will be violated.

One practical concern is that such devices will be more complicated than the devices they replace, usually making them less reliable, more expensive and on a more rapid path to obsolescence. As noted above, these devices also provide opportunities for subscription services and features that are physically present (such as seat warmers in a car or engine performance) but locked behind a software paywall. While my washer is not smart, it is very reliable: I’ve had it repaired once since 1997. In contrast, I’ve had to constantly replace my smart devices (like my PC and tablets) to keep up with changes. For example, my iPads, Macs, PCs and iPhones keep becoming obsolete. Just imagine if your fridge, washer, dryer and car became obsolete and effectively unusable because the company that made them stops supporting them. While this will be great for those who want to sell us a new fridge every 2-3 years or charge a subscription for doing laundry, it won’t be great for us.

While I do like technology and can see the value in smart, connected devices, I still have these concerns about them. As such, I am hanging onto my dumb devices as long as I can—and I have learned how to repair most of them (much new tech is built so it cannot be repaired). It has become increasingly challenging to find dumb devices; for example, try finding a TV that is not a smart TV. But I have hopes for a retro movement that brings back dumb tech.

In my previous essays on sexbots I focused on versions that are mere objects. If a sexbot is merely an object, then the morality of having sex with it is the same as having sex with any other object (such as a vibrator or sex doll). As such, a human could do anything to such a sexbot without the sexbot being wronged. This is because such sexbots lack the moral status needed to be wronged. The sexbots of the near future will, barring any sudden and unexpected breakthroughs in AI, still be objects. However, science fiction includes intelligent, human-like robots (androids). Intelligent beings, even artificial ones, would seem likely to be people. In terms of sorting out when a robot should be treated as a person, one test is the Cartesian test. Descartes, in his discussion of whether animals have minds, argued that the definitive indicator of having a mind is the ability to use true language. This notion was explicitly applied to machines by Alan Turing in his famous Turing test. The idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the test.

Crudely put, the idea is that if something talks, then it is reasonable to regard it as a person. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

 

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

 

While Descartes does not deeply explore the moral distinctions between beings that talk (which have minds on his view) and those that merely make noises, it does seem reasonable to take a being that talks as a person and grant it the appropriate moral status. This provides a means to judge whether an advanced sexbot is a person: if the sexbot talks, it is a person. If it is a mere automaton of the sort Descartes envisioned, then it is a thing and lacks moral status.

Having sex with a sexbot that can pass the Cartesian test would seem morally equivalent to having sex with a human person. As such, whether the sexbot freely consented would be morally important. If intelligent robots were constructed as sex toys, this would be the moral equivalent of enslaving humans for the sex trade (which is done). If such sexbots were mistreated, this would be morally on par with mistreating a human person.

It might be argued that an intelligent robot would not be morally on par with a human since it would still be a thing. However, aside from the fact that the robot would be a manufactured being and a human is (at least for now) a natural being, there would seem to be no relevant difference between them. The intelligence of the robot would seem to be what is important, not its physical composition. That is, it is not whether one is made of silicon or carbon that matters.

It might be argued that passing the Cartesian/Turing test would not prove that a robot is self-aware, so it would still be reasonable to hold that it is not a person: it would seem to be a person but would merely be acting like one. While this is worth considering, the same sort of argument can be made about humans. Humans (sometimes) behave in an intelligent manner, but there is no way to determine whether another human is actually self-aware. This is the problem of other minds: I can see your behavior but must infer that you are self-aware based on an analogy to myself. Hence, I do not know that you are aware since I am not you. And, unlike Bill Clinton, I cannot feel your pain. From your perspective, the same is true about me: unless you are Bill Clinton, you cannot feel my pain. As such, if a robot acted in an intelligent manner, it would have to be classified as a person on these grounds. To fail to do so would be a mere prejudice in favor of the organic over the electronic.

In reply, it might be noted that some people believe other people should be used as objects. Those who would use a human as a thing would see nothing wrong with using an intelligent robot as a mere thing.

The obvious response to this is to reverse the situation: no sane person would wish to be treated as a mere thing, and hence they cannot consistently accept using other people in that manner. The other obvious reply is that such people are evil.

Those with religious inclinations would probably bring up the matter of the soul. But the easy reply is that we will have as much evidence that robots have souls as we now do for humans having souls. This is to say, no evidence at all.

One of the ironies of sexbots (or companionbots) is that the ideal is to make a product that is as human as possible. As such, to the degree that the ideal is reached, the “product” would be immoral to sell or own. This is a general problem for artificial intelligences: they are intended to be owned by people to do usually onerous tasks, but to the degree they are intelligent, they would be slaves. And enslavement is wrong.

It could be countered that it is better that evil humans abuse sexbots rather than other humans. However, it is not clear that would be a lesser evil—it would just be an evil against a synthetic person rather than an organic person.

As a rule, any technology that can be used for sex will be used for sex. Even if it shouldn’t. In accord with this rule, researchers and engineers have been improving sexbot technology. By science-fiction standards, current sexbots are crude and are probably best described as sex dolls rather than sexbots. But it is wise to keep ethics ahead of the technology, and a utilitarian approach to this matter is appealing.

On the face of it, sexbots could be seen as nothing new: they are a modest upgrade of the sex dolls that have been around for quite some time. Sexbots are, of course, more sophisticated than the infamous blow-up sex dolls, but the idea is the same: the sexbot is an object that a person has sex with.

That said, one thing that makes sexbots morally interesting is that they are often designed to mimic humans not just in physical form (which is what sex dolls do) but also in mind. For example, the 2010 Roxxxy sexbot’s main feature was its personality (or, more accurately, personalities). As a fictional example, the sexbots in Almost Human do not merely provide sex—they also provide human-like companionship. However, such person-like sexbots are still science fiction, and so human-mimicking sexbots can be seen as something potentially new under the ethical sun.

An obvious moral concern is that human-mimicking sexbots could have negative consequences for humans, be they men or women. Not surprisingly, many of these concerns are analogous to existing moral concerns about pornography.

Pornography, so the stock arguments go, can have strong negative consequences. One is that it teaches men to see women as mere sexual objects. This can, it is claimed, influence men to treat women poorly and can affect how women see themselves. Another point of concern is the addictive nature of pornography, as people can become obsessed with it to their detriment.

Human-mimicking sexbots would seem to have the potential to be more harmful than pornography. After all, while watching pornography allows a person to see other people treated as mere sexual objects, a sexbot would allow a person to use a human-mimicking object sexually. This might have a stronger conditioning effect on the person using the object, perhaps habituating them to see people as mere sexual objects and increasing the chances they will mistreat people. If so, selling or using a sexbot would be morally wrong.

People might become obsessed with their sexbots, as some do with pornography. Then again, people might simply “conduct their business” with their sexbots and get on with life. If so, sexbots might be an improvement over pornography.  After all, while a guy could spend hours watching pornography, he would presumably not last very long with his sexbot.

Another concern raised about some types of pornography is that they encourage harmful sexual views and behavior. For example, violent pornography is believed to influence people to become more inclined to violence. As another example, child pornography is supposed to have an especially pernicious influence. Naturally, there is the concern about causation here: do people seek such porn because they are already that sort of person or does the porn influence them to become that sort of person? I will not endeavor to answer this here.

Since sexbots are objects, a person can do whatever they wish to their sexbot—hit it, burn it, and “torture” it and so on. Presumably there will also be specialty markets catering to unusual interests, such as those of pedophiles and necrophiliacs. If pornography that caters to these “tastes” can be harmful, then presumably being actively involved in such activities with a human-mimicking sexbot would be even more harmful. The person might be, in effect, practicing for the real thing. So, it would seem that selling or using sexbots, especially those designed for harmful “interests” would be immoral.

Not surprisingly, these arguments are also like those used against violent video games. Violent video games are supposed to influence people so that they are more likely to engage in violence. So, just as some have proposed restrictions on virtual violence, perhaps there should be strict restrictions on sexbots.

When it comes to video games, one plausible counter is that while violent video games might have negative impact on some people, they allow most people to harmlessly enjoy virtual violence. This seems analogous to sports and non-video games: they allow people to engage in conflict and competition in safer and less destructive ways. For example, a person can indulge her love of conflict and conquest by playing Risk or Starcraft II after she works out her desire for violence by sparring a few rounds in the ring.

Turning back to sexbots, while they might influence some people badly, they might also provide a means by which people could indulge in desires that would be wrong, harmful and destructive to indulge with another person. So, for example, a person who likes to engage in sexual torture could satisfy her desires on a human-mimicking sexbot rather than an actual human. The critical issue here is whether indulging in such virtual vice with a sexbot would be a harmless dissipation of these desires or fuel them and make a person more likely to inflict them on people. If sexbots did allow people who would otherwise harm other people to vent their “needs” harmlessly on machines, then that would seem good for society. However, if using sexbots would simply push them towards doing such things for real and with unwilling victims, then that would be bad. This, then, is a key part of addressing the ethical concerns about sexbots and something that should be duly considered before mass production begins.

Many years ago, an episode of the sci-fi buddy-cop show Almost Human about sexbots inspired me to revisit the ethics of sexbots. While the advanced, human-like models of the show are still fictional, the technological foundations needed for sexbots do exist, as companies are manufacturing humanoid robots. As such, it seems well worth considering, once again, the ethical issues involving sexbots real and fictional.

At this time, sexbots are mere objects—while usually made to look like humans, they do not have the qualities that would make them even person-like. As such, ethical concerns about these sexbots do not involve concerns about wrongs done to the objects—presumably they cannot be wronged. But by using Kant’s discussion of ethics and animals, it is possible to build a moral view of even basic sexbots that are indisputably objects.

In his ethical theory Kant is clear that animals are means rather than ends and are mere objects. Rational beings, in contrast, are ends. For Kant, this distinction rests on the fact that rational beings can (as he sees it) choose to follow the moral law. Animals, lacking reason, cannot do this. Since animals are means and not ends, Kant claims we have no direct duties to animals. Despite being living beings, they are also just among the “objects of our inclinations” that derive value from the value we give them. Sexbots would, obviously, qualify as paradigm “objects of our inclinations.”

While it might seem odd, Kant argues that we should treat animals well. However, he does so while also trying to avoid giving animals any moral status of their own. Here is how he does it (or tries to do it).

While Kant is not willing to accept that we have direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing something would obligate us to that human, then an animal doing a similar thing would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in its old age.

While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (because, according to Kant, the dog is not rational), so, as Kant sees it, the dog cannot be wronged. Why, then, would it be wrong to shoot an old dog that has become a burden?

Kant’s answer seems consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will probably be damaged. Since, as Kant sees it, humans have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.

Kant discusses how people develop cruelty—they often begin with animals and then work up to harming human beings. As I point out to my students, Kant seems to have anticipated the psychological devolution of serial killers.

Kant goes beyond merely enjoining us to not be cruel to animals and encourages us to be kind to them. He even praises Leibniz for being gentle in his handling of a worm he found. Of course, he encourages this because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are practice for us: how we treat them habituates us in how we treat human beings.

Current sexbots obviously lack any meaningful moral status of their own. They do not feel or think—they are mere machines that might be made to look like humans. As such, they lack all qualities that might give them a moral status of their own.

Oddly enough, sexbots could be taken as being comparable to animals, at least as Kant sees them. After all, for him animals are mere objects and have no moral status of their own. Likewise for sexbots. Of course, the same is also true of sticks and stones. Yet Kant would never argue that we should treat stones well, and not just because he is very dead. This might also apply to sexbots. That is, perhaps it makes no sense to talk about good or bad relative to such objects. Thus, a key issue is whether sexbots are more like animals or more like stones—at least in terms of the matter at hand.

If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how such treatment affects the behavior of the person involved. So, for example, if being cruel to a real dog could damage a person’s humanity, then he should (as Kant sees it) not be cruel to the dog.  This should also extend to sexbots. For example, if engaging in certain activities with a sexbot would damage a person’s humanity, then he should not act in that way. If engaging in certain behavior with a sexbot would make a person more inclined to be kind to other rational beings, then the person should engage in that behavior. It is also worth considering that perhaps people should not engage in any behavior with sexbots—that having sex of any kind with a bot would be damaging to the person’s humanity.

Interestingly enough (or boringly enough), this sort of argument is often employed to argue against people watching pornography. The gist of such arguments is that viewing pornography can condition people (typically men) to behave badly in real life or at least have a negative impact on their character. If pornography can have this effect, then it seems reasonable to be concerned about the potential impact of sexbots on people. After all, pornography casts a person in a passive role viewing other people acting as sexual objects, while a sexbot allows a person to have sex with an actual sexual object.

While asteroid mining is still science fiction, companies are already preparing to mine the sky. While space mining sounds awesome, lawyers are murdering the awesomeness with legalese. Long ago, President Obama signed the U.S. Commercial Space Launch Competitiveness Act, which seemed to make asteroid mining legal. The key part of the law is that “Any asteroid resources obtained in outer space are the property of the entity that obtained them, which shall be entitled to all property rights to them, consistent with applicable federal law and existing international obligations.” More concisely, the law makes it so that asteroid mining by U.S. citizens would not violate U.S. law.

While this would seem to open the legal doors to asteroid mining, there are still legal barriers, although the law is obviously make-believe and requires that people either are willing to follow it or the people with guns are willing to shoot people for not following it. Various space treaties, such as the Outer Space Treaty of 1967, do not give states sovereign rights in space. As such, there is no legal foundation for a state to confer space property rights to its citizens based on its sovereignty. However, the treaties do not seem to forbid private ownership in space—as such, any other nation could pass a similar law that allows its citizens to own property in space without violating the laws of that nation. Obviously enough, satellites are owned by private companies and this could set a precedent for owning asteroids, depending on how clever the lawyers are.

One concern is that if several nations pass such laws and people start mining asteroids, then conflict over valuable space resources will be all but inevitable. In some ways this will be a repeat of the past: the more technologically advanced nations engaged in a struggle to acquire resources in an area where they lack sovereignty. These past conflicts tended to escalate into wars, which is something that must be considered in the final frontier.

One way to try to avoid war over asteroids is new treaties governing space mining. This is, obviously enough, a matter that will be handled by space lawyers, governments, and corporations. Unless, of course, AI kills us all first. Then they can sort out asteroid mining.

While the legal aspects of space ownership are interesting, the moral aspects of ownership are also of concern. While it might be believed that property rights in space are an entirely new problem, this is not the case. Though the setting is different, the matter of space property matches the state-of-nature scenarios envisioned by thinkers like Hobbes and Locke. To be specific, there is an abundance of resources and an absence of authority. As it now stands, while no one can hear you scream in space, there is also no one who can arrest you for space piracy as long as you stay in space.

Using the state of nature model, it can be claimed that there are currently no rightful owners of the asteroids, or it could be claimed that we are all the rightful owners (the asteroids are the common property of all of humanity). 

If there are currently no rightful owners, then the asteroids are there for the taking: an asteroid belongs to whoever can take and hold it. This is on par with Hobbes’ state of nature—practical ownership is a matter of possession. As Hobbes saw it, everyone has the right to all things, but this is effectively a right to nothing—other than what a person can defend from others. As Hobbes noted, in such a scenario profit is the measure of right and who is right is to be settled by the sword.

While this is practical, brutal and realistic, it is a bit problematic in that it would, as Hobbes also noted, lead to war. His solution, which would presumably work as well in space as on earth, would be to have sovereignty in space. This would shift the war of all against all in space (of the sort that is common in science fiction about asteroid mining) to a war of nations in space (which is also common in science fiction). The war could, of course, be a cold one fought economically and technologically rather than a hot one fought with mass drivers and lasers.

If asteroids are regarded as the common property of humanity, then Locke’s approach could be taken. As Locke saw it, God gave everything to humans in common, but people must acquire things from the common property to make use of it. Locke gives a terrestrial example of how a person needs to make an apple her own before she can benefit from it. In the case of space, a person would need to make an asteroid her own to benefit from the materials it contains.

Locke sketched out a basic labor theory of ownership—whatever a person mixes her labor with becomes her property. As such, if asteroid miners located an asteroid and started mining it, then the asteroid would belong to them.  This does have some appeal: before the miners start extracting the minerals from the asteroid, it is just a rock drifting in space. Now it is a productive mine, improved from its natural state by the labor of the miners. If mining is profitable, then the miners would have a clear incentive to grab as many asteroids as they can, which leads to the moral problem of the limits of ownership.

Locke does set limits on what people can take in his proviso: those who take from the common resources must leave as much and as good for others. When describing this to my students, I always use an analogy to a party: since the food is for everyone, everyone has a right to the food. However, taking it all or taking the very best would be wrong (and rude). While this proviso is ignored on earth, the asteroids could provide us with a fresh start in terms of dividing up the common property of humanity. After all, no one has any special right to claim the asteroids—so we all have equal good claims to the resources they contain.
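As a toy model in Python (the quantities, and the reduction of “as good” to mere amount, are my invented simplifications), the proviso can be read as a constraint on any single claim against the commons:

```python
def proviso_allows(claim_kg: float, total_kg: float, others: int,
                   per_person_need_kg: float) -> bool:
    """Toy version of Locke's proviso: a claim on a common resource is
    permissible only if "as much and as good" remains for the others,
    crudely modeled as enough material for every other claimant's needs."""
    return total_kg - claim_kg >= others * per_person_need_kg

# One miner claims 1,000 kg of a 10,000 kg asteroid shared with 8 others,
# each assumed to need 1,000 kg:
print(proviso_allows(1_000, 10_000, others=8, per_person_need_kg=1_000))  # True
```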

As with earth resources, some will contend that there is no obligation to leave as much and as good for others in space. Instead, those who get there first will contend that ownership should rest on the principle that whoever grabs a resource first and can keep it is its “rightful” owner. Unless, of course, someone grabs it from them; then they would presumably see that as a cruel injustice.

Those who take this view would probably argue that those who get their equipment into space would have done the work (or put up the money) and (as argued above) would be entitled to all they can grab and use or sell. Other people are free to grab what they can, if they have access to the resources needed to mine the asteroids. Naturally, the folks who lack the resources to compete will remain poor—their poverty will, in fact, disqualify them from owning any of the space resources much in the way poverty effectively disqualifies people on earth from owning earth resources.

While the selfish approach will be appealing to those who can grab the asteroids, arguments can be made for sharing them. One reason is that those who will mine the asteroids did not create the means to do so from nothing. Reaching the asteroids will be the result of centuries of human civilization that made such technology possible. As such, there would seem to be a general debt owed to human civilization and paying this off would involve also contributing to the general good of humanity. Naturally, this line of reasoning can be countered by arguing that the successful miners will benefit humanity when their profits “trickle down” from space. Sadly, as on earth, gravity does not seem to affect money in terms of trickling it down. It always seems to go upwards.

Another way to argue for sharing the resources is to use an analogy to a buffet line. Suppose I am first in line at a buffet. This does not give me the right to devour everything I can with no regard for the people behind me. It also does not give me the right to grab whatever I cannot eat myself to sell it to those who had the misfortune to be behind me in line. As such, these resources should be treated in a similar manner, namely fairly and with some concern for those who are behind the first people in line.

Naturally, these arguments for sharing can be countered by the usual arguments in favor of selfishness. While it is tempting to think that the vastness of space will overcome selfishness (that is, there will be so much that people will realize that not sharing would be absurd and petty), this seems unlikely—the more there is, the greater the disparity is between those who have and those who have not. On this pessimistic view we already have all the moral and legal tools we need for space—it is just a matter of changing the wording a bit to include “space.”

While the problem of other minds is an epistemic matter (how does one know that another being has a mind?), there is also the metaphysical problem of determining the nature of the mind. It is often assumed that there is one answer to the metaphysical question regarding the nature of mind. However, it is certainly reasonable to keep open the possibility that there might be minds that are metaphysically very different. One area in which this might occur is the difference between machine intelligence (an example of which is Ava in the movie Ex Machina) and organic intelligence. The minds of organic beings might differ metaphysically from those of machines—or they might not.

Over the centuries philosophers have proposed various theories of mind, and it is interesting to consider which of these theories would be compatible with machine intelligence. Not surprisingly, these theories (except for functionalism) were developed to provide accounts of the minds of biological creatures.

One classic theory of mind is identity theory. This is a materialist theory of mind in which the mind is composed of matter. What distinguishes the theory from other materialist accounts of mind is that each mental state is taken as being identical to a specific state of the central nervous system. As such, the mind is equivalent to the central nervous system and its states.

If identity theory is the only correct theory of mind, then machines could not have minds (assuming they are not cyborgs with human nervous systems), because such machines would lack the central nervous system of a human. There could, however, be an identity theory for machine minds—in this case the machine mind would be identical to the processing system of the machine and its states. On the positive side, identity theory provides a straightforward solution to the problem of other minds: whatever has the right sort of nervous system or machinery would have a mind. But there is a negative side. Unfortunately for classic identity theory, it has been undermined by arguments presented by Saul Kripke and by David Lewis’s classic “Mad Pain and Martian Pain.” As such, it seems reasonable to reject identity theory as an account of traditional human minds as well as machine minds.

Perhaps the best-known theory of mind is substance dualism. This view, made famous by Descartes, is that there are two basic types of entities: material entities and immaterial entities. The mind is an immaterial substance that somehow controls the material substance that composes the body. For Descartes, immaterial substance thinks and material substance is unthinking and extended.

While most people are probably not familiar with Cartesian dualism, they are familiar with its popular version—the view that the mind is a non-physical thing (often called a “soul”) that drives around the physical body. While this is a popular view outside academia, it is rejected by most scientists and philosophers on the reasonable grounds that there seems to be little evidence for such a mysterious metaphysical entity. As might be suspected, the idea that a machine mind could be an immaterial entity seems even less plausible than the idea that a human mind could be one.

That said, if it is possible that the human mind is an immaterial substance that is somehow connected to an organic material body, then it seems equally possible that a machine mind could be an immaterial substance somehow connected to a mechanical material body. Alternatively, they could be regarded as equally implausible, and hence there is no special reason to regard a machine ghost in a mechanical shell as more unlikely than a ghost in an organic shell. As such, if human minds can be immaterial substances, then so could machine minds.

In terms of the problem of other minds, there is the serious challenge of determining whether a being has an immaterial substance driving its physical shell. As it stands, there seems to be no way to prove that such a substance is present in the shell. While it might be claimed that intelligent behavior (such as passing the Cartesian or Turing test) would show the presence of a mind, it would hardly show that there is an immaterial substance present. It would first need to be established that the mind must be an immaterial substance and that this is the only means by which a being could pass these tests. It seems rather unlikely that this will be done. The other forms of dualism discussed below also suffer from this problem.

While substance dualism is the best-known form of dualism, there are other types. One other type is known as property dualism. This view does not take the mind and body to be substances. Instead, the mind is supposed to be made up of mental properties that are not identical with physical properties. For example, the property of being happy about getting a puppy could not be reduced to a particular physical property of the nervous system. Thus, the mind and body are distinct but are not different ontological substances.

Coincidentally enough, there are two main types of property dualism: epiphenomenalism and interactionism. Epiphenomenalism is the view that the relation between the mental and physical properties is one way: mental properties are caused by, but do not cause, the physical properties of the body. As such, the mind is a by-product of the physical processes of the body. The analogy I usually use to illustrate this is that of a sparkler (the lamest of fireworks): the body is like the sparkler and the sparks flying off it are like the mental properties. The sparkler causes the sparks, but the sparks do not cause the sparkler.

This view was, apparently, created to address the mind-body problem: how can the non-material mind interact with the material body? While epiphenomenalism cuts the problem in half, it still fails to solve it: one-way causation between the material and the immaterial is fundamentally as mysterious as two-way causation. It also seems to have the defect of making mental properties unnecessary, and Ockham’s razor would seem to require going with the simpler view, a purely physical account of the mind.

As with substance dualism, it might seem odd to imagine an epiphenomenal mind for a machine. However, it seems no more or less weird than accepting such a mind for a human being. As such, this does seem to be a possibility for a machine mind. Not a very good one, but still a possibility.

A second type of property dualism is interactionism. As the name indicates, this is the theory that mental properties can bring about changes in the physical properties of the body and vice versa. That is, interaction is a two-way street. Like all forms of dualism, this runs into the mind-body problem. But, unlike substance dualism, it does not require the much-loathed metaphysical category of substance—it just requires accepting metaphysical properties. Unlike epiphenomenalism, it avoids the problem of positing explicitly useless properties, although it can be argued that the distinct mental properties are not needed. This is exactly what materialists argue.

As with epiphenomenalism, it might seem odd to attribute to a machine a set of non-physical mental properties. But, as with the other forms of dualism, it is really no stranger than attributing the same to organic beings. This is, obviously, not an argument in its favor, merely the assertion that the view should not be dismissed out of mere organic prejudice.

The final theory I will consider is the very popular functionalism. As the name suggests, this view asserts that mental states are defined in functional terms. So, a functional definition of a mental state defines the mental state in regard to its role or function in a mental system of inputs and outputs. More specifically, a mental state, such as feeling pleasure, is defined in terms of the causal relations that it holds to external influences on the body (such as a cat video on YouTube), other mental states, and the behavior of the rest of the body. 

While it need not be a materialist view (ghosts could have functional states), functionalism is most often presented as a materialist view of the mind in which the mental states take place in physical systems. While identity theory and functionalism are both materialist theories, they have a critical difference. For identity theorists, a specific mental state, such as pleasure, is identical to a specific physical state, such as the state of neurons in a very specific part of the brain. So, for two mental states to be the same, the physical states must be identical. Thus, if mental states are specific states in a certain part of the human nervous system, then anything that lacks this same nervous system cannot have a mind. Since it seems quite reasonable that non-human beings could have (or be) minds, this is a rather serious defect for a simple materialist theory like identity theory. Fortunately, the functionalists can handle this problem.

For the functionalist, a specific mental state, such as feeling pleasure (of the sort caused by YouTube videos of cats), is not defined in terms of a specific physical state. Instead, while the physicalist functionalist believes every mental state is a physical state, two mental states being the same requires functional rather than physical identity. As an analogy, consider a PC using an Intel processor and one using an AMD processor. These chips are physically different but are functionally the same in that they can run Windows and Windows software (and Linux, of course).
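A toy sketch of this multiple realizability in Python (the class names and the cat-video stimulus are my invented illustration): two “minds” with different internals count as being in the same mental state because they play the same input-output role.

```python
from typing import Protocol

class Mind(Protocol):
    def react(self, stimulus: str) -> str:
        """The functional role: map inputs (stimuli) to output states."""
        ...

class CarbonMind:
    def react(self, stimulus: str) -> str:
        # "Realized" in neurons; the physical details differ...
        return "pleasure" if stimulus == "cat video" else "boredom"

class SiliconMind:
    def react(self, stimulus: str) -> str:
        # ...yet the input-output role, and so (for the functionalist)
        # the mental state, is the same.
        return "pleasure" if stimulus == "cat video" else "boredom"

# Functionally identical despite different "physical" realizations:
print(CarbonMind().react("cat video") == SiliconMind().react("cat video"))  # True
```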

As might be suspected, the functionalist view was heavily shaped by computers. Because of this, it is hardly surprising that the functionalist account of the mind could be a plausible account of machine minds.

If mind is defined in functionalist terms, testing for other minds becomes much easier. One does not need to find a way to prove that a specific metaphysical entity or property is present. Rather, a being must be tested to determine its functions. Roughly put, if it can function like beings that are already accepted as having minds (that is, human beings), then it can be taken as having a mind. Interestingly enough, both the Turing test and the Cartesian test mentioned in the previous essays are functional tests: whatever can use true language like a human has a mind.