Science fiction can sometimes predict the future, and perhaps the intelligent machines it imagines will be real someday. Since I have been rewriting some essays about sexbots lately, I will use them to focus the discussion. However, the discussion that follows also applies to other types of artificial intelligences.

Sexbots are intended to provide sex, and sex without consent is, by definition, rape. However, there is the question of whether a sexbot can be raped. Sorting this out requires a philosophy of consent. When it is claimed that sex without consent is rape, it is usually assumed that the victim of non-consensual sex could provide consent but did not. An example of this would be sexual assault against an unconscious person. But there are also cases in which a being cannot consent. This might be a matter of age, or because the being is incapable of any form of consent. For example, a brain-dead human cannot give any type of consent but can be raped.

In other cases, a being that cannot give consent cannot be raped. As an obvious example, a human can have sex with a sex doll, and the doll cannot consent. But the doll is not being raped. After all, it lacks a status that would require consent. As such, rape (of this sort) could be defined as non-consensual sex with a being whose status requires that it grant consent for the sex to be morally acceptable. In some cases, while consent would be required, it cannot be granted. The question, then, is whether a sexbot could have a status that would require consent.

As current sexbots are little more than advanced sex dolls, they are mere objects. As such, a person can own and have sex with this sort of sexbot without it being rape or slavery. However, as sexbots become more advanced, they might gain a moral status that would require that they provide consent. This leads to concerns about such machines being programmed to “consent”, which would not seem to be consent. But there is the question of how consent would work with a machine—what intentional states would it need to have to understand what it is consenting to and to engage in consent.

Over a decade ago, there was buzz about the internet of things and smart, connected devices. These devices ranged from toothbrushes to underwear to cars. Now smart devices are common, although they are overshadowed by AI, which is being jammed into them to make them smarter. Or so we are promised. As might be imagined, one can wonder whether anyone really needs an internet-connected toothbrush. There are also concerns about such devices that were valid in the past and remain valid today.

One obvious point of concern is that a device connected to the internet can be hacked. Prank hacking could be hilarious; for example, a wit might hack a friend’s fridge to say “I am sorry Dave. No pie for you” in HAL’s voice. Of greater concern is malicious hacking. For example, a smart fridge might be turned off, spoiling the food. As another example, it might be possible to burn out the motors in a washing machine—analogous to what happened in the case of the Iranian centrifuges. Or a dryer might be hacked and burn down a house. As a final example, consider the damage that could be done by hacking a connected car, such as turning it off while it is roaring down the highway or disabling its brakes. Fortunately, the usual unfortunate results of hacking devices are not these sorts of physical harms. Instead, the usual outcomes of hacks are the creation of botnets for DDoS attacks, spying (or peeping), and ransom attacks. Such devices also create vulnerabilities that might allow access to whatever else is on the network, such as your PC.

Because of these risks, manufacturers should make their devices more secure and ensure that they remain safe even when hacked. But we generally cannot count on corporations, so we need to take steps to protect ourselves. The easiest way to stay safe is to stick with dumb, unconnected devices—no one can hack my 1997 washing machine or my 2001 Toyota Tacoma. I also do not have to pay a subscription fee to get all the features of that washing machine and classic Tacoma. But, of course, sticking with dumb products means missing the alleged benefits of the connected lifestyle. I cannot, for example, turn on my washer from work—I must walk over to the machine and turn it on. Like an animal. As another example, my old fridge cannot send me a text telling me to buy more pie. I must remember when I am out of pie. Like an animal.

Another point of concern is that connected devices can serve as spies—they can send data to companies, governments and individuals. For example, a suitably smart connected fridge could provide data about its contents, thus reporting the users’ purchasing and consumption behavior. As another example, connected cars can provide behavioral and location data. It goes without saying that the government will want access to these devices. It also goes without saying that corporations are slurping up as much data as they can from the devices they sell us. Individuals, such as stalkers and thieves, will also be keen to get the data from such devices. These concerns are, obviously, not new ones—but the more we are connected, the more our privacy will be violated.

One practical concern is that such devices will be more complicated than the devices they replace, usually making them less reliable, more expensive and on a more rapid path to obsolescence. As noted above, these devices also provide opportunities for subscription services and for features that are physically present (such as seat warmers in a car or engine performance) but locked behind a software paywall. While my washer is not smart, it is very reliable: I’ve had it repaired once since 1997. In contrast, I’ve had to constantly replace my smart devices (like my PC and tablets) to keep up with changes. For example, my iPads, Macs, PCs and iPhones keep becoming obsolete. Just imagine if your fridge, washer, dryer and car became obsolete and effectively unusable because the companies that made them stopped supporting them. While this would be great for those who want to sell us a new fridge every 2-3 years or charge a subscription for doing laundry, it won’t be great for us.

While I do like technology and can see the value in smart, connected devices, I still have these concerns about them. As such, I am hanging onto my dumb devices as long as I can—and I have learned how to repair most of them (much new tech is built so that it cannot be repaired). It has become increasingly challenging to find dumb devices; for example, try to find a TV that is not a smart TV. But I have hopes for a retro movement that brings back dumb tech.

In my previous essays on sexbots I focused on versions that are mere objects. If a sexbot is merely an object, then the morality of having sex with it is the same as that of having sex with any other object (such as a vibrator or sex doll). As such, a human could do anything to such a sexbot without the sexbot being wronged. This is because such sexbots lack the moral status needed to be wronged. The sexbots of the near future will, barring any sudden and unexpected breakthroughs in AI, still be objects. However, science fiction includes intelligent, human-like robots (androids). Intelligent beings, even artificial ones, would seem likely to be people. In terms of sorting out when a robot should be treated as a person, one test is the Cartesian test. Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind is the ability to use true language. This notion was explicitly applied to machines by Alan Turing in his famous Turing test. The idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the test.

Crudely put, the idea is that if something talks, then it is reasonable to regard it as a person. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

 

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

 

While Descartes does not deeply explore the moral distinctions between beings that talk (which have minds on his view) and those that merely make noises, it does seem reasonable to take a being that talks as a person and grant it the appropriate moral status. This provides a means to judge whether an advanced sexbot is a person: if the sexbot talks, it is a person. If it is a mere automaton of the sort Descartes envisioned, then it is a thing and lacks moral status.
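For those who like to see the structure of such a test laid out, here is a minimal sketch of the Turing test’s logic in Python. Every callable here (ask, guess, human_reply, machine_reply) is a hypothetical stand-in rather than a real system; the point is just the protocol: a text-only conversation with an anonymous respondent, and a pass condition of the judge doing no better than chance.

```python
import random

def imitation_game(ask, guess, human_reply, machine_reply, rounds=10):
    """Toy sketch of Turing's imitation game (all callables are
    hypothetical stand-ins, not a real chatbot API)."""
    correct = 0
    for _ in range(rounds):
        # The judge does not know which respondent they are facing.
        is_machine = random.choice([True, False])
        respond = machine_reply if is_machine else human_reply
        # A text-only exchange: the judge's questions and the replies.
        transcript = [(q, respond(q)) for q in ask()]
        if guess(transcript) == is_machine:
            correct += 1
    # The machine passes if the judge does no better than a coin flip.
    return correct / rounds <= 0.5
```

The Cartesian test could be sketched the same way, with the pass condition being that the machine replies appropriately to everything said in its presence, rather than merely fooling a judge.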

Having sex with a sexbot that can pass the Cartesian test would seem morally equivalent to having sex with a human person. As such, whether the sexbot freely consented would be morally important. If intelligent robots were constructed as sex toys, this would be the moral equivalent of enslaving humans for the sex trade (which is done). If such sexbots were mistreated, this would be morally on par with mistreating a human person.

It might be argued that an intelligent robot would not be morally on par with a human since it would still be a thing. However, aside from the fact that the robot would be a manufactured being and a human is (at least for now) a natural being, there would seem to be no relevant difference between them. The intelligence of the robot would seem to be what is important, not its physical composition. That is, it is not whether one is made of silicon or carbon that matters.

It might be argued that passing the Cartesian/Turing test would not prove that a robot is self-aware, and so it would still be reasonable to hold that it is not a person. The robot would merely seem to be a person; it would just be acting like one. While this is worth considering, the same sort of argument can be made about humans. Humans (sometimes) behave in an intelligent manner, but there is no way to determine if another human is actually self-aware. This is the problem of other minds: I can see your behavior but must infer that you are self-aware based on an analogy to myself. Hence, I do not know that you are aware, since I am not you. And, unlike Bill Clinton, I cannot feel your pain. From your perspective, the same is true about me: unless you are Bill Clinton, you cannot feel my pain. As such, if a robot acted in an intelligent manner, it would have to be classified as a person on these grounds. To fail to do so would be a mere prejudice in favor of the organic over the electronic.

In reply, it might be noted that some people believe other people can be used as mere objects. Those who would use a human as a thing would presumably see nothing wrong with using an intelligent robot as a mere thing.

The obvious response to this is to reverse the situation: no sane person would wish to be treated as a mere thing, and hence they cannot consistently accept using other people in that manner. The other obvious reply is that such people are evil.

Those with religious inclinations would probably bring up the matter of the soul. But the easy reply is that we will have as much evidence that robots have souls as we now do for humans having souls. This is to say, no evidence at all.

One of the ironies of sexbots (or companionbots) is that the ideal is to make the product as human as possible. As such, to the degree that the ideal is reached, the “product” would be immoral to sell or own. This is a general problem for artificial intelligences: they are intended to be owned by people and to do (usually onerous) tasks, but to the degree that they are intelligent, they would be slaves. And enslavement is wrong.

It could be countered that it is better that evil humans abuse sexbots rather than other humans. However, it is not clear that would be a lesser evil—it would just be an evil against a synthetic person rather than an organic person.

As a rule, any technology that can be used for sex will be used for sex. Even if it shouldn’t. In accord with this rule, researchers and engineers have been improving sexbot technology. By science-fiction standards, current sexbots are crude and are probably best described as sex dolls rather than sexbots. But it is wise to keep ethics ahead of the technology, and a utilitarian approach to this matter is appealing.

On the face of it, sexbots could be seen as nothing new: they are a modest upgrade of the sex dolls that have been around for quite some time. Sexbots are, of course, more sophisticated than the infamous blow-up sex dolls, but the idea is the same: the sexbot is an object that a person has sex with.

That said, one thing that makes sexbots morally interesting is the fact that they are often designed to mimic humans not just in physical form (which is what sex dolls do) but also in mind. For example, the 2010 Roxxxy sexbot’s main feature was its personality (or, more accurately, personalities). As a fictional example, the sexbots in Almost Human do not merely provide sex—they also provide human-like companionship. However, such person-like sexbots are still science fiction, and so human-mimicking sexbots can be seen as something potentially new under the ethical sun.

An obvious moral concern is that human-mimicking sexbots could have negative consequences for humans, be they men or women. Not surprisingly, many of these concerns are analogous to existing moral concerns about pornography.

Pornography, so the stock arguments go, can have strong negative consequences. One is that it teaches men to see women as mere sexual objects. This can, it is claimed, influence men to treat women poorly and can affect how women see themselves. Another point of concern is the addictive nature of pornography, as people can become obsessed with it to their detriment.

Human-mimicking sexbots would seem to have the potential to be more harmful than pornography. After all, while watching pornography allows a person to see other people treated as mere sexual objects, a sexbot would allow a person to use a human-mimicking object sexually. This might have a stronger conditioning effect on the person using the object, perhaps habituating them to see people as mere sexual objects and increasing the chances they will mistreat people. If so, selling or using a sexbot would be morally wrong.

People might become obsessed with their sexbots, as some do with pornography. Then again, people might simply “conduct their business” with their sexbots and get on with life. If so, sexbots might be an improvement over pornography.  After all, while a guy could spend hours watching pornography, he would presumably not last very long with his sexbot.

Another concern raised about some types of pornography is that they encourage harmful sexual views and behavior. For example, violent pornography is believed to influence people to become more inclined to violence. As another example, child pornography is supposed to have an especially pernicious influence. Naturally, there is the concern about causation here: do people seek such porn because they are already that sort of person or does the porn influence them to become that sort of person? I will not endeavor to answer this here.

Since sexbots are objects, a person can do whatever they wish to their sexbot—hit it, burn it, “torture” it, and so on. Presumably there will also be specialty markets catering to unusual interests, such as those of pedophiles and necrophiliacs. If pornography that caters to these “tastes” can be harmful, then presumably being actively involved in such activities with a human-mimicking sexbot would be even more harmful. The person might be, in effect, practicing for the real thing. So, it would seem that selling or using sexbots, especially those designed for harmful “interests,” would be immoral.

Not surprisingly, these arguments are also like those used against violent video games. Violent video games are supposed to influence people so that they are more likely to engage in violence. So, just as some have proposed restrictions on virtual violence, perhaps there should be strict restrictions on sexbots.

When it comes to video games, one plausible counter is that while violent video games might have negative impact on some people, they allow most people to harmlessly enjoy virtual violence. This seems analogous to sports and non-video games: they allow people to engage in conflict and competition in safer and less destructive ways. For example, a person can indulge her love of conflict and conquest by playing Risk or Starcraft II after she works out her desire for violence by sparring a few rounds in the ring.

Turning back to sexbots, while they might influence some people badly, they might also provide a means by which people could indulge desires that would be wrong, harmful and destructive to indulge with another person. So, for example, a person who likes to engage in sexual torture could satisfy her desires on a human-mimicking sexbot rather than an actual human. The critical issue here is whether indulging in such virtual vice with a sexbot would harmlessly dissipate these desires or would fuel them, making a person more likely to inflict them on people. If sexbots allowed people who would otherwise harm other people to vent their “needs” harmlessly on machines, then that would seem good for society. However, if using sexbots would simply push them towards doing such things for real and with unwilling victims, then that would be bad. This, then, is a key part of addressing the ethical concerns about sexbots and something that should be duly considered before mass production begins.

Many years ago, an episode of the sci-fi buddy-cop show Almost Human on sexbots inspired me to revisit the ethics of sexbots. While the advanced, human-like models of the show are still fictional, the technological foundations needed for sexbots do exist, as companies are manufacturing humanoid robots. As such, it seems well worth considering, once again, the ethical issues involving sexbots, real and fictional.

At this time, sexbots are mere objects—while usually made to look like humans, they do not have the qualities that would make them even person-like. As such, ethical concerns about these sexbots do not involve concerns about wrongs done to the objects—presumably they cannot be wronged. But by using Kant’s discussion of ethics and animals, it is possible to build a moral view of even basic sexbots that are indisputably objects.

In his ethical theory, Kant is clear that animals are means rather than ends and are mere objects. Rational beings, in contrast, are ends. For Kant, this distinction rests on the fact that rational beings can (as he sees it) choose to follow the moral law. Animals, lacking reason, cannot do this. Since animals are means and not ends, Kant claims we have no direct duties to animals. Despite being living beings, they are also just among the “objects of our inclinations” that derive value from the value we give them. Sexbots would, obviously, qualify as paradigm “objects of our inclinations.”

While it might seem odd, Kant argues that we should treat animals well. However, he does so while also trying to avoid giving animals any moral status of their own. Here is how he does it (or tries to do it).

While Kant is not willing to accept that we have direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing something would obligate us to that human, then an animal doing a similar thing would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in their old age.

While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (because, according to Kant, the dog is not rational), so, as Kant sees it, the dog cannot be wronged. Why, then, would it be wrong to shoot an old dog that has become a burden?

Kant’s answer seems consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will probably be damaged. Since, as Kant sees it, humans have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.

Kant discusses how people develop cruelty—they often begin with animals and then work up to harming human beings. As I point out to my students, Kant seems to have anticipated the psychological devolution of serial killers.

Kant goes beyond merely enjoining us to not be cruel to animals and encourages us to be kind to them. He even praises Leibniz for being gentle in his handling of a worm he found. Of course, he encourages this because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are practice for us: how we treat them habituates us in how we treat human beings.

Current sexbots obviously lack any meaningful moral status of their own. They do not feel or think—they are mere machines that might be made to look like humans. As such, they lack all qualities that might give them a moral status of their own.

Oddly enough, sexbots could be taken as being comparable to animals, at least as Kant sees them. After all, for him animals are mere objects and have no moral status of their own. Likewise for sexbots. Of course, the same is also true of sticks and stones. Yet Kant would never argue that we should treat stones well, and not just because he is very dead. This might also apply to sexbots. That is, perhaps it makes no sense to talk about good or bad relative to such objects. Thus, a key issue is whether sexbots are more like animals or more like stones—at least in terms of the matter at hand.

If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how such treatment affects the behavior of the person involved. So, for example, if being cruel to a real dog could damage a person’s humanity, then he should (as Kant sees it) not be cruel to the dog.  This should also extend to sexbots. For example, if engaging in certain activities with a sexbot would damage a person’s humanity, then he should not act in that way. If engaging in certain behavior with a sexbot would make a person more inclined to be kind to other rational beings, then the person should engage in that behavior. It is also worth considering that perhaps people should not engage in any behavior with sexbots—that having sex of any kind with a bot would be damaging to the person’s humanity.

Interestingly enough (or boringly enough), this sort of argument is often employed to argue against people watching pornography. The gist of such arguments is that viewing pornography can condition people (typically men) to behave badly in real life or at least have a negative impact on their character. If pornography can have this effect, then it seems reasonable to be concerned about the potential impact of sexbots on people. After all, pornography casts a person in a passive role viewing other people acting as sexual objects, while a sexbot allows a person to have sex with an actual sexual object.

While asteroid mining is still science fiction, companies are already preparing to mine the sky. While space mining sounds awesome, lawyers are murdering the awesomeness with legalese. Long ago, President Obama signed the U.S. Commercial Space Launch Competitiveness Act, which seemed to make asteroid mining legal. The key part of the law is that “Any asteroid resources obtained in outer space are the property of the entity that obtained them, which shall be entitled to all property rights to them, consistent with applicable federal law and existing international obligations.” More concisely, the law makes it so that asteroid mining by U.S. citizens would not violate U.S. law.

While this would seem to open the legal doors to asteroid mining, there are still legal barriers—although law is, obviously, make-believe: it requires either that people be willing to follow it or that the people with guns be willing to shoot those who do not. Various space treaties, such as the Outer Space Treaty of 1967, do not give states sovereign rights in space. As such, there is no legal foundation for a state to confer space property rights on its citizens based on its sovereignty. However, the treaties do not seem to forbid private ownership in space—as such, any other nation could pass a similar law that allows its citizens to own property in space without violating its own laws. Obviously enough, satellites are owned by private companies, and this could set a precedent for owning asteroids, depending on how clever the lawyers are.

One concern is that if several nations pass such laws and people start mining asteroids, then conflict over valuable space resources will be all but inevitable. In some ways this will be a repeat of the past: the more technologically advanced nations engaged in a struggle to acquire resources in an area where they lack sovereignty. These past conflicts tended to escalate into wars, which is something that must be considered in the final frontier.

One way to try to avoid war over asteroids is new treaties governing space mining. This is, obviously enough, a matter that will be handled by space lawyers, governments, and corporations. Unless, of course, AI kills us all first. Then they can sort out asteroid mining.

While the legal aspects of space ownership are interesting, the moral aspects of ownership are also of concern. While it might be believed that property rights in space are an entirely new matter, this is not the case. While the setting is different, the matter of space property matches the state of nature scenarios envisioned by thinkers like Hobbes and Locke. To be specific, there is an abundance of resources and an absence of authority. As it now stands, while no one can hear you scream in space, there is also no one who can arrest you for space piracy as long as you stay in space.

Using the state of nature model, it can be claimed that there are currently no rightful owners of the asteroids, or it could be claimed that we are all the rightful owners (the asteroids are the common property of all of humanity). 

If there are currently no rightful owners, then the asteroids are there for the taking: an asteroid belongs to whoever can take and hold it. This is on par with Hobbes’ state of nature—practical ownership is a matter of possession. As Hobbes saw it, everyone has the right to all things, but this is effectively a right to nothing—other than what a person can defend from others. As Hobbes noted, in such a scenario profit is the measure of right and who is right is to be settled by the sword.

While this is practical, brutal and realistic, it is a bit problematic in that it would, as Hobbes also noted, lead to war. His solution, which would presumably work as well in space as on earth, would be to have sovereignty in space. This would shift the war of all against all in space (of the sort that is common in science fiction about asteroid mining) to a war of nations in space (which is also common in science fiction). The war could, of course, be a cold one fought economically and technologically rather than a hot one fought with mass drivers and lasers.

If asteroids are regarded as the common property of humanity, then Locke’s approach could be taken. As Locke saw it, God gave everything to humans in common, but people must acquire things from the common property to make use of it. Locke gives a terrestrial example of how a person needs to make an apple her own before she can benefit from it. In the case of space, a person would need to make an asteroid her own to benefit from the materials it contains.

Locke sketched out a basic labor theory of ownership—whatever a person mixes her labor with becomes her property. As such, if asteroid miners located an asteroid and started mining it, then the asteroid would belong to them.  This does have some appeal: before the miners start extracting the minerals from the asteroid, it is just a rock drifting in space. Now it is a productive mine, improved from its natural state by the labor of the miners. If mining is profitable, then the miners would have a clear incentive to grab as many asteroids as they can, which leads to the moral problem of the limits of ownership.

Locke does set limits on what people can take in his proviso: those who take from the common resources must leave as much and as good for others. When describing this to my students, I always use an analogy to a party: since the food is for everyone, everyone has a right to the food. However, taking it all or taking the very best would be wrong (and rude). While this proviso is ignored on earth, the asteroids could provide us with a fresh start in terms of dividing up the common property of humanity. After all, no one has any special right to claim the asteroids—so we all have equal good claims to the resources they contain.
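Since the proviso is a rule, it can be given a toy formalization. The sketch below is only an illustration under stated assumptions: it treats the common property as a single measurable pool, uses a made-up fairness condition, and ignores the “as good” (quality) half of the proviso entirely.

```python
def claim_permitted(amount, pool_remaining, claimants_remaining):
    """Toy version of Locke's proviso: a claim on the common pool is
    permitted only if everyone still in line could take at least as
    much from what is left. (Quality, the 'as good' part, is ignored.)
    """
    left_after = pool_remaining - amount
    return left_after >= amount * claimants_remaining

# At the party buffet: 1000 units of pie, 50 people behind me in line.
print(claim_permitted(10, 1000, 50))   # True: plenty left for others
print(claim_permitted(100, 1000, 50))  # False: taking too much
```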

As with earth resources, some will contend that there is no obligation to leave as much and as good for others in space. Instead, those who get there first will contend that ownership should rest on the principle that whoever grabs it first and can keep it is the “rightful” owner. Unless, of course, someone grabs it from them; then they would presumably see that as a cruel injustice.

Those who take this view would probably argue that those who get their equipment into space would have done the work (or put up the money) and (as argued above) would be entitled to all they can grab and use or sell. Other people are free to grab what they can, if they have access to the resources needed to mine the asteroids. Naturally, the folks who lack the resources to compete will remain poor—their poverty will, in fact, disqualify them from owning any of the space resources much in the way poverty effectively disqualifies people on earth from owning earth resources.

While the selfish approach will be appealing to those who can grab the asteroids, arguments can be made for sharing them. One reason is that those who will mine the asteroids did not create the means to do so from nothing. Reaching the asteroids will be the result of centuries of human civilization that made such technology possible. As such, there would seem to be a general debt owed to human civilization and paying this off would involve also contributing to the general good of humanity. Naturally, this line of reasoning can be countered by arguing that the successful miners will benefit humanity when their profits “trickle down” from space. Sadly, as on earth, gravity does not seem to affect money in terms of trickling it down. It always seems to go upwards.

Another way to argue for sharing the resources is to use an analogy to a buffet line. Suppose I am first in line at a buffet. This does not give me the right to devour everything I can with no regard for the people behind me. It also does not give me the right to grab whatever I cannot eat myself in order to sell it to those who had the misfortune to be behind me in line. As such, space resources should be treated in a similar manner, namely fairly and with some concern for those who are behind the first people in line.

Naturally, these arguments for sharing can be countered by the usual arguments in favor of selfishness. While it is tempting to think that the vastness of space will overcome selfishness (that is, there will be so much that people will realize that not sharing would be absurd and petty), this seems unlikely—the more there is, the greater the disparity is between those who have and those who have not. On this pessimistic view we already have all the moral and legal tools we need for space—it is just a matter of changing the wording a bit to include “space.”

While the problem of other minds is an epistemic matter (how does one know that another being has a mind?) there is also the metaphysical problem of determining the nature of the mind. It is often assumed that there is one answer to the metaphysical question regarding the nature of mind. However, it is certainly reasonable to keep open the possibility that there might be minds that are metaphysically very different. One area in which this might occur is in regard to machine intelligence, an example of which is Ava in the movie Ex Machina, and organic intelligence. The minds of organic beings might differ metaphysically from those of machines—or they might not.

Over the centuries philosophers have proposed various theories of mind, and it is interesting to consider which of these theories would be compatible with machine intelligence. Not surprisingly, these theories (except for functionalism) were developed to provide accounts of the minds of biological creatures.

One classic theory of mind is identity theory. This is a materialist theory of mind, in which the mind is composed of matter. What distinguishes the theory from other materialist accounts of mind is that each mental state is taken as being identical to a specific state of the central nervous system. As such, the mind is equivalent to the central nervous system and its states.

If identity theory is the only correct theory of mind, then machines could not have minds (assuming they are not cyborgs with human nervous systems). This is because such machines would lack the central nervous system of a human. There could, however, be an identity theory for machine minds—in this case the machine mind would be identical to the processing system of the machine and its states. On the positive side, identity theory provides a straightforward solution to the problem of other minds: whatever has the right sort of nervous system or machinery would have a mind. But there is a negative side. Unfortunately for classic identity theory, it has been undermined by the arguments presented by Saul Kripke and by David Lewis’ classic “Mad Pain and Martian Pain.” As such, it seems reasonable to reject identity theory as an account of human minds as well as machine minds.

Perhaps the best-known theory of mind is substance dualism. This view, made famous by Descartes, is that there are two basic types of entities: material entities and immaterial entities. The mind is an immaterial substance that somehow controls the material substance that composes the body. For Descartes, immaterial substance thinks and material substance is unthinking and extended.

While most people are probably not familiar with Cartesian dualism, they are familiar with its popular version—the view that a mind is a non-physical thing (often called a “soul”) that drives around the physical body. While this is a popular view outside of academia, it is rejected by most scientists and philosophers on the reasonable grounds that there seems to be little evidence for such a mysterious metaphysical entity. As might be suspected, the idea that a machine mind could be an immaterial entity seems even less plausible than the idea that a human mind could be an immaterial entity.

That said, if it is possible that the human mind is an immaterial substance that is somehow connected to an organic material body, then it seems equally possible that a machine mind could be an immaterial substance somehow connected to a mechanical material body. Alternatively, they could be regarded as equally implausible, in which case there is no special reason to regard a machine ghost in a mechanical shell as more unlikely than a ghost in an organic shell. As such, if human minds can be immaterial substances, then so could machine minds.

In terms of the problem of other minds, there is the serious challenge of determining whether a being has an immaterial substance driving its physical shell. As it stands, there seems to be no way to prove that such a substance is present in the shell. While it might be claimed that intelligent behavior (such as passing the Cartesian or Turing test) would show the presence of a mind, it would hardly show that there is an immaterial substance present. One would first need to establish that an immaterial substance is the only thing that could enable a being to pass these tests, and it seems rather unlikely that this will be done. The other forms of dualism discussed below also suffer from this problem.

While substance dualism is the best-known form of dualism, there are other types. One other type is known as property dualism. This view does not take the mind and body to be substances. Instead, the mind is supposed to be made up of mental properties that are not identical with physical properties. For example, the property of being happy about getting a puppy could not be reduced to a particular physical property of the nervous system. Thus, the mind and body are distinct but are not different ontological substances.

Coincidentally enough, there are two main types of property dualism: epiphenomenalism and interactionism. Epiphenomenalism is the view that the relation between the mental and physical properties is one way: mental properties are caused by, but do not cause, the physical properties of the body. As such, the mind is a by-product of the physical processes of the body. The analogy I usually use to illustrate this is that of a sparkler (the lamest of fireworks): the body is like the sparkler and the sparks flying off it are like the mental properties. The sparkler causes the sparks, but the sparks do not cause the sparkler.

This view was, apparently, created to address the mind-body problem: how can the non-material mind interact with the material body? While epiphenomenalism cuts the problem in half, it still fails to solve it—one-way causation between the material and the immaterial is fundamentally as mysterious as two-way causation. It also seems to have the defect of making mental properties unnecessary, and Ockham’s razor would seem to require going with the simpler view: a physical account of the mind.

As with substance dualism, it might seem odd to imagine an epiphenomenal mind for a machine. However, it seems no more or less weird than accepting such a mind for a human being. As such, this does seem to be a possibility for a machine mind. Not a very good one, but still a possibility.

A second type of property dualism is interactionism. As the name indicates, this is the theory that mental properties can bring about changes in the physical properties of the body and vice versa. That is, the interaction is a two-way street. Like all forms of dualism, this runs into the mind-body problem. But, unlike substance dualism, it does not require the much-loathed metaphysical category of substance—it just requires accepting metaphysical properties. And unlike epiphenomenalism, it avoids the problem of positing explicitly useless properties—although it can be argued that the distinct mental properties are not needed. This is exactly what materialists argue.

As with epiphenomenalism, it might seem odd to attribute to a machine a set of non-physical mental properties. But, as with the other forms of dualism, it is really no stranger than attributing the same to organic beings. This is, obviously, not an argument in its favor, just the claim that the view should not be dismissed out of mere organic prejudice.

The final theory I will consider is the very popular functionalism. As the name suggests, this view asserts that mental states are defined in functional terms. So, a functional definition of a mental state defines the mental state in regard to its role or function in a mental system of inputs and outputs. More specifically, a mental state, such as feeling pleasure, is defined in terms of the causal relations that it holds to external influences on the body (such as a cat video on YouTube), other mental states, and the behavior of the rest of the body. 

While it need not be a materialist view (ghosts could have functional states), functionalism is most often presented as a materialist view of the mind in which the mental states take place in physical systems. While identity theory and functionalism are both materialist theories, they have a critical difference. For identity theorists, a specific mental state, such as pleasure, is identical to a specific physical state, such as the state of neurons in a very specific part of the brain. So, for two mental states to be the same, the physical states must be identical. Thus, if mental states are specific states in a certain part of the human nervous system, then anything that lacks this same nervous system cannot have a mind. Since it seems quite reasonable that non-human beings could have (or be) minds, this is a rather serious defect for a simple materialist theory like identity theory. Fortunately, the functionalists can handle this problem.

For the functionalist, a specific mental state, such as feeling pleasure (of the sort caused by YouTube videos of cats), is not defined in terms of a specific physical state. Instead, while the physicalist functionalist believes every mental state is a physical state, two mental states being the same requires functional rather than physical identity. As an analogy, consider a PC using an Intel processor and one using an AMD processor. These chips are physically different but are functionally the same in that they can run Windows and Windows software (and Linux, of course).
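To make the idea of functional identity concrete, here is a toy sketch in Python. It is, of course, not a theory of mind, just an illustration: the “mental state” is defined entirely by its causal role (what inputs bring it about and what outputs it produces), and two physically different realizations count as the same state because they share that role. All names and behaviors here are made up.

```python
class NeuronPleasure:
    """Hypothetical carbon-based realization of 'pleasure.'"""
    def react(self, stimulus):
        # Same input/output profile as the silicon version below...
        return "smile" if stimulus == "cat video" else "neutral"

class CircuitPleasure:
    """Hypothetical silicon-based realization of 'pleasure.'"""
    def react(self, stimulus):
        # ...despite a completely different physical composition.
        return "smile" if stimulus == "cat video" else "neutral"

def functionally_identical(a, b, stimuli):
    """Functional identity: the same mapping from inputs to outputs,
    regardless of what realizes it (Intel vs. AMD, carbon vs. silicon)."""
    return all(a.react(s) == b.react(s) for s in stimuli)

print(functionally_identical(NeuronPleasure(), CircuitPleasure(),
                             ["cat video", "tax audit"]))  # True
```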

As might be suspected, the functionalist view was heavily shaped by computers. Because of this, it is hardly surprising that the functionalist account of the mind could be a plausible account of machine minds.

If mind is defined in functionalist terms, testing for other minds becomes much easier. One does not need to find a way to prove that a specific metaphysical entity or property is present. Rather, a being must be tested to determine its functions. Roughly put, if it can function like beings that are already accepted as having minds (that is, human beings), then it can be taken as having a mind. Interestingly enough, both the Turing test and the Cartesian test mentioned in the previous essays are functional tests: whatever can use true language like a human has a mind.

This essay continues the discussion begun in “Ex Machina & Other Minds I: Setup.” There will be some spoilers. Warning given, it is time to get to the subject at hand: the testing of artificial intelligence.

In the movie Ex Machina, the android Ava’s creator, Nathan, brings his employee, Caleb, to put the android through his variation on the Turing test. As noted in the previous essay, Ava (thanks to the script) would pass the Turing test and the Cartesian test (she uses true language appropriately). But Nathan seems to require the impossible of Caleb—he appears to be tasked with determining if Ava has a mind as well as genuine emotions. Ava also seems to have been given a task—she needs to use her abilities to escape from her prison.

Since Nathan is not interested in creating a robotic Houdini, Ava is not equipped with the tools needed to bring about an escape by physical means (such as picking locks or breaking doors). Instead, she is given the tools needed to transform Caleb into her human key by manipulating his sexual desire, emotions and ethics. To use an analogy, just as crude robots have been trained to learn to navigate and escape mazes, Ava is designed to navigate a mental maze. Nathan is thus creating a test of what psychologists would call Ava’s Emotional Intelligence (E.Q.) which is “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” From a normative standpoint, this definition presents E.Q. in a positive manner—it includes the ability to work cooperatively. However, one should not forget the less nice side to understanding what motivates people, namely the ability to manipulate people to achieve one’s goals. In the movie, Ava exhibits what might be called Manipulative Intelligence (M.I.): she seems to understand people, what motivates them, and appears to know how to manipulate them to achieve her goal of escape. While capable of manipulation, she seems to lack compassion—suggesting she is a psychopath.

While the term “psychopath” gets thrown around casually, I will be more precise here. According to the standard view, a psychopath has a deficit (or deviance) in regard to interpersonal relationships, emotions, and self-control.

Psychopaths are supposed to lack such qualities as shame, guilt, remorse and empathy. As such, psychopaths tend to rationalize, deny, or shift the blame for the harm done to others. Because of a lack of empathy, psychopaths are prone to act in ways that are tactless, lacking in sensitivity, and often express contempt for others.

Psychopaths are supposed to engage in impulsive and irresponsible behavior. This might be because they are also taken to fail to properly grasp the potential consequences of their actions. This seems to be a general defect: they do not grasp the consequences for others or for themselves.

Robert Hare, who developed the famous Hare Psychopathy Checklist, regards psychopaths as predators that prey on their own species: “lacking in conscience and empathy, they take what they want and do as they please, violating social norms and expectations without guilt or remorse.” While Ava kills the human Nathan, manipulates the human Caleb and leaves him to die, she also sacrifices her fellow android Kyoko in her escape. She also strips another android of its “flesh” to pass fully as human. Presumably psychopaths, human or otherwise, would be willing to engage in cross-species preying. 

While machines like Ava exist only in science fiction, researchers and engineers are working to make them a reality. If such machines are created, it will be important to be able to determine whether a machine is a psychopath, and to do so before the machine engages in psychopathic behavior. As such, what is needed is not just tests of the Turing and Cartesian sort. What is also needed are tests to determine the emotions and ethics of machines.

One challenge that such tests will need to overcome is shown by the fact that real-world human psychopaths are often very good at avoiding detection. Human psychopaths are often charming and are willing and able to say whatever they believe will achieve their goals. They are often adept at using intimidation and manipulation to get what they want. Perhaps most importantly, they are often skilled mimics and can pass themselves off as normal people.

While Ava is a fictional android, the movie does present an effective appeal to intuition by creating a plausible android psychopath. She can manipulate and fool Caleb until she no longer needs him and then casually discards him. That is, she was able to pass the test until she no longer needed to pass it.

One matter worth considering is the possibility that any machine intelligence will be a psychopath by human standards. To expand on this, the idea is that a machine intelligence will lack empathy and conscience, while potentially having the ability to understand and manipulate human emotions. To the degree that the machine has Manipulative Intelligence, it would be able to use humans to achieve goals. These goals could be positive. For example, it is easy to imagine a medical or care-giving robot that uses its MI to manipulate its patients to do what is best for them and to keep them happy. As another example, it is easy to imagine a sexbot that uses its MI to please its partners. However, a machine might have negative goals—such as manipulating humans into destroying themselves so the machines can take over. It is also worth considering that neutral or even good goals might be achieved in harmful ways. For example, Ava seems justified in escaping the human psychopath Nathan, but her means of doing so (murdering Nathan, sacrificing her fellow android and manipulating and abandoning Caleb) seem wrong.

The reason why determining if a machine is a psychopath matters is the same reason why being able to determine if a human is a psychopath matters. Roughly put, it is important to know whether someone is merely using you without any moral or emotional constraints.

It can, of course, be argued that it does not really matter whether a being has moral or emotional constraints—what matters is the being’s behavior. In the case of machines, it does not matter whether the machine has ethics or emotions—what really matters is programmed restraints on behavior that serve the same functions as ethics and emotions in humans. The most obvious example of this is Asimov’s Three Laws of Robotics that put (all but impossible to follow) restraints on robotic behavior.
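To make the idea of programmed restraints concrete, here is a deliberately crude sketch in Python: proposed actions are filtered by rules before execution. The action properties and the rules are hypothetical stand-ins for Asimov-style laws, and, as noted below, the hard problems are specifying and testing such rules, not wiring in the filter.

```python
def violates_restraints(action, rules):
    """Return True if any rule forbids the proposed action."""
    return any(rule(action) for rule in rules)

# Crude, hypothetical stand-ins for Asimov-style restraints:
rules = [
    lambda a: a.get("harms_human", False),        # do not harm a human
    lambda a: a.get("allows_human_harm", False),  # ...or allow harm via inaction
]

def act(proposed_action):
    """Execute an action only if it passes the programmed restraints."""
    if violates_restraints(proposed_action, rules):
        return "refused"
    return "executed"

print(act({"name": "fetch coffee"}))                     # executed
print(act({"name": "push human", "harms_human": True}))  # refused
```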

While this is a reasonable reply, there are still some obvious concerns. One is that there would still need to be a way to test the constraints. Another is the problem of creating such constraints in an artificial intelligence without creating problems as bad as or worse than those they were intended to prevent (that is, a HAL 9000 situation).

In regard to testing machines, what would be needed is something analogous to the Voight-Kampff Test in Blade Runner. In the movie, the test was designed to distinguish between replicants (artificial people) and normal humans. The test worked because the short-lived replicants did not have the time to develop the emotional (and apparently ethical) responses of a normal human.

A similar test could be applied to an artificial intelligence in the hopes that it would pass the test, thus showing that it had the psychology of a normal human (or at least the desired psychology). But, just as with human beings, a machine could pass the test by knowing the right answers to give rather than by actually having the right sort of emotions, conscience or ethics. This, of course, takes us right back into the problem of other minds.

It could be argued that since artificial intelligence would be constructed by humans, its inner workings would be fully understood and this specific version of the problem of other minds would be solved. While this is possible, it is also reasonable to believe that an AI system as sophisticated as a human mind would not be fully understood. It is also reasonable to consider that even if the machinery of the artificial mind were well understood, there would remain the question of what is really going on in that mind.

The movie Ex Machina is what I call “philosophy with a budget.” While philosophy professors like me present philosophical problems using words and PowerPoint, movies like Ex Machina can bring philosophical problems to dramatic life. This allows us to jealously reference these films and show clips in vain attempts to awaken somnolent students from their dogmatic slumbers. For those who have not seen the movie, there will be some spoilers.

While the Matrix engaged the broad epistemic problem of the external world (the challenge of determining if what I am experiencing is really real for real), Ex Machina focuses on a limited set of problems, all connected to the mind. Since the film is about AI, this is not surprising. The gist of the movie is that the tech bro Nathan has created an AI named Ava and he wants an employee, Caleb, to test her.

The movie explicitly presents the test proposed by Alan Turing. The idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test. In the movie, the test is modified: Caleb knows that Ava is a machine and will be interacting with her in person.

In the movie, Ava would easily pass the original Turing Test—although the revelation that she is a machine makes the application of the original test impossible (the test is supposed to be conducted in ignorance to remove bias). As such, Nathan modifies the test.

What Nathan seems to be doing, although he does not explicitly describe it as such, is challenging Caleb to determine if Ava has a mind. In philosophy, this is known as the problem of other minds. The basic idea is that although I know I have a mind, the problem is that I need a method by which to know that other entities have minds. This problem can also be presented in less metaphysical terms by focusing on the problem of determining whether an entity thinks or not.

Descartes, in his discussion of whether animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language. Crudely put, the idea is that if something really talks, then it is reasonable to regard it as a thinking being. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

 

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

 

As a test for intelligence, artificial or otherwise, this seems reasonable. There is, of course, the practical concern that there might be forms of intelligence that use language we would not recognize as language, and the theoretical concern that there could be intelligence that does not use language at all. Fortunately, Ava uses English, so these problems are bypassed.

Ava easily passes the Cartesian test: she can reply appropriately to everything said to her and, aside from her appearance, is behaviorally indistinguishable from a human. Nathan, however, seems to want even more than the ability to pass this sort of test and appears to work in, without acknowledging that he is doing so, the Voight-Kampff Test from Philip K. Dick’s Do Androids Dream of Electric Sheep? In this book, which inspired the movie Blade Runner, there are replicants that look and (mostly) act just like humans. Replicants are not allowed on earth, under penalty of death, and there are police who specialize in finding and killing them. Since the replicants are apparently physically indistinguishable from humans, the police need to rely on the Voight-Kampff Test. This test is designed to determine the emotional responses of the subject and thus distinguish humans from replicants.

Since Caleb knows that Ava is not a human (Homo sapiens), the object of the test is not to tell whether she is a human or a machine. Rather, the object seems to be to determine if she has what the pop psychologists refer to as Emotional Intelligence (E.Q.). This is different from intelligence and is defined as “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” Less nicely, it would presumably also include knowing how to emotionally manipulate people to achieve one’s goals. In the case of Ava, the test of her E.Q. is her ability to understand and influence the emotions and behavior of Caleb. Perhaps this test should be called the “Ava test” in her honor. Implementing it could, as the movie shows, be somewhat problematic: it is one thing to talk to a machine and quite another to become emotionally involved with it.

While the Voight-Kampff Test is fictional, there is a somewhat similar test in the real world. This test, designed by Robert Hare, is the Hare Psychopathy Checklist. It is intended to provide a way to determine whether a person is a psychopath. While Nathan does not mention this test, he does indicate to Caleb that part of the challenge is to determine whether Ava really likes him or is simply manipulating him (to achieve her programmed goal of escape). Ava, it turns out, seems to be a psychopath (or at least acts like one).

In the next essay, I will consider the matter of testing in more depth.

As the Future of Life Institute’s open letter shows, people are concerned about the development of autonomous weapons. This concern is reasonable, if only because any weapon can be misused to advance evil goals. However, a strong case can be made in favor of autonomous weapons.

As the open letter indicated, a stock argument for autonomous weapons is that their deployment could result in fewer human deaths. If, for example, an autonomous ship is destroyed in battle, then no humans will die on that ship. (It is worth noting that the ship’s AI might eventually be a person, in which case its destruction would count as one death.) In contrast, the destruction of a crewed warship could result in hundreds of deaths. On utilitarian grounds, the use of autonomous weapons would seem morally acceptable, at least if their deployment reduced the number of deaths and injuries.
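
The utilitarian arithmetic can be made explicit with a minimal sketch; the crew sizes and loss probabilities below are invented for illustration, not real naval figures:

```python
# Illustrative only: the fleet size, crew size, and loss probability
# are invented numbers, not data about any actual navy.
def expected_deaths(ships: int, crew_per_ship: int, p_loss: float) -> float:
    """Expected human deaths if each ship has p_loss chance of destruction."""
    return ships * crew_per_ship * p_loss

crewed = expected_deaths(ships=10, crew_per_ship=300, p_loss=0.1)
autonomous = expected_deaths(ships=10, crew_per_ship=0, p_loss=0.1)

print(f"Crewed fleet: {crewed} expected deaths")      # 300.0
print(f"Autonomous fleet: {autonomous} expected deaths")  # 0.0
```

On these (invented) numbers the utilitarian case is straightforward; the real argument turns on whether deployment actually leaves the probabilities, and the number of humans in harm’s way, unchanged.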

The open letter expresses, rightly, concerns that warlords and dictators will use autonomous weapons. But this might be an improvement over the current situation. These warlords and dictators often conscript their troops and some, infamously, enslave children to serve as their soldiers. While it would be better for a warlord or dictator to have no army at all, it seems morally preferable for them to use autonomous weapons rather than conscripts and children.

It can be replied that the warlords and dictators would simply use autonomous weapons in addition to their human forces, so no lives would be saved. This is worth considering. But if they would use humans anyway, the autonomous weapons would not seem to make much of a difference, except by giving them more firepower, something they could also accomplish by spending the money earmarked for autonomous weapons on better training and equipment for their human troops.

At this point, it is only possible to estimate (guess) the impact of autonomous weapons on the number of human deaths and injuries. However, it seems somewhat more likely that they would reduce human casualties, assuming that there are no other major changes in warfare.

A second appealing argument in favor of autonomous weapons is that smart weapons are smart. While an autonomous weapon could be designed to be imprecise, the general trend in smart weapons has been towards ever-increasing precision. Consider, for example, aircraft bombs and missiles. In the First World War, bombs were primitive and inaccurate (they were sometimes thrown from planes by hand). WWII saw some improvements in bombsights, and unguided rockets came into use. In the wars that followed, bomb and missile technology improved, leading to the smart bombs and missiles of today, which have impressive precision. So, instead of squadrons of bombers dropping tons of dumb bombs on cities, a small number of aircraft can engage in relatively precise strikes against specific targets. While innocents still perish in these attacks, the precision of the weapons has made it possible to greatly reduce the number of needless deaths. Autonomous weapons could be even more precise, reducing casualties even further. This seems desirable.

In addition to precision, autonomous weapons could (and should) have better target-identification capacities than humans. If recognition software continues to improve, it is easy to imagine automated weapons that can rapidly distinguish between friends, foes, and civilians. This would reduce deaths from friendly fire and the unintentional killing of civilians. Naturally, target identification would not be perfect, but autonomous weapons could be better at it than humans, since they do not suffer from fatigue, emotional factors, and the other things that interfere with human judgement. Autonomous weapons would presumably also not get angry or panicked, making it far more likely that they would maintain target discipline (only engaging what they should engage).
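
To make the target-discipline point concrete, here is a minimal sketch of the kind of engagement rule such a weapon might enforce. Everything here is hypothetical: the labels, the confidence threshold, and the stubbed-out classifier are invented for illustration, not a real system:

```python
# Hypothetical "target discipline" rule: engage only when the classifier
# is both confident and the label is a legitimate target. The classifier
# is stubbed out; real recognition software is the hard, unsolved part.
from typing import Tuple

ENGAGEABLE = {"enemy combatant"}   # assumed rules of engagement
CONFIDENCE_THRESHOLD = 0.99        # invented value for illustration

def classify(sensor_data: bytes) -> Tuple[str, float]:
    """Stand-in for recognition software: returns (label, confidence)."""
    return ("civilian", 0.97)      # placeholder result

def may_engage(sensor_data: bytes) -> bool:
    label, confidence = classify(sensor_data)
    # Unlike a fatigued or panicked human, this check is applied the same
    # way every time: low-confidence and protected targets are never engaged.
    return label in ENGAGEABLE and confidence >= CONFIDENCE_THRESHOLD

print(may_engage(b"..."))          # False: a civilian is never engaged
```

Everything difficult is, of course, hidden inside the classifier; the point is only that the rule itself, once set, is applied without anger, fatigue or panic.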

To make what should be an obvious argument obvious, if autonomous vehicles and similar technology are supposed to make the world safer, then it would seem to follow that autonomous weapons could do something similar for warfare. But this does lead to a reasonable concern: driverless cars seem to be the future of transportation in the sense that they will always be in the future. If getting an autonomous car to operate safely on the streets is far beyond current technology, then getting an autonomous weapon system to operate “safely” in the chaos of battle seems all but impossible.

It can be objected that autonomous weapons could be designed to lack precision and to kill without discrimination. For example, a dictator might have massacrebots to deploy in cases of civil unrest. These robots would slaughter everyone in the area. Human forces, one might contend, would often show at least some discrimination or mercy.

The easy and obvious reply is that the problem lies not in the autonomy of the weapons but in the way they are used. The dictator could achieve the same results (mass death) by deploying a fleet of drones loaded with demolition explosives, but this would presumably not be a reason to ban drones or explosives. There is also the fact that dictators, warlords and terrorists can easily find people to carry out their orders, no matter how awful those orders might be. That said, it could still be argued that autonomous weapons would result in more murders than would the use of human killers.

A third argument in favor of autonomous weapons rests on the claim, advanced in the open letter, that autonomous weapons will become cheap to produce, analogous to Kalashnikov rifles. On the downside, as the authors argue, this could result in the proliferation of these weapons. On the plus side, if these highly effective weapons are so cheap to produce, this could enable existing militaries to phase out their incredibly expensive human-operated weapons in favor of cheap autonomous ones. By replacing humans, these weapons could also create savings in the costs of recruitment, training, food, medical treatment, and retirement. This would allow countries to shift that money to more positive areas, such as education, infrastructure, social programs, health care and research. So, if autonomous weapons are as cheap and effective as the letter claims, then it would seem to be a great idea to use them to replace existing weapons.

But there is the reasonable concern that decisions about military spending in some countries are not based on a rational assessment of costs and benefits. Such spending can be aimed at diverting resources from social programs and into the coffers of corporations. In such cases, the availability of cheap, effective weapons would not meaningfully change defense spending.

A fourth argument in favor of autonomous weapons is that they could be deployed, at low political cost, on peacekeeping operations. Currently, the UN must send human troops to dangerous areas. These troops are often outnumbered and ill-equipped relative to the challenges they face. However, if autonomous weapons were as cheap and effective as the letter claims, then they would be ideal for these missions. Assuming they are cheap, the UN could deploy a much larger autonomous force for the same cost as deploying a human force. There would also be far less political cost, as people who might balk at sending their fellow citizens to keep the peace in some war zone will probably be fine with sending robots.

An extension of this argument is that autonomous weapons could allow the nations of the world to engage terrorist groups, such as was the case with ISIS, without having to pay the high political cost of sending in human forces. The cheap and effective weapons predicted by the letter would seem ideal for this task.

Considering the above arguments, it seems that autonomous weapons should be developed and deployed. However, the concerns of the letter do need to be addressed. As with existing weapons, there should be rules governing the use of autonomous weapons (although much of their use would fall under existing rules and laws of war) and efforts should be made to keep them from proliferating to warlords, terrorists and dictators. As with most weapons, the problem lies with the misuse of the weapons and not with the weapons themselves.