Before the Trump regime, the United States military expressed interest in developing robots capable of moral reasoning and provided grant money to support such research. Other nations are no doubt also interested.

The notion of instilling robots with ethics is a common theme in science fiction, the most famous example being Asimov's Three Laws. The classic Forbidden Planet provides an early movie example of robotic ethics: Robby the robot has an electro-mechanical seizure if he is ordered to cause harm to a human being (or an id-monster created by the mind of his creator, Dr. Morbius). In contrast, the killer machines of science fiction (like Saberhagen's Berserkers) tend to be free of moral constraints.

While there are various reasons to imbue robots with ethics (or at least pretend to do so), one is public relations. Thanks to science fiction dating back at least to Frankenstein, people worry about our creations getting out of control. As such, a promise that our killbots will be governed by ethics might reassure the public. Another reason is to make the public relations gimmick a reality—to place behavioral restraints on killbots so they will conform to the rules of war (and human morality). Presumably the military will also address the science fiction theme of the ethical killbot that refuses to kill on moral grounds. But given the ethics of war endorsed by the Trump regime, they are probably not interested in ethical war machines.

While science fiction features ethical robots, the authors (like philosophers) are vague about how robot ethics works. In the case of intelligent robots, their ethics might work the way ours does—which is a mystery debated by philosophers and scientists to this day. While AI has improved thanks to massive processing power, it does not have human-like ethical capacity, so the current practical challenge is to develop ethics for the autonomous or semi-autonomous robots we can build now.

While creating ethics for robots might seem daunting, the limitations of current robot technology mean that robot ethics is a matter of programming these machines to operate in specific ways defined by whatever ethical system is used. One way to look at programming such robots with ethics is that they are being programmed with safety features. To use a simple example, suppose that I see shooting unarmed people as immoral. To make my killbot operate according to that ethical view, it would be programmed to recognize armed humans and have some code saying, in effect, "if unarmed_human == true, then fire_to_kill = false" or, in plain English, if the human is unarmed, do not shoot them. Getting the robot to reliably recognize weapons would be a serious programming feat, likely with people dying in the process.
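To make that concrete, here is a minimal sketch (in Python, purely for illustration) of what such a rule might look like as code. The Target class and its is_armed field are hypothetical stand-ins for the output of whatever perception system the robot actually uses:

    from dataclasses import dataclass

    @dataclass
    class Target:
        """Hypothetical output of the killbot's perception system."""
        is_human: bool
        is_armed: bool  # set by some (fallible) weapon-recognition model

    def may_fire(target: Target) -> bool:
        # The "ethics" lives entirely in this conditional:
        # an unarmed human may never be fired on.
        return not (target.is_human and not target.is_armed)

As the sketch makes plain, the rule itself is trivial; the hard (and dangerous) part is the perception code that decides whether is_armed is true.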

While a suitably programmed robot would act in a way that seemed ethical, the robot would not be engaged in ethical behavior. After all, it is merely a more complex version of an automatic door. A supermarket door, though it opens for you, is not polite. The shredder that catches your tie and chokes you is not evil. Likewise, the killbot that does not shoot you because its cameras show you are unarmed is not ethical, and the killbot that chops you into chunks is not unethical. Following Kant, since the killbot's programming is imposed and the killbot lacks the freedom to choose, it is not engaged in ethical (or unethical) behavior, though the complexity of its behavior might make it seem so.

To be fair to the killbots, perhaps humans are not ethical or unethical by these standards either—we could just be meat-bots operating under the illusion of ethics. Also, it is sensible to focus on the practical aspect of the matter: if you are targeted by a killbot, your concern is not whether it is an autonomous moral agent or merely a machine—your main worry is whether it will kill you. As such, the general practical problem is getting our killbots to behave in accord with our ethical values. Or, in the case of the Trump regime, a lack of ethics.

Achieving this goal involves three steps. The first is determining which ethical values we wish to impose on our killbots. Since this is a practical matter and not an exercise in philosophical inquiry, this will involve using the accepted ethics (and laws) governing warfare rather than trying to determine what is truly good (if anything). The second step is translating those ethics into behavioral terms. For example, the moral principle that makes killing civilians wrong would be translated into sets of allowed and forbidden behaviors relative to civilians. This would require creating a definition of civilian that allows recognition using the robot's sensors. As another example, the moral principle that surrender should be accepted would require defining surrender behavior in a way the robot could recognize. The third step would be coding that behavior in whatever programming language is used for the robot in question. For example, the robot would need to be programmed to engage in surrender-accepting behavior. Naturally, the programmers (or those typing prompts into an AI program) would need to worry about clever combatants trying to "deceive" the killbot to take advantage of its programming (like pretending to surrender to get close enough to destroy the killbot).
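As a rough illustration of steps two and three, the civilian and surrender principles might reduce to rules like the following sketch (Python again, purely illustrative; the predicates is_civilian and is_surrendering are hypothetical placeholders for what would be very hard perception problems):

    from enum import Enum, auto

    class Action(Enum):
        HOLD_FIRE = auto()
        ACCEPT_SURRENDER = auto()
        ENGAGE = auto()

    def choose_action(target) -> Action:
        # Step two translated the moral principles into behavioral rules;
        # step three is simply encoding those rules in priority order.
        if target.is_civilian:        # "killing civilians is wrong"
            return Action.HOLD_FIRE
        if target.is_surrendering:    # "surrender must be accepted"
            return Action.ACCEPT_SURRENDER
        return Action.ENGAGE

A combatant who fakes the surrender behavior would be exploiting exactly these predicates, which is why the deception worry is really a worry about perception, not about the rules themselves.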

Since these robots would be following programmed rules, they would seem to be governed by deontological ethics—that is, ethics based on following rules. Thus, they would be (with due apologies to Asimov) the Robots of Deon.

A practical question is whether the "ethical" programming would allow for overrides or reprogramming. Since the robot's "ethics" would just be behavior-governing code, it could be changed, and it is easy to imagine ethics settings in which a commander could selectively (or not so selectively) turn off behavioral limitations. And, of course, killbots could simply be programmed without such ethics (or programmed to be "evil").
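A crude sketch of what such an override might look like, continuing the earlier example (the override_ethics flag is purely illustrative):

    def may_fire(target, override_ethics: bool = False) -> bool:
        # With the override set, the "ethics" simply vanishes: the
        # behavioral limits were only ever code, and code can be
        # switched off by anyone with the right access.
        if override_ethics:
            return True
        return not (target.is_human and not target.is_armed)

The point of the sketch is that nothing in the architecture enforces the constraint; the robot's "ethics" is one branch away from being gone.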

One impact of this research will be that some people will get to live the science-fiction dream of teaching robots to be good. That way the robots might feel a little bad when they kill us all.

