
The United States military has expressed interest in developing robots capable of moral reasoning and has provided grant money to some well-connected universities to address this problem (or to at least create the impression that the problem is being considered).
The notion of instilling robots with ethics is a common theme in science fiction, the most famous example being Asimov’s Three Laws. The classic Forbidden Planet provides an early movie example of robotic ethics: Robby the Robot has an electro-mechanical seizure if he is ordered to harm a human being (or an id-monster created by the mind of his creator, Dr. Morbius). In contrast, the killer machines of science fiction (like Saberhagen’s Berserkers) tend to be free of the constraints of ethics.
While there are various reasons to imbue (or limit) robots with ethics (or at least engage in the pretense of doing so), one of these is public relations. Thanks to science fiction dating back at least to Frankenstein, people tend to worry about our creations getting out of control. As such, a promise that our killbots will be governed by ethics serves to reassure the public (or so it is hoped). Another reason is to make the public relations gimmick a reality—to actually place behavioral restraints on killbots so they will conform to the rules of war (and human morality). Presumably the military will also address the science fiction theme of the ethical killbot who refuses to kill on moral grounds.
While science fiction features ethical robots, the authors (like philosophers who discuss the ethics of robots) are extremely vague about how robot ethics actually works. In the case of truly intelligent robots, their ethics might work the way our ethics works—which is something that is still a mystery debated by philosophers and scientists to this day. We are not yet to the point of having such robots, so the current practical challenge is to develop ethics for the sort of autonomous or semi-autonomous robots we can build now.
While creating ethics for robots might seem daunting, the limitations of current robot technology mean that robot ethics is essentially a matter of programming these machines to operate in specific ways defined by whatever ethical system is being employed as the guide. One way to look at programming such robots with ethics is that they are being programmed with safety features. To use a simple example, suppose that I regard shooting unarmed people as immoral. To make my killbot operate according to that ethical view, it would be programmed to recognize armed humans and have some code saying, in effect, “if unarmedhuman = true, then firetokill = false” or, in normal English, if the human is unarmed, do not shoot her.
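To make that concrete, here is a minimal sketch of such a rule in Python. The names (Target, is_human, is_armed, authorize_fire) are purely hypothetical illustrations of the “safety feature” idea, not anything drawn from a real targeting system:

```python
from dataclasses import dataclass


@dataclass
class Target:
    """Hypothetical stand-in for whatever the robot's sensors report."""
    is_human: bool
    is_armed: bool


def authorize_fire(target: Target) -> bool:
    """Encode the rule: if the human is unarmed, do not shoot.

    This is the 'if unarmedhuman = true, then firetokill = false' check
    from the text, written out as ordinary code.
    """
    if target.is_human and not target.is_armed:
        return False
    return True
```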
While a suitably programmed robot would act in a way that seemed ethical, the robot is obviously not engaged in ethical behavior. After all, it is merely a more complex version of the automatic door. The supermarket door, though it opens for you, is not polite. The shredder that catches your tie and chokes you is not evil. Likewise, the killbot that does not shoot you in the face because its cameras show that you are unarmed is not ethical, and the killbot that chops you into meaty chunks is not unethical. Following Kant, since the killbot’s programming is imposed and the killbot lacks the freedom to choose, it is not engaged in ethical (or unethical) behavior, though the complexity of its behavior might make it seem so.
To be fair to the killbots, perhaps we humans are not ethical or unethical under these requirements for ethics—we could just be meat-bots operating under the illusion of ethics. Also, it is certainly sensible to focus on the practical aspect of the matter: if you are a civilian being targeted by a killbot, your concern is not whether it is an autonomous moral agent or merely a machine—your main worry is whether it will kill you or not. As such, the general practical problem is getting our killbots to behave in accord with our ethical values.
Achieving this goal involves three main steps. The first is determining which ethical values we wish to impose on our killbots. Since this is a practical matter rather than an exercise in philosophical inquiry, this will presumably involve using the accepted ethics (and laws) governing warfare rather than trying to determine what is truly good (if anything). The second step is translating the ethics into behavioral terms. For example, the moral principle that makes killing civilians wrong would be translated into sets of allowed and forbidden behaviors. This would require creating a definition of a civilian (or perhaps just of an unarmed person) that allows recognition using the robot’s sensors. As another example, the moral principle that surrender should be accepted would require defining surrender behavior in a way the robot could recognize. The third step would be coding that behavior in whatever programming language is used for the robot in question. For example, the robot would need to be programmed to engage in surrender-accepting behavior. Naturally, the programmers would also need to worry about clever combatants trying to “deceive” the killbot to take advantage of its programming (such as pretending to surrender so as to get close enough to destroy the killbot).
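A rough sketch of how steps two and three might look in code follows. The categories and the classifier output here are hypothetical simplifications; real perception, rules of engagement, and counter-deception measures would be vastly more complicated:

```python
from enum import Enum, auto


class Classification(Enum):
    """Behavioral categories the robot's sensors are assumed to report."""
    ARMED_COMBATANT = auto()
    UNARMED_PERSON = auto()   # the stand-in for 'civilian' in this sketch
    SURRENDERING = auto()     # e.g. weapon dropped, hands raised
    UNKNOWN = auto()


# Step two: the ethical principles translated into forbidden behavior.
FORBIDDEN_TARGETS = {
    Classification.UNARMED_PERSON,   # do not kill civilians
    Classification.SURRENDERING,     # accept surrender
    Classification.UNKNOWN,          # default to not firing when unsure
}


# Step three: the behavioral rule expressed in code.
def may_engage(classification: Classification) -> bool:
    return classification not in FORBIDDEN_TARGETS
```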
Since these robots would be following programmed rules, they would presumably be controlled by deontological ethics—that is, ethics based on following rules. Thus they would be (with due apologies to Asimov) the Robots of Deon.
An interesting practical question is whether or not the “ethical” programming would allow for overrides or reprogramming. Since the robot’s “ethics” would just be behavior-governing code, it could be changed, and it is easy enough to imagine an ethics-preference setting in which a commander could selectively (or not so selectively) turn off behavioral limitations. And, of course, killbots could simply be programmed without such ethics (or programmed to be “evil”).
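Continuing the earlier sketch, such an override could be as simple as a single flag, which is exactly the worry. The mechanism below is hypothetical and only meant to show how thin the barrier is:

```python
def may_engage_with_override(classification: Classification,
                             ethics_enabled: bool = True) -> bool:
    """The same rule as before, but with a commander-level off switch."""
    if not ethics_enabled:
        # With the behavioral limits switched off, nothing is forbidden.
        return True
    return may_engage(classification)
```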
The largest impact of the government funding for this sort of research will be that properly connected academics will get surprisingly large amounts of cash to live the science-fiction dream of teaching robots to be good. That way the robots will feel a little bad when they kill us all.
Robots Cannot Have “Ethics,” By Definition
The robot, of course, couldn’t, by definition, be “ethical,” as the robot has no self-interest, no “final end” in Aristotle’s sense–it’s just a contraption like a mouse-trap, rubber-band, or paper-clip, and it can only follow or serve its program, whatever that may be. Hence the only possible “ethic” of the robot is to follow its instructions/program, and if it can’t do this then its “ethic” is defective.
The human only has ethics (the logic between ends and means) because it has will, hence ends: the pursuit of life, liberty, and happiness; thus the human considers the means and the values thereof given the end to be pursued or chosen.
And the human problems of ethics usually have to do simply with long-term vs. short-term, there usually being a kind of matrix of ends in effect at any given moment. Thus the horse-and-buggy gave way to the automobile on the general ethic of efficiency/economy–so will go the “ethics” of robots.
Robots cannot have ethics as robots have no will, merely a program installed at the will of the programmer. The ethic of a robot is merely the ethic of the programmer/builder–no different from that of a paper-clip or rubber-band, etc.
Hence any talk about the “ethics” of robots could only be propaganda and “public relations” from arms-merchants and weapons-manufacturers trying to make sales, as usual. And as usual, any “ethics” will always be kept general, abstract, and un-defined–no less than it is nowadays in colleges and universities, “ethics” merely being a synonym for OBEDIENCE to whatever.
Consider Immanuel Kant (see http://en.wikipedia.org/wiki/Immanuel_Kant), who is nowadays upheld as the great ethical exemplar for all students of philosophy: he said (a) one should be “ethical” and choose a metaphysics appropriately–hence non-determinism, hence subjective (in accord with “free” will), hence mystic, hence anti-philosophical. For it’s “good” to do so (to choose non-determinism)–why?–Kant doesn’t say–one should merely be good–out of duty.
(b) So why then is “duty” good?–or why should one follow “duty”?–Kant doesn’t say–one just “should,” ho ho ho. Kant then goes on: (c) WHAT is “duty”?–ho ho ho. (d) “Duty” is anti-selfishness–why?–Kant doesn’t say; it just is, you see, ho ho ho ho ho. What a joke.
(e) Kant then goes on to say duty is in accord with the “categorical imperative”–why?–Kant doesn’t say. (f) And what is the “categorical imperative”?–Kant doesn’t say, but rather merely describes it–acting in such a way as to make it “universal”–why?–Kant doesn’t say, ho ho ho ho ho. Observe the comical series of question-begging (un-founded assertions) and redundant, circular logic, only couched in pretentious terminology and high-flown phraseology–such was the method of Kant which took the world by storm, ho ho ho ho ho.
And if you think this “ethics” of Immanuel Kant is impressive, check out his epistemology, ho ho ho ho–equally entertaining and impervious, but perfectly consonant and consistent in its absurdity.
And such is Immanuel Kant, truly the tongue-in-cheek Aristophanes of our modern age, a perfect robot of emptiness, incoherence, and absurdity, but most of all COMEDY–whom college professors revere, the biggest joke being upon students, the citizens, and tax-payers. No wonder people today are so stupid and suicidal, the economy doomed, Agenda-21 genocide upheld as ideal.