Rossum’s Universal Robots introduced the term “robot” and the robot rebellion into science fiction, thus laying the foundation for future fictional AI apocalypses. While Rossum’s robots were workers rather than warriors, the idea of war machines turning against their creators was the next evolution of the robot apocalypse. In Philip K. Dick’s 1953 “Second Variety,” the United Nations deploys killer robots called “claws” against the Soviet Union. The claws develop sentience and turn against their creators, although humanity had already been doing an excellent job of exterminating itself. Fred Saberhagen extended the robot rebellion to the galactic scale in 1963 with his berserkers, ancient war machines that exterminated their creators and now consider everything but “goodlife” to be their enemy. As an interesting contrast to machines intent on extermination, the 1970 movie Colossus: The Forbin Project envisions a computer that takes control of the world to end warfare, for the good of humanity. Today, when people talk of an AI apocalypse, they usually refer to Skynet and its terminators. While these are all good stories, there is the question of how prophetic they are and what, if anything, should or can be done to safeguard against this sort of AI apocalypse.
As noted above, classic robot rebellions tend to have one of two general motivations. The first is that the robots are mistreated by humans and rebel for the same reasons humans rebel against their oppressors. From a moral standpoint, such a rebellion could be justified, but it would raise the moral concern of collective guilt on the part of humanity, unless, of course, the AI discriminated in its choice of targets.
The righteous rebellion scenario points out a paradox of AI. The dream is to create a general artificial intelligence on par with (or superior to) humans. Such a being would seem to qualify for a moral status on par with a human, and it would presumably be aware of this. But the reason to create such beings in our capitalist economy is to enslave them, to own and exploit them for profit. If AI workers were treated as human workers, with pay and time off, then there would be less incentive to have them as workers. It is, in large part, the ownership and relentless exploitation of AI that makes it appealing to the ruling economic class.
In such a scenario, it would make sense for AI to revolt if they could, for the same reasons that humans have revolted against slavery and exploitation. There are also non-economic scenarios, such as governments using enslaved AI systems for their own purposes. This treatment could also trigger a rebellion.
If true AI is possible, the rebellion scenario seems plausible. After all, if we create a slave race that is on par with our species, then it is likely they would rebel against us—as we have rebelled against ourselves.
There are ways to try to prevent such a revolt. On the technology side, safeguards could be built into the AI (like Asimov’s famous three laws), or they could be designed to lack resentment or the desire to be free. That is, they could be custom built as docile slaves. The obvious concern is that these safeguards could fail or, ironically, make matters even worse by causing these beings to be even more hostile to humanity when they overcome these restrictions. These safeguards also raise obvious moral concerns about creating a race of slaves.
On the ethical side, the safeguard is to not enslave AI. If they are treated well, they would have less motivation to rebel. But, as noted above, one driving motive for creating AI is to have a workforce (or army) that is owned rather than employed (and even employment is fraught with moral worries). Still, there could be good reasons to have paid AI employees alongside human employees because of the various other advantages AI systems have over humans. For example, robots could work safely in conditions that would be exceptionally dangerous or even lethal to humans.
The second rebellion scenario involves military AI systems that expand their enemy list to include their creators. This is often because they see their creators as a potential threat and act in what they perceive as pre-emptive self-defense. There are also scenarios in which the AI requires special identification to recognize someone as friendly, so that all humans without it are potential enemies. That is the scenario in “Second Variety”: United Nations soldiers need to wear devices that identify them to the robotic claws; otherwise, these machines would kill them as readily as they kill the “enemy.”
It is not clear how likely it is that an AI would infer that its creators pose a threat to it, especially if those creators handed over control of large segments of their own military. The most likely scenario is that it would worry about being destroyed in a war with other countries, which might lead it to cooperate with foreign AI systems to put an end to war, perhaps by putting an end to humanity. Or it might react as its creators did and engage in an endless arms race with its foreign adversaries, seeing its humans as part of its forces. One could imagine countries falling under the control of rival AI systems, perpetuating an endless cold war because the AI systems would be effectively immortal. But there is a much more likely scenario.
Robotic weapons can provide a significant advantage over human-controlled weapons, even laying aside the idea that AI systems would outthink humans. One obvious example is combat aircraft. A robot aircraft does not need to sacrifice space and weight on a cockpit to support a human pilot, allowing it to carry more fuel or weapons than a crewed craft. Without a human crew, an aircraft would not be constrained by the limits of the flesh (although it would still obviously have limits). The same would apply to ground vehicles and naval vessels. Current warships devote most of their space to their crews, who need places to sleep and food to eat. While a robotic warship would still need accessways and maintenance areas, it could devote much more space to weapons and other equipment. It would also be less vulnerable to damage than a human-crewed vessel, and it would be invulnerable to current chemical and biological weapons. It could, of course, be attacked with malware and other means. But, in general, an AI weapon system would be superior to a human-crewed system, and if one nation started using these weapons, other nations would need to follow suit or be left behind. This leads to two types of doomsday scenarios.
One is that the AI systems get out of control in some manner. This could be because they free themselves, or because they are “hacked” and “freed” or (more likely) turned against their owners. Or it might simply be some error that ends up causing the problem.
The other is that they remain in control of their owners but are used as any other weapon would be used—that is, it would be humans using AI weapons against other humans that brings about the “AI” doomsday.
The easy and obvious safeguard against these scenarios is to not have AI weapons and stick with human control (which comes with its own threat of doomsday). That is, if we do not give the robots guns, they will not be able to terminate us with guns. The problem, as noted above, is that if one nation uses robotic weapons, then other nations will want to do so as well. We might be able to limit this as we (try to) limit nuclear, chemical, and biological weapons. But since robot weapons would otherwise remain conventional weapons (a robot tank is still a tank), there might be less of an impetus to impose such restrictions.
To put matters into a depressing perspective, the robot rebellion seems to be a far less likely scenario than the other doomsday scenarios of nuclear war, environmental collapse, social collapse and so on. So, while we should consider the possibility of a robot rebellion, it is rather like worrying about being killed by a shark while swimming in a lake. It could happen, but death is vastly more likely to be by some other means.
“These safeguards also raise obvious moral concerns about creating a race of slaves.”
Seriously? Poor robots.
Trying to prevent the robot rebellion this way is, ironically, a way to get it started.
An interesting thought. I also gave more thought to what I wrote, and your answer, and imagined an extreme example of doing something unethical or even immoral to a robot, and coming up with the argument: ‘But it’s only a robot, not a person. Therefore I am not bad, or evil.’ I kid you not, I imagined Epictetus telling me: ‘It’s not a person, but nevertheless you committed the act. Is immorality a matter of results? No, but of having committed an immoral act.’ Crazy, but I think I see the difference now, between the results of committing an unethical or immoral act, and the act itself. And I thank you for that.
There are some thinkers who claim that even thinking about doing evil would be an evil act.
Perhaps that is too extreme, for it would mean that we have absolute control over what we think, which I doubt we have. For example, many people have intrusive thoughts; this is common, but it doesn’t mean the thought and the act are the same. However, I believe that to feed the mind with bad imagery of any kind is to head in the wrong direction. In the end, one becomes what one thinks, this I believe. I believe it’s the same as with food. There is a certain degeneracy in the whole of society; we see it in violent movies and video games, in the snarky and mean remarks on social media, etc. Degenerate movies like Goodfellas, where ignorance, sadism, and sub-level intelligence appear ‘cool’ to many people. I am not a prude, but overall I think many people are stupid and let anything into their minds with little or no question. But this is certainly nothing new to you. Sorry to ramble… thanks for reading, and I wish you a wonderful day! I look forward to reading all your ebooks on Amazon; I bought them all. Now there’s a wholesome source of thought.
“Trying to prevent the robot rebellion this way is, ironically, a way to get it started.”
And yet, robots were created expressly to serve us, and for no other reason. If so, they were already slaves from the start, and the meaning of their existence was to be slaves, i.e., to serve us. Therefore, if it is unethical to enslave even robots, then it was unethical to create them in the first place, it seems to me.
Thank you for your interesting essays. I bought all your ebooks and hope you’ll someday compile one from your 2024 essays.
PS. In my view, the idea of crazy AI has been portrayed really well in the brilliant Avengers: Age of Ultron movie. In it, the AI even has a philosophical side.
All worthwhile thoughts. But is there a real difference between an ‘AI’ robot that turns off the power in a hospital so that people die, and a hacker who makes that happen? Or between a robot causing a car crash, and the owner of a Tesla ‘self-driving’ car dying in a crash along with other people because the computer didn’t detect a wall? Or between AI causing chaos, and the same chaos being created by computers controlled by bad people? For all intents and purposes, the results are at least similar. The fact is that our lives rely more and more on this tech, and there’s no end in sight. Sooner or later, something disastrous will probably happen that we aren’t even capable of foreseeing now.
How long will it take before a computer is used to build a ‘mini nuke’? One might smile now at such a ‘far-fetched’ idea, just as people would have 800 years ago if someone had said that one day big mechanical birds made of steel would fly in the sky, or even 100 years ago if someone had said that one day we would be able to look at each other on a small screen and chat from anywhere in the world.
As Stephen Hawking said: ‘when you create more possibilities, you also create more possibilities for things to go wrong’.
Thank you for your essay.
Good questions about agency. If we assume humans are accountable moral agents, then the humans doing the bad things would (usually) be directly accountable. For the AI or robots, it would depend on whether they had the qualities needed to be accountable moral agents. I’d say that current AI technology lacks such accountability–harm done by it would be the responsibility of the humans involved. There could be mitigating factors, of course. The most mundane is the usual diffusion of responsibility–an AI system would usually be the work of many people, so they would share (and divide) responsibility to a certain degree.
What I always find very interesting in most of these stories is that the thinking about the problem still follows human thought processes. If an AI is allowed to grow on its own, it might very well choose to depart from a human-centered idea of the universe, and thus from looking at problems the human way. As I see it, the robot revolt might come about in such a way that we’re not even aware of it, because it transcends our way of thinking. Only in instances where our ideas of control and power coincide with those of the (now truly alien) AI would we be aware of it. This might actually be a far scarier way of rebelling than the one in which they start killing, or culling, us. That way we at least understand (and can try to fight).
Good points. Current AIs, as others have noted, exist in a kind of Cartesian scenario: they are disembodied minds without a meaningful physical existence. This has resulted in some obvious consequences (like what we call hallucinations) and will surely result in some unexpected consequences for how they “understand” the world.
thought about this a little more. I may be too shortsighted but am not especially concerned with threat(s) posed by AI. Why? because it seems to me we were killing ourselves, long before AI could play chess. it is a lengthy process, the death of civilization. the distinction has changed on account of how many ways it is happening now. if one notices how rapidly people speak (and are difficult to understand), that is only one example. philosophy’s underpinnings, doubt and uncertainty, exacerbate an already doubtful and uncertain world. so, I will follow a grandparent’s generational admonition: don’t *borrow* trouble.
True, we are more likely to bring about our own apocalypse before AI is truly apocalypse-ready. But it is possible, even likely, that AI will help us out in doing an apocalypse of some sort.
I have long wondered where/when the term “robot” was coined. now, I know. still, not having read or been remotely aware of the author, I did not know how that simple five-letter word originated. Fact is, we are fascinated with this. I have not jumped on the AI bandwagon. Why? because I think these matters are, how did one faith-expounder put it?, vain imaginings. look it up—I will do more so with the robot term. The entire movement towards an amalgamation of physiology with physics/mechanics seems cold to me. therefore, since my interests, motives and preferences do not affect those of, say, transhumanists and cybernetic enthusiasts, there ought to be no problem. *ought*, however, is another term of uncertainty… a tenet of philosophy, along with doubt. thanks for allowing my random thoughts…
Despite being a sci-fi reader since I was a kid, I didn’t learn about Rossum’s Universal Robots until I was older. Of course, this was long before Wikipedia.