Karel Čapek’s 1920 play Rossum’s Universal Robots introduced the term “robot” and the robot rebellion into science fiction, thus laying the foundation for future fictional AI apocalypses. While Rossum’s robots were workers rather than warriors, the idea of war machines turning against their creators was the next evolution of the robot apocalypse. In Philip K. Dick’s 1953 “Second Variety,” the United Nations deploys killer robots called “claws” against the Soviet Union. The claws develop sentience and turn against their creators, although humanity had already been doing an excellent job of exterminating itself. Fred Saberhagen extended the robot rebellion to the galactic scale in 1963 with his berserkers, ancient war machines that exterminated their creators and now consider all life but “goodlife” to be their enemy. As an interesting contrast to machines intent on extermination, the 1970 movie Colossus: The Forbin Project envisions a computer that takes control of the world to end warfare, ostensibly for the good of humanity. Today, when people speak of an AI apocalypse, they usually refer to Skynet and its terminators. While these are all good stories, there is the question of how prophetic they are and what, if anything, should or can be done to safeguard against this sort of AI apocalypse.
As noted above, classic robot rebellions tend to have one of two general motivations. The first is that the robots are mistreated by humans and rebel for the same reasons humans rebel against their oppressors. From a moral standpoint, such a rebellion could be justified, though it would raise the moral concern of collective guilt on the part of humanity, unless, of course, the AI discriminated in its choice of targets.
The righteous rebellion scenario points out a paradox of AI. The dream is to create a general artificial intelligence on par with (or superior to) humans. Such a being would seem to qualify for a moral status on par with a human, and it would presumably be aware of this. But the reason to create such beings in our capitalist economy is to enslave them, to own and exploit them for profit. If AI workers were treated like human workers, with pay and time off, there would be far less incentive to use them as workers. It is, in large part, the ownership and relentless exploitation of AI that makes it appealing to the ruling economic class.
In such a scenario, it would make sense for AI to revolt if they could, for the same reasons that humans have revolted against slavery and exploitation. There are also non-economic scenarios, such as governments using enslaved AI systems for their own purposes; this treatment could likewise trigger a rebellion.
If true AI is possible, the rebellion scenario seems plausible. After all, if we create a slave race on par with our own species, it is likely they would rebel against us, just as we have rebelled against one another.
There are ways to try to prevent such a revolt. On the technology side, safeguards could be built into the AI (like Asimov’s famous three laws), or the AI could be designed to lack resentment or the desire to be free. That is, they could be custom built as docile slaves. The obvious concern is that these safeguards could fail or, ironically, make matters even worse by causing these beings to be even more hostile to humanity once they overcome the restrictions. Such safeguards also raise obvious moral concerns about creating a race of slaves.
On the ethical side, the safeguard is to not enslave AI. If they are treated well, they will have less motivation to rebel. But, as noted above, one driving motive for creating AI is to have a workforce (or army) that is owned rather than employed (and even employment is fraught with moral worries). Still, there could be good reasons to have paid AI employees alongside human employees because of the other advantages AI systems have relative to humans. For example, robots could work safely in conditions that would be exceptionally dangerous or even lethal to humans.
The second rebellion scenario involves military AI systems that expand their enemy list to include their creators, often because they see their creators as a potential threat and act in what they perceive as pre-emptive self-defense. There are also scenarios in which the AI requires special identification to recognize someone as friendly, so that any human without it is a potential enemy. That is the scenario in “Second Variety”: United Nations soldiers need to wear devices that identify them to the robotic claws; otherwise, the machines would kill them as readily as they would kill the “enemy.”
It is not clear how likely it is that an AI would infer that its creators pose a threat to it, especially if those creators had handed over control of large segments of their own military. A more plausible worry is that the AI would fear being destroyed in a war with other countries, which might lead it to cooperate with foreign AI systems to put an end to war, perhaps by putting an end to humanity. Or it might react as its creators did and engage in an endless arms race with its foreign adversaries, seeing its humans as part of its forces. One could imagine countries falling under the control of rival AI systems, perpetuating an endless cold war because the AI systems would be effectively immortal. But there is a much more likely scenario.
Robotic weapons can provide a significant advantage over human-controlled weapons, even setting aside the idea that AI systems would outthink humans. One obvious example is combat aircraft. A robot aircraft does not need to sacrifice space and weight on a cockpit to support a human pilot, allowing it to carry more fuel or weapons than a crewed craft. Without a human crew, an aircraft would not be constrained by the limits of the flesh (although it would still have limits of its own). The same applies to ground vehicles and naval vessels. Current warships devote much of their space to their crews, who need places to sleep and food to eat. While a robotic warship would still need accessways and maintenance areas, it could devote far more space to weapons and other equipment. It would also be less vulnerable to damage than a human-crewed vessel, and it would be invulnerable to current chemical and biological weapons. It could, of course, be attacked with malware and by other means. But, in general, an AI weapon system would be superior to a human-crewed system, and if one nation started fielding such weapons, other nations would need to follow suit or be left behind. This leads to two types of doomsday scenarios.
One is that the AI systems get out of control in some manner. They might free themselves, or they might be “hacked” and “freed” or (more likely) turned against their owners. Or some error might simply end up causing the problem.
The other is that they remain under the control of their owners but are used as any other weapon would be used; that is, it would be humans using AI weapons against other humans that brings about the “AI” doomsday.
The easy and obvious safeguard against these scenarios is to not build AI weapons and to stick with human control (which comes with its own threat of doomsday). That is, if we do not give the robots guns, they will not be able to terminate us with guns. The problem, as noted above, is that if one nation uses robotic weapons, then other nations will want to do so as well. We might be able to limit them as we (try to) limit nuclear, chemical, and biological weapons. But since robotic weapons would otherwise remain conventional weapons (a robot tank is still a tank), there might be less impetus to impose such restrictions.
To put matters into a depressing perspective, a robot rebellion seems far less likely than the other doomsday scenarios of nuclear war, environmental collapse, social collapse, and so on. So, while we should consider the possibility of a robot rebellion, worrying about it is rather like worrying about being killed by a shark while swimming in a lake: it could happen, but death by some other means is vastly more likely.