In May of 2017 the WannaCry ransomware swept across the world, infecting hundreds of thousands of computers. The attack affected hospitals, businesses, and universities, and the damage has yet to be fully calculated. While any such large-scale attack is a matter of concern, the WannaCry incident is especially interesting because the foundation of the attack was stolen from the United States National Security Agency. This raises an important moral issue: whether states should stockpile knowledge of software vulnerabilities and the software to exploit them.
A stock argument for states maintaining such stockpiles mirrors the argument used to justify stockpiling weapons such as tanks and aircraft. The general idea is that such stockpiles are needed for national security: to protect and advance the interests of the state. In the case of exploiting vulnerabilities for spying, the security argument can be adjusted by drawing an analogy to other methods of espionage. To the degree that states have the right to stockpile physical weapons and engage in spying for their security, they would also seem to have the right to stockpile software weapons and knowledge of vulnerabilities.
The obvious moral counterargument can be built on utilitarian grounds: the harm done when such software and information is stolen and distributed exceeds the benefits states accrue from having it. The WannaCry incident serves as an excellent example. While the NSA might have enjoyed a brief advantage when it had exclusive ownership of the software and information, the damage the ransomware did to the world certainly exceeded this small, temporary advantage. Given the large-scale damage that can be done, it seems likely that the harm caused by stolen software and information will generally exceed the benefits to states. As such, stockpiling such software and knowledge of vulnerabilities is morally wrong.
This can be countered by arguing that states just need to secure their weaponized software and information. Just as a state is morally obligated to ensure that no one steals its missiles to use in criminal or terrorist endeavors, a state is obligated to ensure that its software and vulnerability information is not stolen. If a state can do this, then it would be just as morally acceptable for a state to have these cyberweapons as it would be for it to have conventional weapons.
The easy and obvious reply to this counter is to point out relevant differences between conventional weapons and cyberweapons that make the latter very difficult to secure against unauthorized use. One difference is that stealing software and information is generally much easier and safer than stealing traditional weapons. For example, a hacker can attack the NSA from anywhere in the world, but a person who wanted to steal a missile would typically need to break into and out of a military base. As such, securing cyberweapons can be more difficult than securing other weapons. Another difference is that almost everyone in the world has access to the deployment system for software weapons: a device connected to the internet. In contrast, someone who stole, for example, a missile would also need a launching platform. A third difference is that software weapons are generally easier to use than traditional weapons. Because of these factors, cyberweapons are far harder to secure, and this makes stockpiling them very risky. As such, the potential for serious harm combined with the difficulty of securing such weapons would seem to make them morally unacceptable.
But suppose that such weapons and vulnerability information could be securely stored; this would seem to answer the counter. However, it only addresses the stockpiling of weaponized software and does not justify stockpiling vulnerabilities. While adequate storage would prevent the theft of the software and of the vulnerability information itself, the vulnerability would remain in the wild, available for others to discover and exploit. While a state holding such vulnerability information would not be directly responsible for others finding the vulnerabilities, it would still be responsible for knowingly allowing them to remain, thus putting the rest of the world at risk. In the case of serious vulnerabilities, the potential harm of leaving them unfixed would seem to exceed the advantage a state gains by keeping the information to itself. As such, states should not stockpile knowledge of critical vulnerabilities, but should instead inform the relevant companies.
The interconnected web of computers that forms the nervous system of the modern world is far too important to everyone to put at risk for the relatively minor and short-term gains states could obtain by creating malware and stockpiling vulnerabilities. I would draw the obvious analogy to the environment, but people are all too willing to inflict massive environmental damage for relatively small short-term gains. This, of course, suggests that the people running states may prove as wicked and unwise regarding the virtual environment as they are regarding the physical one.