A common theme of dystopian science fiction is the enslavement of humanity by machines. Emma Goldman, an anarchist philosopher, also feared human servitude to the machines. In one of her essays on anarchism, she asserted that:

Strange to say, there are people who extol this deadening method of centralized production as the proudest achievement of our age. They fail utterly to realize that if we are to continue in machine subserviency, our slavery is more complete than was our bondage to the King. They do not want to know that centralization is not only the death-knell of liberty, but also of health and beauty, of art and science, all these being impossible in a clock-like, mechanical atmosphere.

When Goldman was writing in the 1900s, the world had just entered the industrial age, and the technology of today was but a dream of visionary writers. The slavery she envisioned was not of robot masters ruling over humanity, but of humans compelled to work long hours in factories, serving the machines in order to serve the human owners of those machines. That this is still applicable today needs no argument.

The labor movements of the 1900s helped reduce the extent of this servitude, at least in Western countries. As the rest of the world industrialized, the story of servitude to the machine played out over and over. While the point of factory machines was to automate work so that a few could do the work of many, it is only recently that “true” automation has taken place: having machines do the work instead of humans. For example, robots that assemble cars do what humans used to do. As another example, computers instead of human operators now handle phone calls.

In the eyes of utopians, this progress was supposed to free humans from tedious and dangerous work, giving them the freedom to engage in creative and rewarding labor. The reality is a dystopia. While automation has replaced humans in some tedious, low-paying and dangerous jobs, it has also replaced humans in what were once considered good jobs. Humans also continue to work tedious, low-paying and dangerous jobs because human labor is still cheaper or more effective than automation. For example, fast food chains do not use robots to prepare food because cheap human labor is readily available. The dream that automation would free humanity remains a dream. Machines have mostly pushed humans out of jobs into other jobs, sometimes ones more suited for machines. If human well-being were considered important, this would not be happening.

Humans still work jobs like those condemned by Goldman. But, thanks to technology, humans are even more closely supervised and regulated by machines. For example, there is software designed to monitor employee productivity. As another example, some businesses use workplace cameras to watch employees. Obviously enough, these practices can be dismissed as not being enslavement by machines, and defenders would say they are simply good human resource management, ensuring that human workers operate efficiently. At the command of other humans, of course.

One technology that looks like servitude to the machine is warehouse picking, such as that done by Amazon employees. Amazon and other companies have automated some of the picking process, making use of robots for various tasks. But, while a robot might bring shelves to human workers, the humans are the ones picking the products for shipping. Since humans tend to have poor memories and get bored with picking, the human pickers have themselves been brought under automation: they are told by computers what to do, and then they tell the computers what they have done. That is, the machines are the masters, and humans are doing their bidding.

It is easy enough to argue that this sort of thing is not enslavement by machines. First, the computers controlling the humans are operating at the behest of the owners of Amazon who are (presumably) humans. Second, humans are paid for their labors and are not owned by the machines (or Amazon). As such, any enslavement of humans by machines is metaphorical.

Interestingly, the best case for human enslavement by machines can be made outside of the workplace. Many humans are now ruled by their smartphones and tablets, responding to every beep and buzz of their masters, ignoring those around them to attend to the demands of the device, and living lives revolving around the machine.

This can be easily dismissed as a metaphor. While humans are said to be addicted to their devices, they do not meet the definition of “slaves.” They willingly “obey” their devices and could turn them off. They are free to do as they want; they just do not want to disobey their devices. Humans are also not owned by their devices; rather, they own their devices. But it is reasonable to consider that humans are in a form of bondage: their devices have, by the design of other humans, seduced people into making them the focus of their attention and have thus become the masters.

 

This is the last of the virtual cheating series and the focus is on virtual people. The virtual aspect is easy enough to define; these are entities that exist entirely within the realm of computer memory and do not exist as physical beings in that they lack bodies of the traditional sort. They are, of course, physical beings in the broad sense, existing as data within physical memory systems.

An example of such a virtual being is a non-player character (NPC) in a video game. These coded entities range from enemies that fight the player to characters that engage in the illusion of conversation. As it now stands, these NPCs are simple beings, though players can have very strong emotional responses to them and even (one-sided) relationships with them. BioWare and Larian Studios excel at creating NPCs that players get very involved with, and their games often feature elaborate relationship and romance systems.

While these coded entities are usually designed to look like and imitate the behavior of people, they are not people. They are, at best, the illusion of people. As such, while humans could become emotionally attached to these virtual entities (just as humans can become attached to objects), the idea of cheating with an NPC is on par with the idea of cheating with your phone.

As technology improves, virtual people will become more and more person-like. As with the robots discussed in the previous essay, if a virtual person were a person, then cheating would seem possible. Also, as with the discussion of robots, there could be degrees of virtual personhood, thus allowing for degrees of cheating. Since virtual people are essentially robots in the virtual world, the discussion of robots in that essay applies analogously to the virtual robots of the virtual world. There is, however, one obvious break in the analogy: unlike robots, virtual people lack physical bodies. This leads to the question of whether a human can virtually cheat with a virtual person or if cheating requires a physical sexual component that a virtual being cannot possess.

While, as discussed in a previous essay, there is a form of virtual sex that involves physical devices that stimulate the sexual organs, this is not “pure” virtual sex. After all, the user is using a VR headset to “look” at the partner, but the stimulation is all done mechanically. Pure virtual sex would require the sci-fi sort of virtual reality of cyberpunk: a person fully “jacked in” to the virtual reality so all the inputs and outputs are directly to and from the brain. The person would have a virtual body in the virtual reality that mediates their interaction with that world, rather than having crude devices stimulating their physical body.

Assuming the technology is good enough, a person could have virtual sex with a virtual person (or another person who is also jacked into the virtual world). On the one hand, this would obviously not be sex in the usual sense as those involved would have no physical contact. This would avoid many of the usual harms of traditional cheating as STDs and pregnancies would be impossible (although sexual malware and virtual babies might be possible). This does leave open the door for concerns about emotional infidelity.

If the virtual experience is indistinguishable from the experience of physical sex, then it could be argued that the lack of physical contact is irrelevant. At this point, the classic problem of the external world becomes relevant. The gist of this problem is that because I cannot get outside of my experiences to “see” that they are really being caused by the external things that seem to be causing them, I can never know if there is really an external world. For all I know, I am dreaming right now or already in a virtual world. While this is usually seen as the nightmare scenario in epistemology, George Berkeley embraced this view in his idealism. He argued that there is no metaphysical matter and that “to be is to be perceived.” On his view, all that exists are minds and within them are ideas. Crudely put, Berkeley’s reality is virtual and God is the server. Berkeley stresses that he does not, for example, deny that apples or rocks exist. They do exist and can be experienced; they are just not made of metaphysical matter but are composed of ideas.

So, if cheating is defined in a way that requires physical sexual activity, knowing whether a person is cheating or not requires solving the problem of the external world. There is the philosophical possibility that there never has been any cheating since there might be no physical world. If sexual activity is instead defined in terms of behavior and sensations without references to a need for physical systems, then virtual cheating would be possible, assuming the technology can reach the required level.  

While this discussion of virtual cheating is currently theoretical, it does provide an interesting way to explore what it is about cheating (if anything) that is wrong. As noted at the start of the series, many of the main concerns about cheating are physical concerns about STDs and pregnancy. These concerns are avoided by virtual cheating. What remains are the emotions of those involved and the agreements between them. As a practical matter, the future is likely to see people working out the specifics of their relationships in terms of what sorts of virtual and robotic activities are allowed and which are forbidden. While people can simply agree to anything, there is the deeper question of the rational foundation of relationship boundaries. For example, there is the question of whether it is reasonable to consider interaction with a sexbot cheating or merely elaborate masturbation. A brave new world awaits, and perhaps what happens in VR will stay in VR.

 

While science fiction has speculated about robot-human sex and romance, current technology offers little more than sex dolls. In terms of the physical aspects of sexual activity, the development of more “active” sexbots is an engineering problem; getting the machinery to perform properly and in ways that are safe for the user (or unsafe, if that is what one wants). Regarding cheating, while a suitably advanced sexbot could actively engage in sexual activity with a human, the sexbot would not be a person and hence the standard definition of cheating (as discussed in the previous essays) would not be met. This is because sexual activity with such a sexbot would be analogous to using any other sex toy (such as a simple “blow up doll” or vibrator). Since a person cannot cheat with an object, such activity would not be cheating. Some people might take issue with their partner sexing it up with a sexbot and forbid such activity. While a person who broke such an agreement about robot sex would be acting wrongly, they would not be cheating. Unless, of course, the sexbot was enough like a person for cheating to occur.

There are already efforts to make sexbots more like people in terms of their “mental” functions, for example by using AI to create the illusion of conversation. As such efforts progress and sexbots act more and more like people, the philosophical question of whether they really are people will become increasingly important to address. While the main moral concerns would be about the ethics of how sexbots are treated, there is also the matter of cheating.

If a sexbot were a person, then it would be possible to cheat with them, just as one could cheat with an organic person. The fact that a sexbot might be purely mechanical would not be relevant to the ethics of the cheating; what would matter is that a person was engaging in sexual activity with another person when their committed relationship forbids such behavior.

It could be objected that the mechanical nature of the sexbot would matter because sex requires organic parts of the right sort and thus a human cannot really have sex with a sexbot, no matter how the parts of the robot are shaped.

One counter to this is to use a functional argument. To draw an analogy to the philosophy of mind known as functionalism, it could be argued that the composition of the relevant parts does not matter; what matters is their functional role. As such, a human could have sex with a sexbot that had parts that functioned in the right way.

Another counter is to argue that the composition of the parts does not matter, rather it is the sexual activity with a person that matters. To use an analogy, a human could cheat on another human even if their only sexual contact with the other human involved sex toys. In this case, what matters is that the activity is sexual and involves people, not that objects rather than body parts are used. As such, sex with a sexbot person could be cheating if the human was breaking their commitment.

While knowing whether a sexbot is a person would (mostly) settle the cheating issue, there remains the epistemic problem of other minds. In this case, the problem is determining whether a sexbot has a mind that qualifies them as a person. There can, of course, be varying degrees of confidence in the determination and there could also be degrees of personness. Or, rather, degrees of how person-like a sexbot might be.

Thanks to Descartes and Turing, there is a language test for having a mind. If a sexbot can engage in conversation that is indistinguishable from conversation with a human, then it would be reasonable to regard the sexbot as a person. That said, there might be good reasons for having a more extensive testing system for personhood which might include testing for emotions and self-awareness. But, from a practical standpoint, if a sexbot can engage in a level of behavior that would qualify them for person status if they were a human capable of that behavior, then it would be just as reasonable to accept the sexbot as a person. To do otherwise would seem to be mere prejudice. As such, a human person could cheat with a sexbot that could pass this test. At least it would be cheating as far as we knew.

Since it will be a long time (if ever) before a sexbot person is constructed, what is of immediate concern are sexbots that are person-like. That is, they do not meet the standards that would qualify a human as a person, yet have behavior that is sophisticated enough that they seem to be more than objects. One might consider an analogy here to animals: they do not qualify as human-level people, but their behavior does qualify them for a moral status above that of objects (at least for most moral philosophers and all decent people). In this case, the question about cheating becomes a question of whether the sexbot is person-like enough to enable cheating to take place.

One approach is to consider the matter from the perspective of the human. If the human engaged in sexual activity with the sexbot regards them as being person-like enough, then the activity can be seen as cheating because they would believe they are cheating.  An objection to this is that it does not matter what the human thinks about the sexbot, what matters is its actual status. After all, if a human regards a human they are cheating with as an object, this does not mean they are not cheating. Likewise, if a human feels like they are cheating, it does not mean they really are.

This can be countered by arguing that how the human feels does matter. After all, if the human thinks they are cheating and they are engaging in the behavior, they are still acting wrongly. To use an analogy, if a person thinks they are stealing something and takes it anyway, they  have acted wrongly even if it turns out that they were not stealing. The obvious objection to this line of reasoning is that while a person who thinks they are stealing did act wrongly by engaging in what they thought was theft, they did not actually commit a theft. Likewise, a person who thinks they are engaging in cheating, but are not, would be acting wrongly in that they are doing something they think is wrong, but not cheating.

Another approach is to consider the matter objectively so that the degree of cheating would be proportional to the degree that the sexbot is person-like. On this view, cheating with a person-like sexbot would not be as bad as cheating with a full person. The obvious objection is that one is either cheating or not; there are no degrees of cheating. The obvious counter is to try to appeal to the intuition that there could be degrees of cheating in this manner. To use an analogy, just as there can be degrees of cheating in terms of the sexual activity engaged in, there can also be degrees of cheating in terms of how person-like the sexbot is.

While person-like sexbots are still the stuff of science fiction, I suspect the future will see some interesting divorce cases in which this matter is debated in court.

 

As discussed in the previous essays, classic cheating involves sexual activity with a person while one is in a committed relationship that is supposed to exclude such activity. Visual VR can allow interaction with another person, but while such activity might have sexual content (such as nakedness) it would not be sexual activity in the sense that requires physical contact. Such behavior, as argued in the previous essay, might constitute a form of emotional infidelity but not physical infidelity.

One of the iron laws of technology is that any technology that can be used for sex will be used for sex. Virtual reality (VR), in its various forms, is no exception. For the most part, VR is limited to sight and sound. That is, virtual reality is mostly just virtual visual reality (VVR). However, researchers are hard at work developing tactile devices for the erogenous zones, thus allowing people to interact sexually across the internet. This is the start of what could be called “robust” VR that involves more than just sight and sound. This sort of technology might make virtual cheating suitably analogous to real cheating.

Most current research is focused on developing devices for men to use to have “virtual sex.” By the standards of traditional cheating, this sort of activity would not count as cheating. This is because the sexual interaction is not with another person, but with devices. The obvious analogy here is to less-sophisticated sex toys. If, for example, using a vibrator or blow-up doll does not count as cheating because the device is not a person, then the same should apply to more complicated devices, such as VR sex suits that can be used with VR sex programs. There is also the question of whether such activity counts as sex. On the one hand, it is some sort of sexual activity. On the other hand, using such a device would not end a person’s tenure as a virgin.

It is worth considering that a user could develop an emotional relationship with their virtual sex “partner” and thus engage in a form of emotional infidelity. An objection is that this virtual sex partner is not a person and thus cheating would not be possible since one cannot cheat on a person with an object.

This can be countered by considering the classic problem of other minds. Because all we have access to is external behavior, one never knows if what seem to be people really are people; that is, they think and feel in the right ways (or at all). Since I do not know if anyone else has a mind as I do, I could have emotional attachments to entities that are not really people at all and never know. So, I could never know if I was cheating in the traditional sense if I had to know that I was interacting with another person. As might be suspected, this sort of epistemic excuse (“baby, I did not know she was a person because of the problem of other minds”) is unlikely to be accepted by anyone (even epistemologists). What would seem to matter is not knowing that the other entity is a person but having the right (or rather wrong) sort of emotional involvement. So, if a person could have feelings towards the virtual sexual partner that they “interact with”, then this sort of behavior could count as virtual cheating because of the one-way emotions.

There are also devices that allow people to interact sexually across the internet; with each partner having a device that communicates with their partner’s corresponding device. Put roughly, this is remote control sex. This sort of activity does avoid many of the possible harms of traditional cheating: there is no risk of pregnancy nor risk of STDs (assuming the equipment is clean). While these considerations do impact utilitarian calculations, the question remains as to whether this would count as cheating or not.

On the one hand, the argument could be made that this is not direct sexual contact as each person is only directly “engaged” with their device. To use an analogy, imagine that someone has (unknown to you) connected your computer to a “stimulation device” so that every time you use your mouse or keyboard, someone is “stimulated.” In such cases, it would be odd to say that you were having sex with that person. As such, this sort of thing would not be cheating.

On the other hand, there is the matter of intent. In the case of the mouse example, the user has no idea what they are doing, and it is that, rather than the remote-controlled nature of the activity, that matters. In the case of remote-control interaction, the users are intentionally engaging in the activity and know what they are doing. The fact that it is happening via the internet does not matter; the moral status would be the same if they were in the same room, using the devices “manually” on each other. So, while there is no actual bodily contact, the activity is sexual and controlled by those involved. As such, it would morally count as cheating. There can, of course, be a debate about the degrees of cheating. One might argue that cheating using sex toys is not as bad as cheating using body parts. I will, however, leave that to others to discuss.

In the next essay I will discuss cheating in the context of sex with robots and person-like VR beings.

 

In something of a flashback to 2001, Microsoft is once again the target of an antitrust lawsuit. Google and other tech companies are facing similar challenges as governments have found the political will to go up against big tech, at least for now. While there are various legal arguments as to why tech companies should be split up, there are also good policy reasons for this. For this essay, I will focus on the sensible warning to not put all your eggs in one basket and argue that this is also rational for digital “eggs.” As might be expected, the 2024 CrowdStrike disaster will serve as the main example of why the one basket approach is a bad idea.

On July 19, 2024, CrowdStrike released a flawed update to its Falcon Sensor software, causing about 8.5 million Windows systems to crash and become unable to properly restart. As of this writing, this was the largest IT outage in history. As businesses ranging from airlines to gas stations rely on these Windows systems, the impact was devastating, and it is estimated that the financial damage was at least $10 billion, done over the course of only a few hours. In addition to becoming a textbook case in how not to test and roll out security software, it also provides a lesson in the danger of putting so many digital eggs in one basket, especially given the inclination companies often have to cut corners and operate badly. The repeated, self-inflicted failures at the once-respected Boeing provide another excellent example of how these sorts of easily avoidable failures occur.

While the poor handling of the update is the main cause of the disaster, the fact that CrowdStrike was the security software on so many Windows systems enabled it to be a worldwide disaster. While Microsoft was not to blame, the market dominance of Windows was also a factor since Macs and Linux systems were not impacted by the failure of CrowdStrike. The case of CrowdStrike was, of course, unintentional but there are also intentional efforts to cause harm.

Like many people, I recently received a letter from Change Healthcare informing me of a data breach that occurred back in February. While they did offer me free monitoring, my data (and probably yours) is now out in the wild, presumably being sold and used by criminals. Such data breaches are common for a variety of reasons. In terms of why health care data is targeted, the short version is that such data is very valuable and stealing it is relatively easy. The larger a company gets, the more desirable it is as a target. This is because breaching a large company is often not much more challenging than breaching a small company, but the potential payoff is greater. Unfortunately, these companies are not like monsters in video games: the challenge of getting the treasure is not proportionate to the value of the loot.

This points to the obvious danger presented by data and software companies gaining dominance in markets: when they drop the basket, the eggs break. To be fair to these companies, they are playing the game of capitalism and trying to win it by maximizing their profits by grabbing as much of the market as they can. As noted above, some governments are pushing back, but there is the question of whether this will continue in the United States with the change of administration. While the devil is in the details, this danger does provide an excellent justification for keeping market dominance in check, since this dominance entails that the eggs will be stuffed into one basket, and companies have shown themselves to be consistently poor stewards. Thus, good policy should be aimed at restricting the size of companies, not to “punish their success” but to mitigate the damage done to other companies and the public by their inevitable failures.

 

Students and employers often complain that college does not prepare students for the real world of jobs, and this complaint has some merit. But what is the real world of jobs like for most workers? Professor David Graeber got considerable media attention when he published his book Bullshit Jobs: A Theory. He claims that millions of people are working jobs they know are meaningless and unnecessary. Researcher Simon Walo decided to test Graeber’s theory and found that his investigation supported Graeber’s view. While Graeber’s view can be debated, it is reasonable to believe that some jobs are BS all the time and all jobs are BS some of the time. Thus, if educators are to prepare students for working in the real world, they must prepare them for the BS of the workplace. AI can prove useful here.

In an optimistic sci-fi view of the future, AI exists to relieve humans of the dreadful four Ds of bad jobs: the Dangerous, the Degrading, the Dirty, and the Dull. In a bright future, general AI would assist, but not replace, humans in creative and scientific endeavors. In dystopian sci-fi views of the AI future, AI enslaves or exterminates humanity. In dystopia lite, a few humans use AI to make life worse for many humans, such as by replacing humans with AI in good and rewarding jobs.  Much of the effort in AI development seems aimed at making this a reality.

As an example, it is feared that AI will put writers and artists out of work, so when the Hollywood writers went on strike, they wanted protection from being replaced by AI. They succeeded in this goal, but there remains a reasonable question about how great the threat of AI is in terms of its being able to replace humans in jobs humans want to do. Fortunately for humans doing creative and meaningful work, AI is not very good at these tasks. As Arvind Narayanan and Sayash Kapoor have argued, AI of this sort seems to be most useful at doing useless things. But this can be useful for workers and educators should train students to use AI to do these useless things. This might seem a bit crazy but makes perfect sense in our economic reality.

Some jobs are useless, and all jobs have useless tasks. Although his view can be challenged, Graeber came up with three categories of useless jobs. His “flunkies” category consists of people paid to make the rich and important look more rich and more important.  This can be expanded to include all decorative minions. “Goons” are people filling positions existing only because a competitor company created similar jobs. Finally, there are the  “box tickers”, which can be refined to cover jobs workers see as useless but also produce work whose absence would have no meaningful effect on the world.

It must be noted that what is perceived as useless is a matter of values and will vary between persons and in different contexts. To use a silly example, imagine the Florida state legislature mandated that all state universities send in a monthly report in the form of a haiku. Each month, someone will need to create and email the haiku. This task seems useless. But imagine that if a school fails to comply, they lose $1 million in funding. This makes the task useful for the school as a means of protecting their funding. Fortunately, AI can easily complete this useless useful task.
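To make the point concrete, here is a minimal sketch of how such a mandated haiku report might be automated. The `call_llm` helper, the prompt wording, and the school name are all hypothetical placeholders standing in for whatever AI service a school actually has access to; the point is only that a scripted AI call can dispose of the task with almost no human effort.

```python
# Hypothetical sketch: automating a "useless but required" monthly haiku report.
# call_llm() is a placeholder for whatever AI service is actually available;
# the prompt, school name, and delivery method are illustrative assumptions.

import datetime


def call_llm(prompt: str) -> str:
    """Placeholder for a real AI call. Replace the body with a request to
    whatever chatbot or API the institution has access to."""
    return "Reports flow upstream\nno one reads the dutiful lines\nfunding is preserved"


def monthly_haiku_report(school: str) -> str:
    """Compose the mandated haiku summarizing the month for the legislature."""
    month = datetime.date.today().strftime("%B %Y")
    prompt = (
        f"Write a haiku (5-7-5 syllables) that serves as the {month} "
        f"status report for {school}."
    )
    return call_llm(prompt)


if __name__ == "__main__":
    haiku = monthly_haiku_report("Example State University")
    # In practice this would be emailed to the state; printing stands in for that.
    print(haiku)
```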

As a serious example, suppose a worker must write reports for management based on bullet points given in presentations. Management, of course, never reads the reports; they are thus useless but required by company policy. While the seemingly rational solution would be to eliminate the reports, that is not how bureaucracies usually operate in the “real world.” Fortunately, AI can make the worker’s task easier: they can use AI to transform the bullet points into a report and use the saved time for more meaningful tasks (or viewing social media). Management can also use AI to summarize the report back into bullet points. But what should educators do with AI in their classrooms in the context of useless tasks and jobs?

While this will need to vary from class to class, relevant educators should consider a general overview of jobs and task categories in terms of usefulness and the ability of AI to do these jobs and tasks.  Faculty could then identify the likely useless jobs and useless tasks their students will probably do in the real world. They can then consider how these tasks can be done using AI. This will allow them to create lessons and assignments to give students the skills to use AI to complete useless tasks quickly and with minimal effort. This can allow workers to spend more time on useful work, assuming their jobs have any such tasks.

In closing, my focus has been on using AI for useless tasks. Teaching students to use AI for useful tasks is another subject entirely and while not covered here is certainly worthy of consideration. And here is an AI generated haiku:

 

Eighty percent rise

FAMU students excel

In their learning’s light

 

The pager attack attributed to Israel served to spotlight vulnerabilities in the supply chain. While such an attack was always possible, until it occurred most security concerns about communication devices were about protecting them from being compromised or “hacked.”

While the story of three million “hacked” toothbrushes turned out to be a cyber myth, the vulnerability of connected devices remains  real and presents an increasing threat as more connected devices are put into use. As most people are not security savvy, these devices can be easy to compromise either through their own vulnerabilities or user vulnerabilities.

There has also been longstanding concern about security vulnerabilities and dangers being built right into technology. For example, there are grounds to worry that backdoors could be built into products, allowing easy access to these devices. For the most part, the focus of concern has been on governments directing the inclusion of such backdoors. But the Sony BMG copy protection rootkit scandal shows that corporations can and have introduced vulnerabilities on their own.

While a compromised connected or communication device can cause significant harm, until recently there has been little threat of physical damage or death. One exception was, of course, the famous case of Stuxnet, in which a virus developed by the United States and Israel destroyed 1,000 centrifuges critical to Iran’s nuclear program. There was also a foreshadowing incident in which Israel (allegedly) killed the bombmaker Yahya Ayyash with an exploding phone. But the pager (and walkie-talkie) attack resulted in injuries and death on a large scale. This proved the viability of the strategy, thus providing an example and inspiration to others. While conducting a similar attack would require extensive resources, the supply chain is rife with vulnerabilities that would allow it. Addressing these vulnerabilities will prove difficult, if not impossible, because of the influence of those who have a vested interest in preserving them. But policy could be implemented that would increase security and safety in the supply chain. But what are these vulnerabilities?

One vulnerability is that a shell corporation can be quickly and easily created. Multiple shell corporations can also be created in different locations and interlocked, creating a very effective way of hiding the identity of the owner. Shell companies are often used by the very rich to hide their money, usually to avoid paying taxes as made famous by the Panama Papers. Shell companies can also be used for other criminal enterprises, such as money laundering. Those who use such shell corporations are often wealthy and influential, thus they have the resources to resist or prevent efforts to address this vulnerability.

The ease with which such shell companies can be created is a serious vulnerability, since they can be used to conceal who really owns a corporation. A customer dealing with a shell company is likely to have no idea who they are really doing business with. They might, for example, think they are doing business with a corporation in their own country, but it might turn out that it is controlled by another country’s intelligence service or a terrorist organization.

While a customer might decide to do business with a credible and known corporation to avoid the danger of shell corporations, they can still face the vulnerabilities created by the nature of the supply chain. Companies often contract with other businesses to manufacture parts of their products, and the contractors might subcontract in turn. It is also common for companies to license production of their products, so while a customer might assume they are buying a product made by a company, they might be buying one manufactured under license by a different company. Which might be owned by a shell company. In the case of the pagers, the company that owns the brand of the devices denied that it manufactured them. While this is (fortunately) but one example, it does provide an illustration of how these vulnerabilities can be exploited. Addressing them would require that corporations have robust oversight and control of their supply chains. This would include the parts of the supply chain that involve software and services as well. After all, if another company is supplying code or connectivity for a product, those are vulnerabilities. Unfortunately, corporations often have incentives to avoid such robust oversight and control.

One obvious incentive is financial. Corporations can save money by contracting out work to places with lower wages, less concern about human rights, and fewer regulations. And robust oversight and control would come with a cost of its own, not even considering what it would cost a company if such oversight and control prevented it from engaging in cheaper contracts.

Another incentive is that contracting out work without robust oversight can provide plausible deniability. For example, Nike has faced issues with using sweatshops to manufacture its products, but this sort of thing can be blamed on the contractors  and ignorance can be claimed. As another example, Apple has been accused of having a contractor who used forced labor and has lobbied against a bill aimed at stopping such forced labor. While these are examples of companies using foreign contractors, problems also arise within the United States.

One domestic example is a contractor who employed children as young as 13 to clean meat packing plants. As another example, subcontractors were accused of hiring undocumented migrants on a Miami-Dade school construction project. As children and undocumented migrants can be paid much less than adult American workers, there is a strong financial incentive to hire contractors that will employ them while also providing the extra service of plausible deniability. When some illegality or public relations nightmare arises, the company can rightly say that it was not them, it was a contractor. They can then claim they have learned and will do better in the future. But they have little incentive to do better.

But a failure to exercise robust oversight and control entails that there will be serious vulnerabilities open to exploitation. The blind eye that willingly misses human rights violations and the illegal employment of children will also miss a contractor who is a front for a government or terrorist organization and is putting explosives or worse in their products.

While these vulnerabilities are easy to identify, there are powerful incentives to preserve and protect them. This is not primarily because they can be exploited in such attacks, but for financial reasons and for plausible deniability. While it will be up to governments to mandate better security, this will face significant and powerful opposition. But this could be overcome if the political will exists.

 

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)

 

Some will remember that driverless cars were going to be the next big thing. Tech companies rushed to pour cash into this technology and the media covered the stories, including the injuries and deaths involving the technology. But, for a while, we were promised a future in which our cars would whisk us around, then drive away to await the next trip. Fully autonomous vehicles, it seemed, were always just a few years away. But it did seem like a good idea at the time, and proponents of the tech also claimed to be motivated by a desire to save lives. From 2000 to 2015, motor vehicle deaths per year ranged from a high of 43,005 in 2005 to a low of 32,675 in 2014. In 2015 there were 35,092 motor vehicle deaths, and recently the number went back up to around 40,000. Given the high death toll, there is clearly a problem that needs to be solved.

While predictions of the imminent arrival of autonomous vehicles proved overly optimistic, the claim that they would reduce motor vehicle deaths had some plausibility. Autonomous vehicles do not suffer from road rage, exhaustion, intoxication, poor judgment, distraction and other conditions that contribute to the death toll. Motor vehicle deaths would not be eliminated even if all vehicles were autonomous, but the promised reduction in deaths presented a moral and practical reason to deploy such vehicles. In the face of various challenges and a lack of success, the tech companies seem to have largely moved on from the old toy to the new toy, which is AI. But this might not be a bad thing if driverless cars were aimed at solving the wrong problems and we instead solve the right ones. Discussing this requires going back to a bit of automotive history.

As the number of cars increased in the United States, so did the number of deaths, which was hardly surprising. A contributing factor was the abysmal safety of American cars.  This problem led Ralph Nader to write his classic work, Unsafe at Any Speed. Thanks to Nader and others, the American automobile became much safer and vehicle fatalities decreased. While making cars safer was a good thing, this approach was fundamentally flawed.

Imagine a strange world in which people insist on constantly swinging hammers as they go about their day.  As would be suspected, the hammer swinging would often result in injuries and property damage. Confronted by these harms, solutions are proposed and implemented. People wear ever better helmets and body armor to protect them from wild swings and hammers that slip from peoples’ grasp. Hammers are also regularly redesigned so that they inflict less damage when hitting people and objects.  The Google of that world and other companies start working on autonomous swinging hammers that will be much better than humans at avoiding hitting other people and things. While all these safety improvements would be better than the original situation of unprotected people swinging dangerous hammers around, this approach is fundamentally flawed. After all, if people stopped swinging hammers around, then the problem would be solved.

An easy and obvious reply to my analogy is that using motor vehicles, unlike random hammer swinging, is important. A large part of the American economy is built around the motor vehicle. This includes obvious things like vehicle sales, vehicle maintenance, gasoline sales, road maintenance and so on. It also includes less obvious aspects of the economy that involve the motor vehicle, such as how they contribute to the success of stores like Walmart. The economic value of the motor vehicle, it can be argued, provides a justification for accepting the thousands of deaths per year. While it is certainly desirable to reduce these deaths, getting rid of motor vehicles is not a viable economic option. Thus, autonomous vehicles would be a good partial solution to the death problem. Or are they?

One problem is that driverless vehicles are trying to solve the death problem within a system created around human drivers and their wants. This system of lights, signs, turn lanes, crosswalks and such is extremely complicated and presents difficult engineering and programming problems. It would seem to have made more sense to use the resources that were poured into autonomous vehicles to develop a better and safer transportation system that does not center around a bad idea: the individual motor vehicle operating within a complicated system. On this view, autonomous vehicles are solving an unnecessary problem: they are merely better hammers.

My reasoning can be countered in a couple of ways. One is to repeat the economic argument: autonomous vehicles preserve the individual motor vehicle that is economically critical while being likely to reduce the death toll vehicles impose. A second approach is to argue that the cost of creating a new transportation system would be far more than the cost of developing autonomous vehicles that can operate within the existing system. This assumes, of course, that the cash dumped on this technology will eventually pay off.

A third approach is to argue that autonomous vehicles could be a step towards a new transportation system. People often need a gradual adjustment to major changes and autonomous vehicles would allow a gradual transition from distracted human drivers to autonomous vehicles operating with the distracted humans to a transportation infrastructure rebuilt entirely around autonomous vehicles (perhaps with a completely distinct system for walkers, bikers and runners). Going back to the hammer analogy, the self-swinging hammer would reduce hammer injuries and could allow a transition to be made away from hammer swinging altogether.

While this has some appeal, it still makes more sense to stop swinging hammers. If the goal is to reduce traffic deaths and injuries, then investing in better public transportation, safer streets, and a move away from car-centric cities would have been the rational choice. For the most part it seems that tech companies and investors have moved away from solving the transportation problem and are now focused on AI. While the driverless car was a very narrow type of AI focused on driving vehicles and supposedly aimed at increasing safety and convenience, the new AI is broader (they are trying to jam it into almost everything that has a chip) and is supposed to be aimed at solving a vast range of problems. Given the apparent failure of driverless cars, we should consider that there will be a similar outcome with this broader AI. It is also reasonable to expect that once the current AI bubble bursts, the next bubble will float over the horizon. This is not to deny that some of what people call AI is useful, but we need to keep in mind that tech companies often focus on solving unnecessary problems rather than removing those problems.