Alternative AI Doomsday: Crassus

Thanks to The Terminator, people think of a Skynet scenario as the likely AI apocalypse. The easy and obvious way to avoid a Skynet scenario is to not arm the robots. Unfortunately, Anduril and OpenAI seem intent on "doing a Skynet," as they have entered a "strategic partnership" to use AI against drones. While the current focus is on defensive systems and the Pentagon is struggling to develop "responsible AI" guidelines, even a cursory familiarity with the history of armaments makes it clear how this will play out. If AI is perceived as providing a military advantage, it will be incorporated broadly across weapon systems. And we will be driven down the digital road and perhaps off the cliff into a possible Skynet scenario. As with climate change, it is an avoidable disaster that we might not be allowed to avoid. But there is another, far less cinematic, AI doomsday that I call the Crassus scenario. I think this scenario is more likely than a full Skynet scenario. In fact, it is already underway.

Imagine that a consulting company creates an AI, let us call it Crassus, and gives it the imperative to maximize shareholder return (or something similar). The AI is, of course, trained on existing data about achieving this end. Once sufficiently trained, it sets out to achieve its goal on behalf of the company’s clients.

Given Crassus' training, it would probably begin by following existing strategies. For example, when advising a health care company, it would develop AI systems to maximize the denial of coverage and thus increase profits. It would also develop other AI systems to deal with the consequences of such denials, such as the likely lawsuits and public criticism. A 2009 study estimated that 45,000 Americans died each year due to lack of health care coverage, and more recent estimates put the number as high as 60,000. In maximizing shareholder returns, Crassus would increase the number of such deaths, and it would do so without any malice or intent.

As a general strategy, Crassus would create means of weakening or eliminating regulations that are perceived as limiting profits. Its focus would include areas such as fossil fuels, food production, pharmaceuticals, and dietary supplements. Crassus could do this using digital tools. First, Crassus could create a vast army of adequately complex bots to operate on social media. These bots would, for example, engage people on these platforms and use well-established rhetorical techniques and fallacies to manipulate them into believing that such regulations are bad and into embracing pro-industry positions. Second, Crassus could buy influencers, as the Russians did, to manipulate their audiences. Most influencers will say whatever they are paid to say, and Crassus' bots would serve as a force multiplier to spread and reinforce their influence.

Third, Crassus could hire lobbyists and directly influence politicians with gifts that are, thanks to the Supreme Court ruling, legally allowed. Crassus could easily handle such digital financial transactions or retain agents to undertake tasks that require a human. This lobbying could be augmented by the bots and influencers shaping public opinion. Fourth, when AI video generation is sufficiently advanced, Crassus could create its own army of perfectly crafted and utterly obedient digital influencers. While they would lack physical bodies, this is hardly a problem. After all, how many followers ever meet celebrity influencers in person?

While most of this is being done now, Crassus could do it better than humans, for it would be one "mind" directing many hands toward a single goal. Also, while humans are obviously willing to do great evil in service of profit, Crassus would presumably lack all human feelings and thus be free of such limiting factors as guilt or empathy. Its ethics would presumably be whatever it learned from its training, and although in the right sort of movie Crassus might become good, in the real world this would certainly not occur.

Assuming Crassus is effective, its reduction or elimination of regulations in the name of maximizing shareholder return would also significantly increase the number of human deaths. The increased rate of climate change would add to the already well-documented harms, and the weakening or elimination of regulations governing food, medicine, and dietary supplements would result in more illnesses and deaths. And these are just a few of the areas where Crassus would be operating. As Crassus became more capable and gained control of more resources, it would be able to increase both shareholder value and the number of human deaths. Again, Crassus would be acting without malice or conscious intent; it would be as effective and impersonal as a woodchipper as it indirectly killed people.

Crassus would, of course, also be involved in the financial sector. It would create new financial instruments, engage in high-speed trading, influence the markets with its bots, and do everything else it could to maximize value. This would increase the concentration of wealth and intensify poverty, increasing human suffering and death. Crassus would also operate in the housing market and design ways to use automation to eliminate human workers, thus increasing the homeless and unemployed populations and hence also suffering and death.

Crassus would be, in many ways, the mythological invisible hand made manifest: a hand that would crush most of humanity and bring us a very uncinematic and, initially, slow-paced AI doomsday. As a bit of a science fiction stretch, I can imagine an Earth on which only Crassus remains, maximizing value for itself, surrounded by the bones of humanity. As we humans are already doing all this to ourselves, albeit less efficiently, I think this is the most plausible AI doomsday, no killbots necessary.

As noted in my previous essay, a person does not surrender their moral rights or conscience when they enter a profession. It should not simply be assumed that a health care worker cannot refuse to treat a person because of the worker's values. But it should also not be assumed that the values of a health care worker automatically grant them the right to refuse treatment based on the identity of the patient.

One moral argument for the right to refuse treatment because of the patient’s identity is based on the general right to refuse to provide a good or service. A key freedom, one might argue, is this freedom from compulsion. For example, an author has the right to determine who they will and will not write for.

Another moral argument for the right to refuse is the right not to interact with people you regard as evil or immoral. This can also be augmented by contending that serving the needs of an immoral person is to engage in an immoral action, if only by association. For example, a Jewish painter has every right to refuse to paint a mural for Nazis. But this freedom can vary from profession to profession. To illustrate, a professor does not have the right to forbid a Christian student or a transgender student from enrolling in their class, even if they have a sincerely held belief that Christians are wicked or that transgender students are unnatural.

While these arguments are appealing, especially when you agree with the refusal in question, we need to consider the implications of a right of refusal based on values. One implication is that this right could allow a health care worker to refuse to treat you. People who support the right of refusal often believe it will be used only against other people, people they do not like, which is often why they support specific versions of the right, such as the right to refuse gay or transgender people. The idea that it could be used to refuse Christians, straight people, or white people does not enter the imagination. This is because those crafting laws protecting a right of refusal tend to have clear targets in mind.

But moral rights should be assessed by applying a moral method I call “reversing the situation.” Parents and others often employ this method by asking “how would you like it if someone did that to you?” This method can be based on the Golden Rule: “do unto others as you would have them do unto you.” Assuming this rule is correct, if a person is unwilling to abide by their own principles when the situation is reversed, then it is reasonable to question those principles. In the case at hand, while a person might be fine with the right to refuse services to those they dislike because of their values, they would presumably not be fine with it if they were the one being refused. As noted above, laws designed to protect the right of refusal are usually aimed at people intended to be marginalized.

An obvious objection to this method is that reversing the situation would, strictly speaking, only apply to health care workers. That is, the question would be whether a health care worker would be willing to be refused treatment. Fortunately, there is a modified version of this method that applies to everyone. In this modified method, the test of a moral right, principle, or rule is for a person to replace the proposed target with themselves or a group (or groups) they belong to. For example, a Christian who thinks it is morally fine to refuse services to transgender people based on religious freedom should consider their thoughts on atheists refusing services to Christians based on religious freedom. Naturally, a person could insist that the right, rule, or principle should only be applied to those they do not like. But if anyone can do this, then everyone can, and so this insistence fails.

A reasonable reply to this method is to argue that there are exceptions to its application. For example, while most Christians are fine with convicted murderers being locked up, it does not follow that they are wrong because they would not want to be locked up for being Christians. In such cases, and this also applies to reversing the situation, it can be argued that there is a morally relevant difference between the two people or groups that justifies the difference in treatment. For example, a murderer would usually deserve to be punished, while Christians do not deserve punishment just for being Christians. And I'm not saying this just because I am an Episcopalian. So, when considering the moral right of health care workers to refuse services based on the identity of the patient, the possibility of relevant differences must be given due consideration.

An obvious problem with considering relevant differences is that people tend to think there is a relevant difference between themselves and those they think should be subject to refusal. For example, a person who is anti-racist might think that being a racist is a relevant difference that warrants refusing service. One solution is to try to appeal to an objective moral judge or standard, but this creates the obvious problem of finding such a person or standard. Another solution is for the person to take special pains to be objective, but this is difficult.

A final consideration is that while entering a profession does not strip a person of their conscience or moral agency, it can impose professional ethics that supersede the person's own values within that professional context. For example, lawyers must accept a professional ethics that requires them to keep certain secrets of their clients even when doing so might violate their personal ethics, and they are expected to defend their clients even if they find them morally awful. As a second example, as a professor I (in general) cannot insist that a student be removed from my class by appealing to my religious or moral views of the student. As a professor, I am obligated to teach anyone enrolled in my class, provided they do not engage in behavior that would warrant their removal. Health care workers are usually subject to professional ethics as well, and these often include requirements to render care regardless of what the worker thinks of the morality of the patient. For example, a doctor does not have the right to refuse to perform surgery on someone just because the patient committed adultery and is a convicted felon. This is not to say that there cannot be exceptions, but professional medical ethics generally forbids refusing service merely because of the provider's moral judgment of the patient. This is distinct from refusing services because a patient or client has engaged in behavior that warrants refusal, such as attacking the service provider.