Recently, social media and tech companies acted to de-platform Trump and many other right-wing extremists. This included banning Trump from Twitter and other platforms, purging others from social media, and refusing to host services like Parler. This is a marked change from their willingness to monetize many bad actors. To set the stage for the discussion to follow, I need to categorize the bad actors.
The first group is composed of those who are motivated by ideology and use the internet to recruit, radicalize and redefine “reality.” They also tend to monetize their online activities (such as their YouTube videos). Members of this group are distinguished by having (or at least professing) a belief system. They are bad actors because they hold morally wrong belief systems, such as racism and fascism, and they attempt to corrupt others into their evil. They also engage in deception, lying about their real beliefs and making untrue claims about the world. Tech companies were quite happy to allow these people to monetize their online activities—since they got the lion’s share of the income. YouTube provides an excellent example of such video content: right-wing recruitment and “reality”-redefining videos fully monetized by Google.
The second group is composed of bad actors who are non-ideological. While they might profess beliefs in their content, they are motivated by profit or the joys of trolling. The same person or group might create content aimed at both the right and the left; they do not care who enables them to profit or whom they are trolling. Their content often replicates that of the ideologically motivated, and this can make it difficult to distinguish between the two groups. After all, a QAnon video might be created by a true Q believer or by someone who also creates leftist-themed videos to maximize their opportunities for profit. As with the ideological people, the tech companies were happy to make money off these people.
The third group consists of bad state actors; they are acting on behalf of foreign governments such as Russia, China, and North Korea. They can be indistinguishable from the other groups in terms of their content. For example, Russian agents target both the left and the right using the language, memes, and methods of the real ideological groups. The tech companies were, of course, happy to profit off these bad actors—even when they actively undermined the United States, and even though the tech companies knew this was going on.
The bad state actors were the first to be subject to the purge, although this took a long time. For example, Facebook eventually got around to exposing and combating Russian efforts to interfere in the election. While the tech companies professed surprise at what had happened, this surprise was obviously feigned. It seems likely that the threat of government action and the concerns of the public caused them to act, along with perhaps some worry that the damage they were enabling might eventually harm them as well.
While the tech companies would sometimes deal with individuals who violated their terms of service, they engaged in a massive purge only after Trump incited an insurrection against the United States to overthrow the election. Many of those purged were probably believers, but non-ideological profiteers and trolls were no doubt also among the expelled. After all, it is difficult to determine what a person believes and what they are merely professing in order to make a profit or troll people. But why did these companies wait so long?
One practical reason is money: Trump and his supporters were profitable for these companies, and they were reluctant to take any action that would impact the bottom line. While this can be criticized on moral grounds (they enabled great damage to be done in return for their profits), it makes good business sense—at least until it does not. The attack on the Capitol might have scared the tech companies and caused them to fear for their profits. While Nazi Germany showed that corporations can profit greatly under fascism, the tech companies probably prefer operating within a less extreme system of government, and corporations do very well under stable systems. Fascism tends to be very unstable.
Another practical reason is fear of government action. Republicans generally favor allowing corporations to do as they wish unless what they wish to do has a negative impact on Republican politicians. As such, while the Republicans were in charge, the tech companies tried to avoid provoking them into acting. After it was clear that Trump had lost and the Democrats were in power, the tech companies decided to engage in the purge. Presumably, their leaders believe that this purge will be viewed positively by the Democrats.
A final practical reason is public perception: while public opinion was already somewhat negative towards the tech companies, opinion seems to have shifted further as people learned more about what the tech companies had done (and not done). As such, while the purge resulted in criticism from the right, the companies probably believe that this move will improve their public perception among the general population.
From a utilitarian standpoint, this purge seems to be morally right. These bad actors had been doing harm to the United States for years, and this response seems to have resulted in some meaningful benefits. One striking example is that harmful misinformation dropped dramatically after Trump was banned from Twitter. While Trump was not the sole source of misinformation, he was apparently the god-emperor after all: the army of disinformation depended on him to provide the lies his followers love. Unfortunately, this ban came far too late, and we will never know the death toll caused by Trump’s wicked lies about the pandemic. While dividing up the moral accountability is challenging, some of the blame will fall on social media for amplifying his lies. To use the usual cliché, the tech companies are not heroes, and they did too little, too late. In upcoming essays, I will delve into the subject of freedom.