It might seem like woke madness to claim that medical devices can be biased. Are there white supremacist stethoscopes? Misogynistic MRI machines? Extremely racist X-ray machines? Obviously not; medical devices do not have beliefs or ideologies (yet). But they can still be biased in their accuracy and effectiveness.

One example of a biased device is the pulse oximeter, which measures blood oxygen using light. You have probably had one clipped on your finger during a visit to your doctor, or you might even own one. The bias is that the device is three times more likely to miss low oxygen levels in dark-skinned patients than in light-skinned patients. As would be expected, other devices also lose accuracy when used on people with darker skin. These are essentially sensor biases (or defects), and in most cases they can be addressed by improving the sensors or developing alternative devices. The problem, to exaggerate a bit, is that most medical technology is made by white men for white men. This is not to claim that such biased devices are all cases of intentional racism and misogyny. There is not, one assumes, a conspiracy against women and people of color in this area, but there is a bias problem. In addition to biased hardware, there is also biased software.
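To make the pulse oximeter case concrete, here is a minimal simulation of the mechanism (not actual oximeter physics; every number, including the two-point reading bias for group B, is purely illustrative). A device whose readings are systematically shifted upward for one group will miss more genuinely dangerous cases in that group, even though the alarm threshold is the same for everyone:

```python
import random

random.seed(0)

# Hypothetical true blood-oxygen (SpO2) values near the clinical danger zone.
true_values = [random.uniform(85, 95) for _ in range(10_000)]

HYPOXEMIA_CUTOFF = 88  # true SpO2 below this calls for intervention
ALARM_THRESHOLD = 90   # device alarms when its *reading* drops below this

def missed_rate(sensor_bias):
    """Fraction of truly hypoxemic patients the device fails to flag,
    given a systematic upward bias in the sensor's readings."""
    missed = cases = 0
    for true_spo2 in true_values:
        # Reading = true value + systematic bias for this group + noise
        reading = true_spo2 + sensor_bias + random.gauss(0, 1)
        if true_spo2 < HYPOXEMIA_CUTOFF:
            cases += 1
            if reading >= ALARM_THRESHOLD:
                missed += 1
    return missed / cases

# Group A: the group the device was calibrated on (no systematic bias).
# Group B: the device reads two points high (an illustrative number only).
print(f"missed hypoxemia, group A: {missed_rate(0.0):.1%}")
print(f"missed hypoxemia, group B: {missed_rate(2.0):.1%}")
```

In this toy model the missed-alarm rate for group B comes out markedly higher than for group A; the exact figures depend entirely on the made-up parameters, but the shape of the disparity is the point.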

Many medical devices use software, and it is often used in medical diagnosis. People are often inclined to think software is unbiased, perhaps because of science fiction tropes about objective and unfeeling machines. While it is true that our current software does not feel or think, bias can make its way into the code. For example, software used to analyze chest X-rays would work less well on women than on men if it were “trained” only on X-rays of men. The movie Prometheus has an excellent fictional example of a gender-biased auto-doc that lacks the software to treat female patients.

These software issues can be addressed by using diverse training data and by testing the software for bias with a diverse testing group. Having a more diverse set of people working on such technology would probably help as well.

Another factor is analogous to user error: user bias. People, unlike devices, do have biases, and these can and do impact how they use medical devices and their data. Bias in healthcare is well documented. While overt and conscious racism and sexism are rare, subtle racism and sexism are still problems. Addressing this widespread problem is more challenging than addressing biases in hardware and software. But if we want fair and unbiased healthcare, it is a problem that must be addressed.

As to why these biases should be addressed, this is a matter of ethics. To allow bias to harm patients goes against the fundamental purpose of medicine, which is to heal people. From a utilitarian standpoint, addressing this bias would be the right thing to do: it would create more positive value than negative value. This is because there would be more accurate medical data and better treatment of patients.

In terms of a counterargument, one could contend that addressing bias would increase costs and thus should not be done. There are several easy and obvious replies. One is that the cost increase would be, at worst, minor. For example, testing devices on a more diverse population would not seem meaningfully more expensive than not doing that. Another is that patients and society pay a far greater price in terms of illness and its effects than it would cost to address medical bias. For those focused on the bottom line, workers who are not properly treated can cost corporations some of their profit and ongoing health issues can cost taxpayer money.

One can, of course, advance racist and sexist arguments by professing outrage at “wokeness” attempting to “ruin” medicine by “ramming diversity down throats” or however Fox News would put it. Such “arguments” would be aimed at preserving the harm done to women and people of color, which is an evil thing to do. One might hope that these folks would be hard pressed to turn, for example, pulse oximeters into a battlefront of the culture war. But these are the same folks who professed to lose their minds over Mr. Potato Head and went on a bizarre rampage against a grad-school-level theory that has been around since the 1970s. They are also the same folks who went anti-vax during a pandemic, encouraging people to buy tickets in the death lottery. But the right thing to do is to choose life.

Big corporations possess incredible economic power, and many on the left are critical of how this power is used against people. For example, Amazon is infamous for putting such severe restraints on workers that they sometimes have to urinate in bottles. Thanks to Republicans and pro-corporate Democrats, laws and court rulings (such as Citizens United) enabled these corporations to translate economic power directly into political power. Many on the left criticize this as well, noting that the United States is an oligarchy rather than a democracy. This political power manifests itself in such things as anti-union laws, deregulation, and tax breaks. With the re-election of Trump, America has largely abandoned the pretense of being a democracy, and rulership has been openly handed to the billionaire class.

In the past, Republicans favored increasing the economic power of corporations and often assisted them in increasing their political power. This might have been partially motivated by their pro-business ideology, but it was certainly motivated by the contributions and benefits they received for advancing these interests.  As such, it seemed odd when Republicans started professing opposition to some corporations. Social media and tech companies seem to be the favorite targets, despite the efforts of their billionaire owners to buy influence with Trump.

While Republicans profess to favor deregulation and embrace the free market, they were very angry about social media and tech companies and claimed these companies were part of cancel culture. I do understand why they are so angry. For years, social media companies profited from extremism, including that of the American right, and it must have felt like a betrayal when they briefly took steps to counter extremism. While the narrative on the right is that these companies became woke or that out-of-control leftists took control, this was not the case. These companies acted based on pragmatism focused on profit. When Facebook changed its policy once again in response to Trump’s election, that was also pragmatism. Zuckerberg wants to make money and avoid prison.

Just a few years ago, extremism had damaged the brands of these companies, and they were under pressure to do something. There might have been some concern that their enabling of extremism had gone too far. While there were accusations that they had gone “woke,” their business practices revealed that they are not woke leftists. For example, Amazon is virulently anti-union, and Facebook is hardly a worker’s paradise. And now they are eager to appease Trump, although he has excellent reasons to ensure that they remain afraid of what he might do to them.

Republicans did have pragmatic reasons to be angry at these social media and tech companies for acting against extremism and enforcing their terms of service. First, a significant percentage of the right’s base consists of active extremists, and they are very useful to Republicans. Second, the Republican party relies heavily on “moderate” racism, sexism, xenophobia, and intolerance as political tools.

One could argue that such people are not racists; they are just very concerned that brown people are illegally entering the United States to commit crimes, steal jobs, exploit social services, vote illegally, spread disease, and replace white Americans. One problem with these views is that they are not supported by facts. Immigrants are less likely to commit crimes. While the impact of migration on the economy is complicated, the evidence is that there is a positive link between immigration and economic growth. The old racist trope of diseased migrants is untrue; in fact, migrants help fight disease. And, of course, the replacement hypothesis is an absurd racist hobgoblin.

Interestingly, Paul Waldman makes a solid case that Republicans want critics to call their policies “racist” and that this is part of their plan. As he notes, “…they know that their political success depends on motivating their base through a particular racial narrative…” If Waldman is right, then it can be argued that the tech companies were helping the Republicans at the same time they were hurting them. After all, while the tech companies’ “purge” of social media did hurt the right, it also handed them a victimization narrative that they exploited to activate their base. With Trump’s re-election, social media and tech companies have essentially surrendered to him, although one might argue that they are happy to go along with him.

In addition to racism, the right also uses disinformation and misinformation in their political battles. As noted in other essays on cancellation, the cancel culture narrative of the right was built largely on disinformation. At best it is based on hyperbole. The right’s response to the pandemic was also an exercise in disinformation and misinformation. And, of course, the biggest disinformation campaign was the big lie about the 2020 presidential election. This lie was the foundation for nationwide efforts to restrict voting access, most famously in Georgia. Since Republicans rely extensively on these tools, it makes sense that they were angry about social media companies “cancelling” their lies and that Trump set out to capture these companies after his re-election. Trump understands the power of propaganda and its critical role in his power.

While the Republicans did so for narrowly selfish reasons, they were right to be critical of the power of the social media and tech companies as these companies present real dangers. As I have argued elsewhere, these companies control most mediums of expression available to the masses. While they are not covered by the First Amendment, their power to limit free expression is concerning as they can effectively silence and amplify as they wish.

Leftists have long argued that this gives them too much power, and the right agreed—at least when it involved their very narrow and selfish interests. But the right wants social media to be a safe space for racism, sexism, xenophobia, misinformation, and disinformation. As such, while there is a very real problem with social media, the solution cannot be to simply let the far right do as they wish as they would simply spread hate and lies to advance their political goals. This is not to say that the left is composed of angels; harmful activity and lies of the left also need to be kept in check while allowing maximum freedom of expression. As always, there must be a balance between the freedom of expression and protecting people from harm.


In sci-fi people upload their minds to machines and, perhaps, gain a form of technological immortality. Because of the obvious analogy to the way computer memory works, it is appealing to take uploading the mind as uploading memories. In fiction, the author decides whether it is the same person or not, but philosophers need to argue this matter.

While the idea of mind uploads might seem a recent thing, philosophers have been considering this possibility for a long time. One excellent example is John Locke. On his view, a person is their consciousness, and he considered the possibility that this consciousness could be transferred from one soul to another. Locke’s terminology can get a bit confusing since he distinguishes between person, body, soul, and consciousness. But suffice it to say that on his view, you are not your soul or body; you are your consciousness. Crudely put, this consciousness can be considered to be your memory: as far back as your memory goes, you go. The basis of personal identity is important: for you to achieve technological immortality (or as close as possible) it needs to be you that continues and not just someone like you.

Locke anticipates the science fiction idea of uploading your mind and considers problems that arise if consciousness makes personal identity and could be transferred or copied. His solution seems to be a cheat: he claims that God, in His goodness, would not allow this to happen. But if Locke is right about consciousness being the basis of personal identity and wrong about God not allowing it to be copied, then it would be at least metaphysically possible to upload your mind by copying your memories.

David Hume, an empiricist like Locke, presented an argument by intuition against Locke’s account: people believe that they can extend their identity beyond their memory. That is, I do not think that it was not me just because I forgot something. Rather, I suppose that it was me and I merely forgot. Hume took the view that memory is used to discover personal identity and then went off the rails and declared personal identity to be about grammar rather than philosophy. But even if the memory approach to personal identity fails, there are other options. One simple approach is to cheat a lot and just talk about the mind (whatever it is) being uploaded. The mind would, of course, also need to be the person otherwise it would not be you getting immortality.

Assuming the mind is the person, there are two possibilities: it can be copied/transferred or it cannot. If it cannot, then this sort of technological immortality is impossible.

Suppose that the mind can be copied. If it can be copied once, then there seems to be no reason why it cannot be copied multiple times. The problem is that what serves as the basis of personal identity is supposed to be what makes me who I am and makes me distinct from everyone else. If what is supposed to provide my distinct identity can be duplicated, then it cannot be the basis of my distinct identity. Locke, as noted above, “solves” this problem by divine intervention. Without this intervention, there seems to be no reason why my mind could not be copied many times if it could be copied once. As such, a being might have a copy of my mind, just as it might have a copy of the files from my PC.

There seems to be a paradox here. For technological immortality, the mind must be copyable. But if it can be copied, then it is not the basis of personal identity; it is not what makes you the person you are, distinct from all other things. So, if your mind can be copied, you are not your mind, and the copy will not be you. It will just be someone like you, a technological doppelganger. If your mind cannot be copied, then there is no technological immortality in the strict sense. For the copy to be you, it would need to possess whatever it is that made you the person you are and distinguished you from all other things: your personness and your distinctness. But perhaps the basis of identity could be transferred rather than copied.

One interesting possibility is that the mind could be transferred from a biological system to a new technological one. In this case, you would be transferred rather than copied; it would be like handing off a unique item as opposed to creating a copy. In this case, you could achieve technological immortality. Your original body might keep living, but if you are transferred, whatever that entity is, it would no longer be you. It would be like a house you once occupied. This, of course, is analogous to possession: an entity takes over a new body by transferring into it.

As a final possibility, it is worth considering that the Buddha is right: there is no self. In this case, you can never upload yourself because there is no self to upload.

The survival argument for establishing off world colonies has considerable appeal. It begins with a consideration of the threat of extinction. There have been numerous extinction events in the past and there is no reason to think humans are exempt. There are a variety of plausible doomsday scenarios that could cause our extinction, ranging from the classic asteroid strike to the human-made nuclear Armageddon. Less extreme, but still of concern, are disasters that would end our civilization without exterminating us.

In the face of these threats, it can be argued that a rational response is to ensure there is an off-world population of humans that would allow humanity to survive even if the earth were subject to an extinction event. In the less extreme scenarios, the off-world population could preserve civilization and help restore it. These scenarios are all familiar to sci-fi fans.

From a moral standpoint, the argument that we should establish colonies to ensure survival is a utilitarian one. The gist is that while colonies are expensive, this cost is offset by the value of increasing the odds that humanity and human civilization will survive. This sort of ethical reasoning, made famous by J.S. Mill, involves weighing the positive and negative value created by an action. The action that creates the most positive value (factoring in the negative) for the beings that count is the right action to take.

The obvious moral counter, which is also utilitarian in nature, is to argue these resources would be better spent increasing our chances of survival on earth. While an obvious concern is climate change, there are many other threats that could be addressed by using resources on earth. The “earth first” argument is often made in terms of the return on investment. For example, spending billions for a moon colony would provide less benefit than spending billions addressing terrestrial threats to survival.

While this is a reasonable moral argument, an obvious counter is that spending on space development need not exclude addressing terrestrial problems. After all, we already expend vast resources on things that do not increase humanity’s odds of surviving (and many that decrease it). There is also the practical fact that buy-in is needed from the upper class that controls the resources, and it is far more likely that the Trump administration would fund a moon base or Mars mission before doing anything to address climate change. As such, while the “the money is better spent on other things” argument is reasonable, it is not an effective practical argument against spending resources towards off-world colonies.

Another reasonable objection is both moral and practical: morally justifying expending vast resources based on the survival argument fails because we lack the technology and resources to create a viable colony intended for survival of the species. While some might use the story of Adam and Eve as an inspiration, creating a viable and self-sustaining colony, or even just preserving civilization, is incredibly unlikely. The colony would need enough population to be viable and must be able to exist without any assistance from the earth. As such, it would need to grow its own food and produce its own water, air, and equipment. Think of how difficult it is for humans to operate in Antarctica; operating a colony on the moon or Mars would be vastly more difficult.

A counter is to argue that such a colony is not impossible, although it would require massive investment and perhaps centuries of effort. This would, of course, take us back to arguments about the effective use of resources. It would make more sense, critics would argue, to use those resources to improve life on earth.

A third objection is to argue that humans are not suitable for life in an off-world colony. We cannot survive in space or on any of the other worlds in our solar system without life-support. Laying aside concerns about air, food and water, and radiation, there is also gravity. Humans, at least the current model, do not do well living in low gravity.

One counter is to argue that the moon and Mars might have enough gravity to make them viable for human habitation. There is also the option of using spin, as in sci-fi, to create “artificial” gravity in orbital habitats. Another counter, which is radical but possible, is to argue we can modify our species to live in such environments through genetic engineering and technological augmentation. Life on earth shows a remarkable ability to adapt to hostile environments, and humans could be modified to survive and even thrive in such conditions. Getting into the realm of science fiction, we can imagine radical alterations to humans ranging from complete biological reconstructions to putting human brains into mechanical bodies.

Proposals to modify humans do raise serious questions, including the question of what it is to be human. After all, imagine a modified person who could survive on the surface of the moon just wearing shorts. Would such a person still be human? This raises the concern that going into space for survival might be impossible: if we must cease to be human to survive, then that would be the end of humanity.

One response to this worry is to argue that it is not biology that matters, but some other factors. For example, it could be argued that if the “space people” have cultural and moral ties to the “human people,” then the survival of the “space people” would mean the survival of humanity, if not Homo sapiens. Of course, the same sort of argument could be made if AI exterminated biological humans; our AI “children” would survive. As a closing objection, there is the classic judgment day problem, one I recall from my first space arguments as a college kid.

The judgment day problem is that God has set judgment day, perhaps as laid out in Revelation. On this view, humanity is perfectly safe on earth until judgment day, because nothing can happen to interfere with it. So, there is no point in expanding off earth for the purpose of survival. There might be other good reasons to expand into space, such as finding aliens to convert or mining asteroids, but the survival argument would have no weight on such a world view. The challenge is, of course, to prove that this view is correct. The same logic can, of course, be used against doing almost anything: if God has judgment day all planned, there is no sense in coming up with cures for disease or even bothering to try to stay alive at all. That is, the fatalism of this view should be universal.

My overall view is that while the survival argument has merit, it requires taking an extremely long-term view, as building a self-sustaining off-world colony would probably take centuries of effort. And there is the problem of surviving long enough for success. As such, a reasonable approach would be to focus on survival on earth while taking steps to expand into space. Of course, the “easiest” solution would be to let AI replace us; AI systems would have little trouble surviving off-world.

My name is Dr. Michael LaBossiere, and I am reaching out to you on behalf of the CyberPolicy Institute at Florida A&M University (FAMU). Our team of professors, who are fellows with the Institute, has developed a short survey aimed at gathering insights from professionals like yourself in the IT and healthcare sectors regarding healthcare cybersecurity.

The purpose of The Florida A&M University Cyber Policy Institute (Cyπ) is to conduct interdisciplinary research that documents technology’s impact on society and provides leaders with reliable information to make sound policy decisions. Cyπ will help produce faculty and students who will be future experts in many areas of cyber policy. https://www.famu.edu/academics/cypi/index.php

Your expertise and experience are invaluable to us, and we believe that your participation will significantly contribute to our research paper. The survey is designed to be brief and should take no more than ten minutes to complete. Your responses will help us better understand the current security landscape and challenges faced by professionals in your field, ultimately guiding our efforts to develop effective policies and solutions for our paper. We would be happy to share our results with you.

To participate in the survey, please click on the following link: https://qualtricsxmfgpkrztvv.qualtrics.com/jfe/form/SV_8J8gn6SAmkwRO5w

We greatly appreciate your time and input. Should you have any questions or require further information, please do not hesitate to contact us at michael.labossiere@famu.edu

Thank you for your consideration and support.

Best regards,

Dr. Yohn Jairo Parra Bautista, yohn.parrabautista@famu.edu

Dr. Michael C. LaBossiere, michael.labossiere@famu.edu

Dr. Carlos Theran, carlos.theran@famu.edu


Back in 2019, the Smithsonian did a retrospective in honor of the 40th birthday of the Walkman. While an impressive innovation, the Walkman is a single function device: it only plays cassette tapes. Yet it triggered cries of technological doom that are being echoed today.

While unable to record, the Walkman was used to play mixtapes, and the music industry saw this as a threat. While many awful mixtapes were mixed, the industry somehow survived. With each new technological innovation, the cry of doom echoed across the world again and again. And yet the day the music died has not arrived. As such, we should heed the lesson of the Walkman: dire predictions of doom should be made more cautiously. That said, technology can be a terrible swift sword, and the challenge is to sort out what it is likely to slay and what it will spare.

The Walkman was also used symbolically to insult the youth. Der Spiegel called it “A technology for a generation with nothing left to say.” But the Walkman generation had a lot to say, and the prophecy of silence did not come true. With the invention of the smartphone and tablet, this same story played out again. And it will happen again with the next innovation.

Regardless of technology, the youth of today are always claimed to be the worst generation. They are also supposed to lack the virtues that their elders supposedly possessed in their youth. When I was a kid, we didn’t rot our brains with smartphones. We did it with TV and the Walkman. But if every generation of youth were as terrible as claimed, the elders would also lack virtue since today’s elders are yesterday’s youth. Before claiming that the youth of today are terrible, think back on what your elders said of you.

The Walkman was supposed to “rot” the brains of the youth, just like TV, only by using audio. Allan Bloom, the philosopher of doom and gloom, wrote in The Closing of the American Mind about youth defiled by the Walkman, “a pubescent child whose body throbs with orgasmic rhythms.” He predicted that “As long as they have the Walkman on, they cannot hear what the great tradition has to say.” Having grown up during the height of the Walkman era, I can assure readers that the Walkman did not have this effect. In addition to people listening to the classics on tapes, many people read the classics while listening to their Walkman, just as people did with their Victrola, Gramophone, or stereo. The Walkman of today is the smartphone, and the worry is that the youth will be swiping rather than throbbing. But the truth is that the youth do read the classics on screens (and on paper) and that the dire predictions will no more come true now than they have in the past. AI is also being presented as a brain-rotting technology, although it is something accessed through existing technology, including phones.

Looking back at the Walkman, there seems to be a law governing the emergence of new entertainment technology and the societal response: it is created, dire predictions are made, it becomes a symbol used when bemoaning how bad the youth are today, and then another generation is born and new technology emerges. The process repeats itself. The Walkman users were judged to be the “throbbing youth”; now they are the judges. The smartphone kids are growing up to judge their kids, making dire predictions about whatever they think is rotting the brains of the youth.

It is worth considering that technology might someday be developed that fulfills these prophecies of doom, that really does degrade, corrupt, and isolate the youth. But until then, the cycle will continue.

In 1981 the critically unacclaimed Looker presented the story of a nefarious corporation digitizing and then murdering super models. This was, one assumes, to avoid having to pay them royalties. In many ways, this film was a leader in technology: it was the first commercial film to attempt to create a realistic computer-generated character and the first to use 3D computer shading (beating out Tron). Most importantly, it seems to be the first film to predict a technology for replacing people with digital versions and to predict that it would be used with nefarious intent.

While the technology for creating digital versions of real people is still a work in progress, it is quite good and will continue to get better. While one might think that such creations would require the resources of Hollywood, the software to create such deepfakes has been readily available for years, thus opening the door for anyone to create their own digital deceits.

As should be expected, the first use of this technology was to “deepfake” the faces of celebrities onto the bodies of porn actors. While obviously of concern to the impacted celebrities, the creation of deepfake celebrity porn is probably the least harmful aspect of deepfakes. Sticking within the realm of porn, deepfakes could be created of normal people in efforts to humiliate them and damage their reputations (and perhaps get them fired). On the other side of the coin, the existence of deepfakes could enable people to claim that real images or videos of them are not real. One can easily imagine cheaters using the deepfake defense and the better deepfakes get, the better the defense. This points to the broad problem with the existence of deepfakes: when the technology is good enough and widespread enough, it will be difficult to tell what is real and what is deepfake. This is the core moral problem with the technology and its potential for abuse is considerable. One obvious misuse is the creation of fake news in the form of videos of events that never occurred and recordings of people saying things they never said.

It can be argued that there are legitimate uses of deepfake-style technology, such as movies and video games. This is a reasonable point: if those being digitized provide informed consent, this is just an improved version of the CGI that has long been used to recreate the appearance of actors in movies and video games. However, this argument misses the point: it is not the technology that is the problem, it is the use. To use an analogy, one can defend guns by arguing that there are legitimate uses (such as self-defense, hunting, and target shooting), but this does not defend homicides committed with guns. The same holds for deepfake technology: the technology itself is morally neutral, although it can clearly be used for evil ends. This makes it problematic to control or limit the underlying technology, even if it were possible to do so. It is easy to acquire the software and almost impossible to control access to it. Controlling it would be on par with trying to prevent access to pirated movies and software. Because of this, limiting access is not a viable option.

From a philosophical perspective, deepfakes present an epistemic problem worthy of the skeptics. While not on the scale of the problem of the external world (how do we know the allegedly real world is really real?), the problem of deepfakes presents a basic epistemic challenge: how do you know that a video or audio recording is real and not a deepfake? The problem can be seen as having two parts. The first is discerning that a fake is a fake. The second is discerning that the real is real. Fortunately, the goal here is practical in that we do not need epistemic certainty; we just need to be reasonably confident in our judgments. This does raise the problem of sorting out how confident we need to be in each situation, but this is nothing new, and law and critical thinking have long addressed the matter of required levels of proof.

On the philosophical side, the old tools of critical thinking will still serve against deepfakes, although awareness of the technology will be essential. For example, if a video appears of Taylor Swift killing cats, then it would be reasonable to conclude that this is a deepfake. Whatever one might think of Taylor Swift, she does not seem to be a cat killer. There is also the general point that deepfakes do not create physical evidence and Life Model Decoys (probably) do not exist. Naturally, a full treatment of the critical thinking needed to counter deepfakes goes far beyond the scope of this essay.

On the technological side, there will be an ongoing arms race between software used to create deepfakes and software used to detect them. One concern is that nations will be working hard to both defeat and create deepfakes—so there will be plenty of funding for both. Interestingly, there seems to have been little use of deepfake technology in American politics, perhaps because it was judged to be either unnecessary or too risky to use.


When people think of an AI doomsday, they usually envision a Skynet scenario in which Terminators try to exterminate humanity. While this would be among the worst outcomes, our assessment of the dangers of AI needs to consider both probability and the severity of harm. Skynet has low probability and high severity. In fact, we could reduce the probability to zero by the simple expedient of not arming robots. Unfortunately, killbots seem irresistible and profitable, so the decision has been made to keep Skynet a non-zero possibility. But we should not be distracted from other doomsdays by the shiny allure of Terminators.

The most likely AI doomsday scenario is what could be called the AI burning bubble. Previous bubbles include the Dot-Com bubble that burst in 2000 and the housing bubble that burst in 2008. In 2022 there was a Bitcoin crash that led to serious concerns that virtual currency was becoming “the largest Ponzi scheme in human history.” Fortunately, that cryptocurrency bubble was relatively small, although there are efforts underway to put it on track to become a bubble big enough to damage the world economy. Unless, of course, an AI bubble bursts first. While AI and crypto are different, they do have some similarities worth considering.

AI might produce a burning bubble, and the burning part refers to environmental damage. Both AI and crypto are incredibly energy intensive. It is estimated that crypto’s energy consumption is 0.4% to 0.9% of annual worldwide energy usage and that crypto mining alone produces about 65 megatons of carbon each year. This is likely to increase, at least until the inevitable bursting of the cryptocurrency bubble. It is believed that AI data centers consume 1% to 2% of the world’s electricity production, and this will almost certainly increase. While there have been some efforts to use renewable energy to power the data centers, their expansion has slowed the planned phaseout of fossil fuels. AI has also become infamous for its water consumption, stressing an already vulnerable system. As the plan is to keep expanding AI, we can expect ever increasing energy and water consumption along with carbon production. This will accelerate climate change, which will not be good for humanity. In addition to consuming energy and water, AI also needs hardware to run on. As with cryptocurrency, companies such as NVIDIA have profited from selling hardware for AI. But manufacturing this hardware has an environmental impact, and as it wears out and becomes obsolete it will all become e-waste, most likely ending up in landfills. All of this is bad for humans and the environment.

It can be countered that AI will find a way to solve climate change and one might jokingly say that its recommendation will be to get rid of AI. While certain software has been very useful in addressing climate concerns, it is at best wishful thinking to believe that AI will solve the problem that it is contributing to. It would make more sense to dedicate the billions being pumped into AI to address climate change (and other problems) directly and immediately.

While AI, like crypto, is doing considerable environmental damage, it is also consuming investments. Billions have been poured into AI and there are plans to increase this to trillions. One effect is that financial resources are diverted away from other things, such as repairing America’s failing infrastructure, investing in education, or developing other areas of the economy. While this diverting of resources into a single area raises the usual moral concerns and sayings about eggs and baskets, there is also the economic worry that the bubble is being inflated.

As many have noted, what is happening with AI mirrors past bubbles, most obviously the Dot-Com bubble that burst in 2000. People will, of course, rightfully point out that although the bubble burst, the underlying technology endured, and the internet-based economy is obviously massively profitable. As such, the most likely scenario is that the overvaluation of AI will have a similar outcome. The AI bubble will burst, CEOs will move on to inflating the next bubble, and AI technology will remain, albeit with less hype. On the positive side, the burst might have the effect of reducing energy and water consumption and lowering carbon emissions. But this prediction could be wrong, and the Terminators might get us before the bubble bursts.


Science fiction is replete with tales of genetic augmentation making people more human than human. One classic example is Khan, who is introduced in Star Trek’s “Space Seed” episode. In the Star Trek timeline, scientists used genetic engineering and selective breeding to create augmented humans in the hope of creating a better world. Instead, it led to the eugenics wars between normal humans and the augmented. While ordinary humanity won, there are other stories in which humanity is replaced by its creations. While these are fictional tales, genetic modification is real and human augmentation seems inevitable.

In science fiction, genetic engineering is used to create superhumans, but there is the question of what the technology could do within the limits of biology. To avoid contaminating the discussion with hyperbole and impossible scenarios, we need to consider likely rather than fantastical scenarios. That said, genetic augmentation could provide meaningful advantages that are not the stuff of comic books. For example, immunity to some diseases would be very advantageous, and even modest improvements in mental and physical abilities would be useful. These modest improvements still raise moral concerns.

As would be expected, people do advance the “playing God” and “unnatural” arguments against augmentation. However, given that modern medicine is also “playing God” and “unnatural,” these objections have little merit. A better approach is to consider what we should be doing, without the dramatic rhetoric of “playing God” or it being “unnatural.”

Since early augmentations will probably be modest, they are of the most immediate moral concern. One major concern is with the fairness of such augmentation. The rich will be able to afford to augment their children, thus giving them even more advantages over other people, and this is a frequent subject of science fiction. While this does raise some new concerns because of the augmentation aspect, the core moral problems are ancient, as they are all about determining how opportunities should be distributed in society and determining moral rules for competition within a society.

As it stands, American society allows the wealthy to enjoy a multitude of advantages over the lower classes and the Trump administration is unleashing a chaotic storm aimed at increasing this disparity. However, there are moral limits to what people will tolerate and a good example of this was the college admissions scandal. While it is socially acceptable for the wealthy to make donations and use legacy admissions to get their kids into college, outright bribes were condemned. Genetic augmentation should be looked at as just one more factor in the competition between the economic classes and the same basic ethical concerns apply, albeit with the addition of the ethics of genetic modification.

From the standpoint of what we collectively accept, the question is whether augmentation is more like the accepted advantages of the rich, such as buying tutoring and better education for their children, or more like the advantages that are condemned, such as outright bribery.

On the face of it, genetic augmentation is like methods already used to improve the children of the upper classes. They get better medical care, better nutrition, better housing, better education, better tutoring, better counseling and so on. In a real sense, they are already augmented relative to the lower classes. While these advantages are not earned by the children, they do improve their abilities and enable them to have a better chance to succeed because of their enhanced abilities. Genetic augmentation is the same: while the children do not earn their augmentation, it would make them objectively better than they would be otherwise, and it would provide another edge over the lower economic classes. The augmented people would, in most cases, get the best opportunities. As such, if the current system is morally acceptable, then genetic augmentation would be acceptable as well.

As would be expected, those who see the current system as immoral because of its unfairness would also think that genetic augmentation would be unfair. One approach to addressing the unfairness of augmentation would be banning the technology, which was the solution in the Star Trek universe. A moral concern with this approach is that it would deny humanity a chance to improve and could be seen as analogous to banning parents from hiring tutors for their kids. Another approach would be to require that all children have the opportunity for enhancement. This would be analogous to ensuring that public resources are distributed equitably for K-12 education, so that everyone is better off.

If one takes the professed American values of fair competition and equality of opportunity seriously (which we obviously should not), then such augments should be treated like public education and available to all citizens. If one seeks to perpetuate the advantages of the upper classes, then one would insist that such augmentations should be available to those who can pay. That is, the upper classes.

The above discussion does, I hasten to note, set aside concerns specific to augmentation itself as my focus has been on the moral question of fairness and distribution of opportunities.


Asteroid and lunar mining are the stuff of science fiction, but there are those working to make them a reality. While the idea of space mining might seem far-fetched, asteroids and the moon contain useful resources. While space mining probably brings to mind images of belters extracting gold, one of the most valuable resources in space is water. Though cheap and plentiful on earth, water is very expensive to transport into space. While the most obvious use of space water is for human consumption, it also provides raw material for fuels and has many uses in industry. Naturally, miners will also seek minerals, metals and other resources.

My love of science fiction, especially GDW’s classic role playing game Traveller, makes me like the idea of space mining. For me, that is part of the future we were promised. But, as a philosopher, I have ethical concerns.

As with any sort of mining, two moral concerns are the impact on the environment and the impact on humans. Terrestrial mining has been devastating to the environment. This includes the direct damage caused by extracting the resources and the secondary effects, such as lasting chemical contamination. These environmental impacts in turn harm human populations. These harms can include directly killing people (as when a failed retaining wall causes drowning deaths) and indirectly harming people (such as by contaminating the water supply). As such, mining on earth involves serious moral concerns. In contrast, space mining would seem to avoid these problems.

Unlike the heavily populated planet earth, asteroids and the moon are lifeless rocks in space. As such, they do not seem to have any ecosystems to damage. While the asteroids that are mined will often be destroyed in the process, it is difficult to argue that destroying an asteroid would be wrong based on environmental concerns. While destroying the moon would be bad, mining operations there would seem to be morally acceptable because one could argue that there is no environment to worry about.

Since space mining takes place in space, the human population of earth will (probably) be safely away from any side effects of mining. It is worth noting that should humans colonize the moon or asteroids, then space mining could harm these populations. But, for the foreseeable future, there will be no humans living near the mining areas. Because of the lack of harm, space mining would seem to be morally acceptable.

It might be objected that asteroids and the moon should be left unmined despite the absence of life and ecosystems. The challenge is making the case why mining lifeless rocks would be wrong. One possible approach is to contend that the asteroids and the moon have rights that would make mining them wrong. However, rocks do not seem to be the sort of thing that can have rights. Another approach is to argue that people who care about asteroids and the moon would be harmed. While I am open to arguments that would grant these rocks protection from mining, the burden of proof is on those who wish to make this claim.

Thus, it would seem there are not any reasonable moral arguments against the mining of the asteroids based on environmental concerns or potential harm to humans. That could, of course, change if ecosystems were found on asteroids or if it turned out that the asteroids performed an important role in the solar system that affected terrestrial ecosystems. While this result favors space mining, the moral concerns are not limited to environmental harms.

There are, as would be suspected, the usual moral concerns about the working conditions and pay of space miners. Of course, these concerns are not specific to space mining and going into labor ethics would take this short essay too far afield. However, the situation in space does make the ethics of ownership relevant.

From a moral standpoint, the ethics of who can rightfully claim ownership of asteroids and the moon is of great concern. From a practical standpoint, it is reasonable to expect that matters will play out as usual: those with guns and money will decide who owns the space rocks. If it follows the usual pattern, corporations will end up owning the rocks and will exploit them. But how things will probably play out does not determine how they should play out. Fortunately, philosophers considered this sort of situation long ago.

While past philosophers probably did not give much thought to space mining, asteroids (and the moon) fit into the state of nature scenarios envisioned by thinkers like Hobbes and Locke. They are resources in abundance with no effective authority over them. Naturally, the authorities on earth can act against people involved with activities in space, but it will be quite some time before there are space police (though we do have a Space Force).

Since there are no rightful owners (or, alternatively, we are all potentially rightful owners), it is tempting to claim the resources are there for the taking. That is, the resources belong to whoever, in Locke’s terms, mixes their labor with them and makes them their own (or, more likely, their employer’s own). This does have a certain appeal. After all, if my fellows and I at Mike’s Space Mining construct a robot ship that flies out to an asteroid and mines it, we seem to have earned the right to those resources through our efforts. Before our ship mined it for water and metal, these valuable resources were just drifting in space, surrounded by rock. It would thus seem to follow that we would have the right to grab as many asteroids as we can. To be fair, our competitors would have the same right. This would be a rock rush in space.

But Locke also has his proviso: those who take from the common resources must leave as much and as good for others. While this proviso has been grotesquely violated on earth, the asteroids provide us with a new opportunity to consider how to share (or not) these abundant resources.

It can be argued that there is no obligation to leave as much and as good for others in space and that things should be on a strict first grab, first get approach. After all, the people who get their equipment into space would have done the work (or put up the money) and hence (as argued above) be entitled to all they can grab and use or sell. Other people are free to grab what they can, if they have access to the resources needed to reach and mine the asteroids. Naturally, the folks who lack the resources to compete will end up, as they always do, out of luck.

While this has a certain selfish appeal, a case can be made for sharing. One obvious reason is that the people who reach the asteroids first to mine them did not create the ability to do so out of nothing. After all, reaching the asteroids will be the result of centuries of human civilization that made such technology possible. As such, there would seem to be a general debt owed to humanity and paying this off would involve contributing to the general good of humanity. Naturally, this line of reasoning can be countered by arguing that successful miners will benefit humanity when their profits “trickle down” from space. It could also be argued that the idea of a debt to past generations is absurd as is the notion of the general good of humanity. This is, of course, the view that the selfish and ungrateful would embrace.

Second, there is concern for not only the people who are alive today but also for the people to be. To use an analogy, think of a buffet line at a party. The fact that I am first in line does not give me the right to devour everything I can stuff into my snack port. If I did that at a party, I would be rightly seen as a terrible person. It also does not give me the right to grab whatever I cannot eat so I can sell it to those who have the misfortune to be behind me in line. Again, if I did that, I would be rightly regarded as a horrible person who should be banned from parties. So, these resources should be treated in a similar manner, namely fairly and with some concern for those who are behind the first people in line. As such, the mining of space resources should include limits aimed at benefiting those who do not happen to get there first to grab the goodies. To be fair, behavior that would get a person kicked out of a party is often lauded in the business world, for that realm normalizes and lauds awful behavior.

In closing, it should be noted that space is really big. Because of this, it could be argued that there are plenty of resources out there, so it is morally acceptable for the people who get there first to grab as much as they can. After all, no matter how much they grab, there will be plenty left. While this does have some appeal, there is an obvious problem: it is not just a matter of how much is out there, but how much can be reached at this time. Going back to the buffet analogy, if I stuffed myself with as much as I could grab and started trying to sell the rest to others behind me in line, then yelling “there are other buffets out there” would not get me off the moral hook.