It might seem like woke madness to claim that medical devices can be biased. Are there white supremacist stethoscopes? Misogynistic MRI machines? Extremely racist X-ray machines? Obviously not: medical devices do not have beliefs or ideologies (yet). But they can still be biased in their accuracy and effectiveness.
One example of a biased device is the pulse oximeter, which uses light to measure blood oxygen. You have probably had one clipped on your finger during a visit to your doctor, or you might even own one. The bias is that the device is three times more likely to miss low oxygen levels in dark-skinned patients than in light-skinned patients. As would be expected, other devices also have accuracy problems when used on people with darker skin. These are essentially sensor biases (or defects), and in most cases they can be addressed by improving the sensors or developing alternative devices. The problem, to exaggerate a bit, is that most medical technology is made by white men for white men. This is not to claim that such biased devices are all cases of intentional racism and misogyny. There is not, one assumes, a conspiracy against women and people of color in this area, but there is a bias problem. In addition to biased hardware, there is also biased software.
Many medical devices use software, and software is often used in medical diagnosis. People are often inclined to think software is unbiased, perhaps because of science fiction tropes about objective and unfeeling machines. While it is true that our current software does not feel or think, bias can make its way into the code. For example, software used to analyze chest X-rays would work less well on women than on men if it was “trained” only on X-rays of men. The movie Prometheus has an excellent fictional example of a gender-biased auto-doc that lacks the software to treat female patients.
These software issues can be addressed by training the software on diverse groups and by testing it for bias with diverse testing groups. Having a more diverse set of people working on such technology would probably also help.
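For readers who want to see the mechanism rather than take it on faith, here is a minimal sketch of how training-data bias works. It is a toy simulation, not any real device's algorithm: the "sensor offset" for group B stands in for something like a skin-tone effect on an optical reading, and all the numbers are illustrative assumptions. A simple diagnostic cutoff is fit using data from group A only, then applied to both groups.

```python
import random

random.seed(0)

def make_group(n, offset):
    # Simulated sensor readings: healthy is roughly N(0, 1), ill is
    # roughly N(2, 1), plus a group-specific offset standing in for a
    # hypothetical skin-tone effect on the sensor. Illustrative only.
    data = []
    for _ in range(n):
        ill = random.random() < 0.5
        reading = random.gauss(2.0 if ill else 0.0, 1.0) + offset
        data.append((reading, ill))
    return data

def accuracy(data, t):
    # Fraction of cases where "reading above cutoff" matches illness.
    return sum((r > t) == ill for r, ill in data) / len(data)

def fit_threshold(data):
    # "Train" by scanning for the cutoff that best separates the
    # classes on this data -- a stand-in for fitting any model.
    best_t, best_acc = 0.0, 0.0
    for i in range(200):
        t = -2.0 + i * 0.04  # scan cutoffs from -2.0 to about 6.0
        acc = accuracy(data, t)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Fit only on group A; group B's readings carry the sensor offset.
group_a = make_group(4000, 0.0)
group_b = make_group(4000, 1.5)
t = fit_threshold(group_a)

print(f"group A accuracy: {accuracy(group_a, t):.2f}")
print(f"group B accuracy: {accuracy(group_b, t):.2f}")
```

Running this shows the cutoff works noticeably worse on the group it was never fit on, which is the software analogue of the pulse oximeter problem: the tool is not malicious, it was simply built and validated on an unrepresentative population.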
Another factor, analogous to user error, is user bias. People, unlike devices, do have biases, and these can and do affect how they use medical devices and their data. Bias in healthcare is well documented: while overt and conscious racism and sexism are rare, subtle racism and sexism are still problems. Addressing this widespread problem is more challenging than addressing biases in hardware and software. But if we want fair and unbiased healthcare, it is a problem that must be addressed.
As to why these biases should be addressed, this is a matter of ethics. To allow bias to harm patients goes against the fundamental purpose of medicine, which is to heal people. From a utilitarian standpoint, addressing this bias would be the right thing to do: it would create more positive value than negative value, since it would yield more accurate medical data and better treatment of patients.
In terms of a counterargument, one could contend that addressing bias would increase costs and thus should not be done. There are several easy and obvious replies. One is that the cost increase would be, at worst, minor. For example, testing devices on a more diverse population would not seem meaningfully more expensive than not doing so. Another is that patients and society pay a far greater price in terms of illness and its effects than it would cost to address medical bias. For those focused on the bottom line, workers who are not properly treated can cost corporations some of their profit, and ongoing health issues can cost taxpayers money.
One can, of course, advance racist and sexist arguments by professing outrage at “wokeness” attempting to “ruin” medicine by “ramming diversity down throats” or however Fox News would put it. Such “arguments” would be aimed at preserving the harm done to women and people of color, which is an evil thing to do. One might hope that these folks would be hard pressed to turn, for example, pulse oximeters into a battlefront of the culture war. But these are the same folks who professed to lose their minds over Mr. Potato Head and went on a bizarre rampage against a grad-school-level theory that has been around since the 1970s. They are also the same folks who have gone anti-vax during a pandemic, encouraging people to buy tickets in the death lottery. But the right thing to do is to choose life.

I can’t get to the site yet, but a scholarly fellow who explores the notion of AI consciousness got me thinking this morning. Two of YOUR recent posts contribute to my musings: value vagueness and bias (?) in medical devices. Here is where I’m going, roughly. If we think of human consciousness broadly, it implies both a bright side and a dark one. Consciousness = the ability to be empathetic and sympathetic. It also = the ability and volition for deception and deviousness.
That offered, would it follow, uh, sooner or later, that AI, upon attaining some level of “consciousness”, might eventually become deceptive and devious? I think this was always already *built in* to AI system capacity. I further contend that the development of AI for things like customer service anticipates this. Why? Because of human nature and its quirkiness. Few, if any, people who are designated customer service representatives (CSRs) really care much about the idea; companies don’t like criticisms of their products and services: CSRs are smoothers and soothers. AI does not get emotional. Yet. Therefore, complaining to an AI adjunct is a meaningless waste of one’s time. Does AI have an attitude? Maybe not. Someone is working on that…
AI representations are creepy, it seems to me: too perfect, too tailored, too evocative. We write software (although AI is probably writing its own now). Therefore, insofar as we are the source, our biases, in whatever measure, are built into the products we create. Make no mistake: those products are not divine; they do not materialize out of the ether.