Scientific researchers often believe that technology is insulated from racial bias. This assumption is false, and blind trust in the infallibility of medical technology is perpetuating racism within the healthcare system.
Unexplored racial biases complicate the generally accepted notion that medical technology plays a purely constructive role in preserving health and well-being.
Wearable healthcare devices rely on digital biomarkers to continually collect data that can be used to predict health outcomes, playing a vital role in determining the course of treatment for the more than 57 million Americans who rely on them. The most commonly employed device is the wearable optical heart rate (HR) monitor, which relies on photoplethysmography (PPG), the most common noninvasive optical technique for detecting blood circulation. Wearable technology can also monitor blood glucose levels and heart rate variability. The appeal of this technology is clear: healthcare providers can gain valuable data noninvasively, lowering costs and increasing access.
Unfortunately, these seemingly innocuous devices are not as impartial as they appear.
A comprehensive analysis by researchers at Duke University concluded that “inaccurate PPG HR measurements occur up to 15% more frequently in dark skin as compared to light skin, likely because darker skin contains more melanin which absorbs more green light than lighter skin.” This problem is not confined to wearable devices. A study by researchers at the University of Michigan found that pulse oximeters are three times more likely to miss low oxygen levels in Black patients than in white ones. Pulse oximeters transmit light through skin and tissue to measure the level of oxygen in a patient’s blood; the darker the patient’s skin, the higher the likelihood the oximeter will miss low oxygen levels. This shortcoming has very real and deadly implications. During the COVID-19 pandemic, oximeters that reported oxygen levels as higher than they really were led to patients being turned away from hospitals and denied higher levels of care.
With the use of these technologies only increasing, these systemic problems may represent just the tip of the iceberg. The growing reliance on predictive algorithms to power various facets of the healthcare system and treatment process creates still more room for bias. A study published in Science in 2019 made waves with its conclusion, drawn from a comprehensive review of U.S. hospital data, that “people who self-identified as black were generally assigned lower risk scores than equally sick white people. As a result, the black people were less likely to be referred to the programs that provide more-personalized care.” The findings revealed that “only 17.7% of patients that the algorithm assigned to receive extra care were black. The researchers calculate that the proportion would be 46.5% if the algorithm were unbiased.”
These startling results point toward an unignorable pattern: increasing reliance on seemingly neutral technological tools has the potential to encode, perpetuate, and solidify systemic biases in the healthcare system. The dangerous myth that technological tools are somehow more objective and reliable because they reduce the risk of “human error” has blinded the scientific community to the truly pervasive nature of discrimination. The process of technological development, from conceptual inception to physical deployment, is very much a human process, and thus not immune to bias.
The biomedical research and development process is in dire need of an overhaul.
Current research methods have proven insufficient in accounting for the complex web of social factors that helps determine health outcomes, what scientists refer to as the “social determinants of health.” Access to education, high-quality healthcare, reliable infrastructure, and economic resources all shape health outcomes, confirming that medical outcomes rest on more than purely biological factors. The problem lies in the often-blurred distinction between efficacy and effectiveness. The NCBI defines efficacy as the “performance of an intervention under ideal and controlled circumstances,” in contrast with effectiveness, its “performance under ‘real-world’ conditions.” The two concepts are similar, but the distinction between them has major implications. The danger lies in the fact that even prominent authorities within the scientific establishment not only mistakenly conflate the two but often evaluate technology for efficacy alone, overestimating how it will perform once deployed. A product evaluation analysis for Health Technology Assessments concluded that efficacy data is often assumed to be effectiveness data. The widespread nature of this misconception is concerning: a study presented at the International Society for Pharmacoeconomics and Outcomes Research conference in Berlin found that “while agencies state they are evaluating efficacy and effectiveness a similar number of times, their evaluations are generally for efficacy.”
We should not accept standards that fail to encompass the social nuances that shape the real world and, therefore, real health outcomes. The danger of unquestioned reliance on biased healthcare tools cannot be overstated—technology uniquely enables the scaling of treatment techniques on a much larger level, exporting the localized methods developed in one healthcare system to millions of people nationwide. This allows existing biases to be inscribed even deeper into the healthcare decision-making process, damning vulnerable communities to poor outcomes for generations to come.
Though the biases present in the development of healthcare technology may have been inadvertent, to neglect them once brought to light would be to actively perpetuate them. Only a major shift within the scientific community will be enough to begin to correct this course, but this won’t happen without rethinking the long-standing and deeply entrenched standards that technological development processes have hinged on for far too long.
It is important that we do not buy into the myth that technology creates a shield from systemic biases. The solution just might be to re-humanize rather than automate the process, development, and implementation of healthcare innovations. As expressed by German pathologist Rudolf Virchow in 1848, “Medicine is a social science, and politics is nothing but medicine on a large scale.”
Meera Sehgal is a first-year Political Science and Communications double major at American University. She is a Staff Writer at the American Agora.
Image courtesy "thinkpanama," Creative Commons.