Reply to Letter to the Editor on “Not all biases are bad: equitable and inequitable biases in machine learning and radiology”
Insights into Imaging volume 12, Article number: 157 (2021)
Iannessi et al. [1] commented on our paper “Not all biases are bad: equitable and inequitable biases in machine learning and radiology” [2]. We thank the authors for their critique. At the same time, we would like to take this opportunity to correct some misunderstandings.
In societies characterised by strong social inequities, these inequities are also inscribed in medicine, medical data, and data technologies. This means that the datasets and algorithms used for machine learning (ML) are never straightforward representations of people, bodies, and populations [3]; rather, they are “funhouse mirrors” [4] that reflect how access to resources is distributed in a population. As a result (and this is what we argue in the paper), concerns about equity should ideally enter already at the stage of deciding whether or not a dataset is biased: the question of what constitutes a bias should be the starting point of the reflection. Is a dataset biased if it does not adequately represent the patient population of a particular hospital? A particular city or region? A country? What is a good criterion for deciding whether a dataset is biased once equity considerations are taken into account as well? Furthermore, even if biases cannot be avoided, researchers and practitioners should reflect on the possible effects of biases, in particular with regard to equity. Is a specific bias likely to create inequitable health outcomes for particular patient groups? In other words, we suggest that counteracting inequitable biases requires taking bias seriously as a social and political problem and systematically attending to it in each phase, from data generation to the application of algorithms in medical practice. Which approach to dealing with an inequitable bias is adequate will depend on the context in question. While we certainly do not, as Iannessi and colleagues suggest, promote this as the default solution, in some very specific cases it could be appropriate to oversample underserved or otherwise disadvantaged populations [5, 6].
While Iannessi and colleagues agree with us that biases along social categories such as class, race, and gender exist in datasets and algorithms, and that these create worse health outcomes for those patient groups against whom the algorithms are biased, they disagree with us on how to deal with these biases. They argue that medicine is ‘neutral’ and should be protected from undue political influences, which they see in our suggestions for overcoming inequitable biases. This latter argument ignores several decades’ worth of research findings showing that medicine, data, and technologies are deeply social and political [7,8,9,10] and that healthcare systems often fail to treat patients equally [11, 12], even when the people within them are genuinely committed to doing so. What we show to be the case for medical data and ML in radiology is currently also visible in connection with the Covid-19 pandemic. The pandemic hit those who are marginalised and have the least economic resources the hardest. This is due to housing and working conditions, existing health status, and underinsurance [13, 14], but also to inequities within the medical and healthcare system itself, such as implicit bias among healthcare professionals [15] or triage protocols based on the principle of ‘save the most lives’, which prioritise people without underlying health conditions [16]. Ignoring the social and political dimension of medicine will likely exacerbate such existing inequities and is thus certainly not in the interest of patients.
In sum, with regard to bias in radiology data and algorithms, we do not claim, of course, that datasets should be ‘diversified’ for their own sake. Correcting for biases in datasets and technologies through better representation of disadvantaged groups has the goal of improving the well-being and health status of all patients. Neither do we suggest that, for the sake of equity, we should accept algorithms that are less accurate or not validated, as the authors suggest. Instead, we argue that the very notion of accuracy has questions about inclusion and equity already folded into it. Aiming for healthcare equity is not about creating ‘politically correct’ algorithms, and biases are not inequitable on a symbolic level because they misrepresent certain social groups. Rather, some biases are inequitable because they contribute to worse health outcomes for people who are already disadvantaged; that is, they create and exacerbate concrete health inequities. We do not, however, advocate for creating equity by reducing the quality of healthcare that ‘privileged’ patients currently receive. In fact, it would be inequitable to withhold high-quality healthcare from anyone if we have the means to provide it but fail to do so [17]. Countering inequitable biases means attending to bias as a social and political problem rather than merely as a technological problem that can supposedly be fixed by more data and better computer models. What to do about the problems identified will, in each case, depend on the specific practical context in which data are collected and algorithms are developed or employed.
References

1. Iannessi A, Beaumont H, Bertrand AS (2021) Letter to the editor: “Not all biases are bad: equitable and inequitable biases in machine learning and radiology.” Insights Imaging. https://doi.org/10.1186/s13244-021-01022-5
2. Pot M, Kieusseyan N, Prainsack B (2021) Not all biases are bad: equitable and inequitable biases in machine learning and radiology. Insights Imaging 12(1):13
3. Green S, Svendsen MN (2021) Digital phenotyping and digital inheritance. Big Data Soc. https://doi.org/10.1177/20539517211036799
4. Vegter MW, Zwart HAE, van Gool AJ (2021) The funhouse mirror: the I in personalised healthcare. Life Sci Soc Policy. https://doi.org/10.1186/s40504-020-00108-0
5. Ponce NA (2020) Centering health equity in population health surveys. JAMA Health Forum 1(12):e201429
6. Schrager SM, Steiner RJ, Bouris AM, Macapagal K, Brown CH (2019) Methodological considerations for advancing research on the health and wellbeing of sexual and gender minority youth. LGBT Health 6(4):156–165
7. Lupton D (2012) Medicine as culture: illness, disease and the body. Sage, London
8. Winner L (1980) Do artifacts have politics? Daedalus 109(1):121–136
9. Noble SU (2018) Algorithms of oppression. NYU Press, New York
10. Eubanks V (2018) Automating inequality: how high-tech tools profile, police, and punish the poor. St. Martin’s Press, New York
11. Matthew DB (2018) Just medicine: a cure for racial inequality in American health care. NYU Press, New York
12. Marcelin JR, Siraj DS, Victor R, Kotadia S, Maldonado YA (2019) The impact of unconscious bias in healthcare: how to recognize and mitigate it. J Infect Dis 220(Suppl 2):S62–S73
13. Van Dorn A, Cooney RE, Sabin ML (2020) COVID-19 exacerbating inequalities in the US. Lancet 395(10232):1243
14. Wang Z, Tang K (2020) Combating COVID-19: health equity matters. Nat Med 26(4):458
15. Milam AJ, Furr-Holden D, Edwards-Johnson J et al (2020) Are clinicians contributing to excess African American COVID-19 deaths? Unbeknownst to them, they may be. Health Equity 4(1):139–141
16. White DB, Lo B (2021) Mitigating inequities and saving lives with ICU triage during the COVID-19 pandemic. Am J Respir Crit Care Med 203(3):287–295
17. Wester G (2018) When are health inequalities unfair? Public Health Ethics 11(3):346–355
The authors declare that they have no competing interests.
This is a Reply to the Letter to the Editor https://doi.org/10.1186/s13244-021-01022-5.
Cite this article
Pot, M., Prainsack, B. Reply to Letter to the Editor on “Not all biases are bad: equitable and inequitable biases in machine learning and radiology”. Insights Imaging 12, 157 (2021). https://doi.org/10.1186/s13244-021-01088-1