Reply to Letter to the Editor on “Not all biases are bad: equitable and inequitable biases in machine learning and radiology”

The Original Article was published on 16 June 2021

Introduction

Iannessi et al. [1] commented on our paper “Not all biases are bad: equitable and inequitable biases in machine learning and radiology” [2]. We thank the authors for their critique. At the same time, we would like to take this opportunity to correct some misunderstandings.

In societies characterised by strong social inequities, these inequities are also inscribed in medicine, medical data, and data technologies. This means that the datasets and algorithms used for machine learning (ML) are never straightforward representations of people, bodies, and populations [3]; rather, they are “funhouse mirrors” [4] that reflect how access to resources is distributed in a population. As a result, and this is what we argue in the paper, concerns about equity should ideally enter already at the stage of deciding whether or not a dataset is biased: the question of what constitutes a bias should be the starting point of the reflection. Is a dataset biased if it does not adequately represent the patient population of a particular hospital? A particular city or region? Or a country? What is a good criterion for deciding whether a dataset is biased once equity is taken into consideration as well? Furthermore, even if biases cannot be avoided, researchers and practitioners should reflect on the possible effects of biases, in particular with regard to equity. Is a specific bias likely to create inequitable health outcomes for particular patient groups? In other words, we suggest that counteracting inequitable biases requires taking bias seriously as a social and political problem and systematically attending to it in each phase, from data generation to the application of algorithms in medical practice. What the adequate approach to dealing with an inequitable bias is will depend on the context in question. While we certainly do not, as Iannessi and colleagues suggest, promote this as the default solution, in some very specific cases it could be appropriate to oversample underserved or otherwise disadvantaged populations [5, 6].
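
To make the oversampling remedy mentioned above concrete, the following minimal sketch rebalances a training set by resampling records from an underrepresented group. It is purely illustrative: the records, the group labels, and the choice to balance every group to the size of the largest one are hypothetical assumptions introduced here for demonstration, not a recommendation from the paper; whether oversampling is appropriate at all depends on the clinical and social context.

```python
# A minimal, purely illustrative sketch of group-aware oversampling.
# The data, the "group" labels, and the balancing target are invented
# for demonstration; they are not drawn from the paper.
import pandas as pd
from sklearn.utils import resample

# Hypothetical training data: one demographic group label per record.
df = pd.DataFrame({
    "feature": [0.2, 0.5, 0.1, 0.9, 0.4, 0.8, 0.3],
    "group":   ["A", "A", "A", "A", "A", "B", "B"],
})

# Resample each group (with replacement) up to the size of the largest group.
target = df["group"].value_counts().max()
balanced = pd.concat(
    [resample(g, replace=True, n_samples=target, random_state=0)
     for _, g in df.groupby("group")],
    ignore_index=True,
)

print(balanced["group"].value_counts())  # both groups now have 5 records
```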

While Iannessi and colleagues agree with us that biases along social categories such as class, race, and gender exist in datasets and algorithms, and that these create worse health outcomes for the patient groups against whom the algorithms are biased, they disagree with us on how to deal with these biases. They argue that medicine is ‘neutral’ and should be protected from undue political influences, which they see in our suggestions for overcoming inequitable biases. This latter argument ignores several decades’ worth of research findings showing that medicine, data, and technologies are deeply social and political [7,8,9,10] and that healthcare systems often fail to treat patients equally [11, 12], even if the people within them are genuinely committed to doing so. What we show to be the case for medical data and ML in radiology is currently also visible in connection with the COVID-19 pandemic. The pandemic has hit those who are marginalised and have the fewest economic resources the hardest. This is due not only to housing and working conditions, existing health status, and underinsurance [13, 14], but also to inequities within the medical and healthcare system itself, such as implicit bias among healthcare professionals [15] or triage protocols based on the principle of ‘save the most lives’, which prioritise people without underlying health conditions [16]. Ignoring the social and political dimension of medicine will likely exacerbate such existing inequities and is thus certainly not in the interest of patients.

In sum, with regard to bias in radiology data and algorithms, we do not, of course, claim that datasets should be ‘diversified’ for their own sake. Correcting for biases in datasets and technologies through better representation of disadvantaged groups has the goal of improving the well-being and health status of all patients. Nor do we suggest that, for the sake of equity, we should accept algorithms that are less accurate or not validated, as the authors suggest. Instead, we argue that the very notion of accuracy has questions about inclusion and equity already folded into it. Aiming for healthcare equity is not about creating ‘politically correct’ algorithms, and biases are not inequitable on a symbolic level because they misrepresent certain social groups. Rather, some biases are inequitable because they contribute to worse health outcomes for people who are already disadvantaged; that is, they create and exacerbate concrete health inequities. We do not, however, advocate creating equity by reducing the quality of healthcare that ‘privileged’ patients currently receive. In fact, it would be inequitable to withhold high-quality healthcare from anyone if we have the means to provide it but fail to do so [17]. Countering inequitable biases means attending to bias as a social and political problem rather than merely as a technological problem that can supposedly be fixed with more data and better computer models. What to do about the problems identified will, in each case, depend on the specific practical context in which data are collected and algorithms are developed or employed.
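
To illustrate what it can mean for accuracy to have inclusion folded into it, the short sketch below, based entirely on hypothetical labels and predictions, reports a model’s accuracy separately for each patient group rather than as a single aggregate figure. This is our illustrative reading of the point, not a method proposed in the paper.

```python
# A minimal sketch of group-stratified evaluation, using invented labels and
# predictions: the point is only that one aggregate score can mask a large
# performance gap between patient groups.
import pandas as pd
from sklearn.metrics import accuracy_score

results = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 1, 0, 0, 0, 0, 1],
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# Overall accuracy looks moderate (0.625)...
print("overall:", accuracy_score(results["y_true"], results["y_pred"]))

# ...but stratifying by group reveals 0.75 for group A and only 0.5 for group B.
for name, g in results.groupby("group"):
    print(name, accuracy_score(g["y_true"], g["y_pred"]))
```

Stratified reporting of this kind is one modest, technical way of folding inclusion into the evaluation itself; it does not, of course, replace attending to bias as a social and political problem.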

References

  1. Iannessi A, Beaumont H, Bertrand AS (2021) Letter to the editor: “Not all biases are bad: equitable and inequitable biases in machine learning and radiology.” Insights Imaging. https://doi.org/10.1186/s13244-021-01022-5

  2. Pot M, Kieusseyan N, Prainsack B (2021) Not all biases are bad: equitable and inequitable biases in machine learning and radiology. Insights Imaging 12(1):13

  3. Green S, Svendsen MN (2021) Digital phenotyping and digital inheritance. Big Data Soc. https://doi.org/10.1177/20539517211036799

  4. Vegter MW, Zwart HAE, van Gool AJ (2021) The funhouse mirror: the I in personalised healthcare. Life Sci Soc Policy. https://doi.org/10.1186/s40504-020-00108-0

  5. Ponce NA (2020) Centering health equity in population health surveys. JAMA Health Forum 1(12):e201429

  6. Schrager SM, Steiner RJ, Bouris AM, Macapagal K, Brown CH (2019) Methodological considerations for advancing research on the health and wellbeing of sexual and gender minority youth. LGBT Health 6(4):156–165

  7. Lupton D (2012) Medicine as culture: illness, disease and the body. Sage, London

  8. Winner L (1980) Do artifacts have politics? Daedalus 109(1):121–136

  9. Noble SU (2018) Algorithms of oppression. NYU Press, New York

  10. Eubanks V (2018) Automating inequality: how high-tech tools profile, police, and punish the poor. St. Martin’s Press, New York

  11. Matthew DB (2018) Just medicine: a cure for racial inequality in American health care. NYU Press, New York

  12. Marcelin JR, Siraj DS, Victor R, Kotadia S, Maldonado YA (2019) The impact of unconscious bias in healthcare: how to recognize and mitigate it. J Infect Dis 220(suppl 2):S62–S73

  13. Van Dorn A, Cooney RE, Sabin ML (2020) COVID-19 exacerbating inequalities in the US. Lancet 395(10232):1243

  14. Wang Z, Tang K (2020) Combating COVID-19: health equity matters. Nat Med 26(4):458

  15. Milam AJ, Furr-Holden D, Edwards-Johnson J et al (2020) Are clinicians contributing to excess African American COVID-19 deaths? Unbeknownst to them, they may be. Health Equity 4(1):139–141

  16. White DB, Lo B (2021) Mitigating inequities and saving lives with ICU triage during the COVID-19 pandemic. Am J Respir Crit Care Med 203(3):287–295

  17. Wester G (2018) When are health inequalities unfair? Public Health Ethics 11(3):346–355

Author information

Contributions

Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Mirjam Pot.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This is a Reply to the Letter to the Editor https://doi.org/10.1186/s13244-021-01022-5.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Pot, M., Prainsack, B. Reply to Letter to the Editor on “Not all biases are bad: equitable and inequitable biases in machine learning and radiology”. Insights Imaging 12, 157 (2021). https://doi.org/10.1186/s13244-021-01088-1
