Discrimination and racial bias in AI technology: A case study for the WHO

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

In a study published in Science in October 2019, researchers found significant racial bias in an algorithm widely used in the US health-care system to guide health decisions. The algorithm used health-care cost (rather than illness) as a proxy for health needs; however, the US health-care system spent less money on Black patients than on white patients with the same level of need. The algorithm therefore incorrectly concluded that Black patients were healthier than equally sick white patients. The researchers estimated that this racial bias reduced the number of Black patients identified for extra care by more than half.
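The mechanism described above can be sketched in a few lines of Python. The numbers below are hypothetical and purely illustrative, not data from the study: they simply show how ranking patients by historical cost, when less is spent on Black patients at the same level of need, selects fewer Black patients for extra care than ranking by need would.

```python
# Hypothetical patients: (group, true_need, historical_cost).
# Illustrative values only; the cost gap at equal need mirrors the
# spending disparity described in the Science study.
patients = [
    ("white", 8, 8000),
    ("Black", 8, 4500),  # same need as the first patient, lower spend
    ("white", 5, 5000),
    ("Black", 5, 3000),
]

# Cost-as-proxy selection: top half by historical cost gets extra care.
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)
extra_care_cost = by_cost[:2]

# Need-based selection, for comparison.
by_need = sorted(patients, key=lambda p: p[1], reverse=True)
extra_care_need = by_need[:2]

print([p[0] for p in extra_care_cost])  # cost proxy selects only white patients
print([p[0] for p in extra_care_need])  # need-based selection is balanced
```

Under these toy numbers, the cost-based ranking selects two white patients, while the need-based ranking selects one patient from each group, illustrating how a facially race-neutral proxy can reproduce an underlying spending disparity.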

This case highlights the importance of recognizing biases in AI and mitigating them from the outset to prevent discrimination (based on, for example, race, gender, age or disability). Biases may be present not only in the algorithm itself but also, for example, in the data used to train it. Many other types of bias, such as contextual bias, should also be considered. Stakeholders, particularly AI programmers, should apply "ethics by design" and mitigate biases from the outset when developing a new AI technology for health.
Original language: English
Title of host publication: Ethics and Governance of Artificial Intelligence for Health: WHO Guidance
Number of pages: 1
Place of publication: Geneva
Publisher: World Health Organization
Publication date: 29 Jun 2021
Pages: 54
ISBN (Print): 978-92-4-002921-7
ISBN (Electronic): 978-92-4-002920-0
Publication status: Published - 29 Jun 2021