Deny, dismiss and downplay: developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI
Research output: Contribution to journal › Journal article › Research › peer-review
Standard
Deny, dismiss and downplay: developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI. / Duke, Shaul A.
In: Ethics and Information Technology, Vol. 24, No. 1, 1, 03.2022.
RIS
TY - JOUR
T1 - Deny, dismiss and downplay
T2 - developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI
AU - Duke, Shaul A.
N1 - Funding Information: Author would like to thank David S. Jones, Joost van Loon, Klaus Hoeyer, Zeev Rosenhek, Amy Fairchild, Dani Filc, and the two anonymous reviewers for their helpful comments on earlier drafts of this article. Special thanks to Lauren Duke for her valuable insights and assistance. Publisher Copyright: © 2022, The Author(s), under exclusive licence to Springer Nature B.V.
PY - 2022/3
Y1 - 2022/3
N2 - Developers are often the engine behind the creation and implementation of new technologies, including in the artificial intelligence surge that is currently underway. In many cases these new technologies introduce significant risk to affected stakeholders; risks that can be reduced and mitigated by such a dominant party. This is fully recognized by texts that analyze risks in the current AI transformation, which suggest voluntary adoption of ethical standards and imposing ethical standards via regulation and oversight as tools to compel developers to reduce such risks. However, what these texts usually sidestep is the question of how aware developers are of the risks they are creating with these new AI technologies, and what their attitudes are towards such risks. This paper seeks to rectify this gap in research by analyzing an ongoing case study. Focusing on six Israeli AI startups in the field of radiology, I carry out a content analysis of their online material in order to examine these companies’ stances towards the potential threat their automated tools pose to patient safety and to the work-standing of healthcare professionals. Results show that these developers are aware of the risks their AI products pose, but tend to deny their own role in the technological transformation and dismiss or downplay the risks to stakeholders. I conclude by tying these findings back to current risk-reduction recommendations with regard to advanced AI technologies, and suggest which of them hold more promise in light of developers’ attitudes.
KW - Affected stakeholders
KW - Artificial intelligence
KW - Automation
KW - Developers
KW - Healthcare
KW - Risk
U2 - 10.1007/s10676-022-09627-0
DO - 10.1007/s10676-022-09627-0
M3 - Journal article
AN - SCOPUS:85124014204
VL - 24
JO - Ethics and Information Technology
JF - Ethics and Information Technology
SN - 1388-1957
IS - 1
M1 - 1
ER -