Transparency of machine-learning in healthcare: The GDPR & European health law

Research output: Contribution to journal › Journal article › Research › peer-review

Machine-learning (‘ML’) models are powerful tools which can support personalised clinical judgments, as well as patients’ choices about their healthcare. Concern has been raised, however, as to their ‘black box’ nature, in which calculations are so complex they are difficult to understand and independently verify. In considering the use of ML in healthcare, we divide the question of transparency into three different scenarios:
1) Solely automated decisions. We suggest these will be unusual in healthcare, as Article 22(4) of the General Data Protection Regulation ('GDPR') presents a high bar. Where solely automated decisions are made (e.g. for inpatient triage), however, data subjects will have a right to 'meaningful information' about the logic involved.

2) Clinical decisions. These are decisions made ultimately by clinicians, such as a diagnosis, and the standard of transparency under the GDPR is lower due to this human mediation.

3) Patient decisions. Decisions about treatment are ultimately taken by the patient or their representative, albeit in dialogue with clinicians. Here, the patient will require a personalised level of medical information, depending on the severity of the risk and how much they wish to know.

In the final category, decisions made by patients, we suggest European healthcare law sets a more personalised standard of information than the GDPR. Clinical information must be tailored to the individual patient according to their needs and priorities; there is no monolithic 'explanation' of risk under healthcare law. When giving advice based (even partly) on an ML model, clinicians must have a sufficient grasp of the medically relevant factors involved in the model's output to offer patients this personalised level of medical information. We use the UK, Ireland, Denmark, Norway and Sweden as examples of European health law jurisdictions which require this personalised transparency to support patients' rights to make informed choices. This adds to the argument for post-hoc rationale explanations of ML to support healthcare decisions in all three scenarios.
Original language: English
Journal: Computer Law & Security Review
Volume: 43
Number of pages: 14
ISSN: 0267-3649
Publication status: Published - 2021
