Should people have a right not to be subjected to AI profiling based on publicly available data? A comment on Ploug

Research output: Contribution to journal › Comment/debate › Research › peer-review

Documents

  • Full text: Final published version, 551 KB, PDF document

Several studies have documented that, when presented with data from social media platforms, machine learning (ML) models can make accurate predictions about users, e.g., about whether they are likely to suffer from health-related conditions such as depression or other mental disorders, or to be at risk of suicide. In a recent article, Ploug (Philos Technol 36:14, 2023) defends a right not to be subjected to AI profiling based on publicly available data. In this comment, I raise some questions in relation to Ploug’s argument that I think deserve further discussion.

Original language: English
Article number: 38
Journal: Philosophy and Technology
Volume: 36
Number of pages: 5
ISSN: 2210-5433
DOIs
Publication status: Published - 2023

Bibliographical note

Publisher Copyright:
© 2023, The Author(s).

    Research areas

  • AI profiling, Privacy, Public data, Rights

Number of downloads is based on statistics from Google Scholar and www.ku.dk


ID: 352969905