The Unbearable Likeness of Being: The Challenge of Artificial Intelligence towards the Individualist Orientation of International Human Rights Law

Research output: Contribution to conference › Conference abstract for conference › Research › peer-review

The increasing use of emerging technologies such as artificial intelligence has raised discrete human rights issues concerning the rights to privacy, non-discrimination, freedom of expression and data protection. Less explored, however, are the ways in which artificial intelligence/machine learning (‘AI/ML’) systems are challenging the very core conceptions that sustain the edifice of the contemporary human rights framework. Policy makers and human rights practitioners proceed from an assumption of the sufficiency of the human rights regime to tackle such new challenges, operating within a ‘normative equivalency’ paradigm that claims to be able to accommodate the novel modalities and harms brought forth by emerging technologies such as AI/ML. This paper highlights, however, that one key element of the human rights conceptual framework – its individualist orientation of rights protection – is being challenged.

To set the stage, the paper adopts a socially situated understanding of human rights – acknowledging the socially embedded nature of individuals within societies. The paper argues that social embeddedness was itself an implicit assumption within the rights found in the 1948 Universal Declaration of Human Rights. This conception places individuals in relations with others in communities, situated within organized societies with corresponding political and governance institutions positioned to protect these sets of rights. The ubiquity of emerging technologies such as AI/ML systems appears against this backdrop, whereby individuals are increasingly being read, modulated and constructed by such systems. In turn, AI/ML systems promise faster, better, cheaper optimisation of goals and performance metrics across a variety of diverse sectors.

This paper has two objectives. First, it claims that AI/ML systems are challenging the individualist orientation of the human rights framework through a process of structural atomization of individuals in ways that are fundamentally misaligned with international human rights law. To begin with, data points that group, infer and construct individuals through their likeness instrumentally atomize individuals as means to ends not of their own choosing, through AI/ML systems in which the situated individual has little or no say. The algorithmic modulations of AI/ML systems can be opaque, hidden and unknown. The atomization of individuals also affects the experiential comparators necessary to claim a deviation from rights standards.

Second, individuals risk instrumentalization through optimization, wherein the efficiency-driven framing of AI/ML tends to encourage problem solving in ways that respond to computational tractability. Thus, it is not simply the case that individuals are instrumentalized by AI/ML systems; rather, they cannot help but be instrumentalized when the objective to be gained is one of optimisation. Representation of social and physical phenomena is necessarily flattened, and the messiness of social and moral contestation is replaced with questions of fair data representation and fairness of AI/ML, compacting incommensurable values into computational optimisations. Rights, however, are not (straightforwardly) about optimisations.

Third, contextual atomization through the AI/ML-mediated shaping of epistemic and enabling conditions can threaten the conditions antecedent for the socially situated exercise of moral agency and, with it, human rights. Such precarity exposes the inadequacy of human rights responses that focus upon harms through an exogenous (perceivable and observable) typology instead of upon structural conditions as enablers of harm. Further, the harm typology accommodated within the human rights framework admits an implicit directionality and is thus challenged by the multi-directionality of potential harms and the multi-stability of technologies such as AI/ML. However, even this account is inadequate. The process of mediation between individuals and emerging technologies such as AI is a process of co-creation – individuals can be intertwined as co-authors of resulting harms.

While the diagnosis of the problem space is the main focus of the paper, the second objective briefly addresses possible ways to respond to the concerns raised. To this end, the paper briefly examines three exemplars of extensions of the solution space and their normative justifications – the extension of the individual, the extension of rights, and finally the extension of the governance space. The paper concludes that an extension of the governance space is best placed to address this challenge.
Original language: English
Publication date: 2022
Number of pages: 2
Publication status: Published - 2022
Event: Computational Law on Edge - Brussels, Belgium
Duration: 3 Nov 2022 - 4 Nov 2022


Conference: Computational Law on Edge

ID: 338056187