The praxeology of bypassing ocular-centric spatial relations: How visually impaired people use AI mobile object recognition when shopping

Research output: Contribution to conference › Conference abstract for conference › Research › peer-review


This paper explores physical shopping conducted by visually impaired people using AI technology in their smartphones. Based on computer vision and object recognition, smartphones can be used to scan objects and provide their users with verbal information about them. However, mobile scanning involves highly complex embodied actions. A perspicuous setting for studying the ordered complexity of object scanning is when visually impaired people go shopping, using their phones with object-recognition functionalities to scan grocery products. Whereas sighted people can adjust a handheld camera to an object using their vision, thus unnoticeably accomplishing a scan, visually impaired people observably orient to the actions required for the accomplishment of a successful scan. Studying visually impaired people therefore enables us to establish new understandings of the spatial relation between the body, the object and the environment, contributing new insights into the use of new AI technology and the interactional and situational practices involved. The paper is based on data from the BlindTech project, an ongoing video-ethnographic study of visually impaired people's daily lives and usage of new technologies in Denmark. Data is analyzed using ethnomethodological multimodal conversation analysis (Streeck et al., 2011). The analysis provides evidence for what we suggest calling the praxeology of bypassing ocular-centric spatial relations: a study of how blind people navigate in a visually dominant world. Cognitive aspects of spatial relations have been described extensively in neuropsychological research (Postma & Ham, 2016). However, spatial relations, i.e. the relations between sensory systems and objects in space, are first of all direct, non-representational and action-based (Gibson, 2002; Briscoe & Grush, 2020). We show how establishing an object-space relation is an observable situated achievement.
Findings in the paper relate to embodied actions: a) holding the phone correctly in the hand, b) finding the correct angle of the camera, c) finding the correct distance to the object, d) holding the object in the correct angle and position, and e) understanding the intrinsic nature of the object. We show just how visually impaired people accomplish these in action as locally produced phenomena of order.
Original language: English
Publication date: 2021
Publication status: Published - 2021
Event: Digitalizing Social Practices: Changes and Consequences - SDU / Online, Odense / Online, Denmark
Duration: 23 Feb 2021 - 24 Feb 2021



