Big Data and Multimodal Communication: A Perspective View

Research output: Contribution to journal › Journal article › peer-reviewed

Costanza Navarretta, Lucretia Oemig

Humans communicate face-to-face through at least two modalities: the auditory modality, speech, and the visual modality, gestures, which comprise e.g. gaze, facial expressions, head movements, and hand gestures. The relation between speech and gestures is complex and depends in part on factors such as culture, the communicative situation, and the interlocutors and their relationship. Investigating these factors in real data is vital for studying multimodal communication and for building models to implement multimodal communicative interfaces able to interact naturally with individuals of different ages, cultures, and needs. In this paper, we discuss to what extent big data “in the wild”, which are growing explosively on the internet, are useful for this purpose, also in light of legal aspects of the use of personal data, including multimodal data downloaded from social media.
Original language: English
Journal: Intelligent Systems Reference Library
Volume: 159
Pages (from-to): 167-184
ISSN: 1868-4394
Publication status: Published - 2019