Speech and Gestures: Toward their Inclusion in CogInfoCom Systems
Activity: Talk or presentation types › Lecture and oral contribution
Costanza Navarretta - Invited speaker
- Department of Nordic Studies and Linguistics
Face-to-face communication is multimodal, involving at least two modalities: the auditory (speech) and the visual (gestures). Speech and gestures are temporally and semantically related on many levels, and co-speech gestures, e.g. head movements, facial expressions, and arm and hand gestures, are co-expressive but not redundant. Research on speech and gestures is therefore central not only to understanding the cognitive mechanisms behind communication, but also to integrating human language into CogInfoCom systems and advanced human-centred ICT. In this keynote, I will present studies at the Centre for Language Technology that address multimodal communication from a natural language processing point of view, with a focus on the annotation and automatic processing of multimodal corpora, in this context video- and audio-recorded monologues and dialogues. Examples of multimodal phenomena accounted for include feedback, reference to abstract and concrete objects, expression of verb semantics, and the role of speech pauses and gestures in audience response.
24 Sep 2020
Title: IEEE INTERNATIONAL CONFERENCE ON COGNITIVE INFOCOMMUNICATIONS
Date: 23/09/2020 → 25/09/2020