Towards a Methodology Supporting Semiautomatic Annotation of Head Movements in Video-recorded Conversations

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

We present a method to support the annotation of head movements in video-recorded conversations. Head movement segments from annotated multimodal data are used to train a model to detect head movements in unseen data. The resulting predicted movement sequences are uploaded to the ANVIL tool for post-annotation editing. The automatically identified head movements and the original annotations are then compared to assess their overlap. This analysis showed that movement onsets were more easily detected than offsets, and revealed a number of recurring mismatch patterns between original annotations and model predictions that post-annotation guidelines could address in general terms.
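The comparison step described above can be illustrated with a minimal sketch (not the authors' code): given gold and predicted head-movement segments as `(onset, offset)` pairs in seconds, it computes the total overlapping duration and counts how many gold onsets and offsets are matched by some prediction within a tolerance. The segment values and the 0.2 s tolerance are illustrative assumptions, not figures from the paper.

```python
# Hypothetical sketch of comparing predicted head-movement segments
# against gold annotations; segments are (onset, offset) pairs in seconds.

def interval_overlap(a, b):
    """Duration (in seconds) shared by two (onset, offset) segments."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def boundary_matches(gold, predicted, tolerance=0.2):
    """Count gold onsets/offsets matched by any prediction within `tolerance` seconds."""
    onsets = sum(
        1 for g in gold
        if any(abs(g[0] - p[0]) <= tolerance for p in predicted)
    )
    offsets = sum(
        1 for g in gold
        if any(abs(g[1] - p[1]) <= tolerance for p in predicted)
    )
    return onsets, offsets

# Illustrative data: predictions align well with onsets, less well with offsets.
gold = [(1.0, 2.5), (4.0, 5.0)]
predicted = [(1.1, 2.0), (4.1, 5.6)]

shared = sum(interval_overlap(g, p) for g in gold for p in predicted)
onsets, offsets = boundary_matches(gold, predicted)
```

In this toy example both gold onsets are matched within tolerance while neither offset is, mirroring the paper's observation that onsets are easier to detect than offsets.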
Original language: English
Title of host publication: Proceedings of The Joint 15th Linguistic Annotation Workshop (LAW) and 3rd Designing Meaning Representations (DMR) Workshop
Publisher: Association for Computational Linguistics
Publication date: 2021
Pages: 151-159
Publication status: Published - 2021
