Learning Language-Independent Representations of Verbs and Adjectives from Multimodal Retrieval

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

This paper presents a simple modification to previous work on learning cross-lingual, grounded word representations from image-word pairs. Unlike prior approaches, our method is robust across different parts of speech; for example, it can find the translation of the adjective 'social' relying only on image features associated with its translation candidates. The method does not rely on black-box image search engines or any direct cross-lingual supervision. We evaluate our approach on English-German and English-Japanese word alignment, as well as on existing English-German bilingual dictionary induction datasets.
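The core idea of grounded bilingual dictionary induction can be illustrated with a minimal sketch (this is not the paper's actual code; the toy feature vectors, function names, and nearest-neighbour retrieval below are illustrative assumptions): each word is represented by the mean of the image feature vectors associated with it, and a source word is translated by retrieving the target-language candidate whose image-grounded vector is most similar.

```python
# Hedged sketch, NOT the paper's implementation: cross-lingual word
# alignment via shared image features. Each word is represented by the
# mean of the feature vectors of its associated images; a source word is
# "translated" by picking the target-language candidate whose grounded
# vector has the highest cosine similarity. All vectors are toy values.
from math import sqrt

def mean_vector(vectors):
    """Average a list of equal-length image feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def induce_translation(src_images, tgt_lexicon_images):
    """Return the target word whose image-grounded vector best matches."""
    src_vec = mean_vector(src_images)
    return max(tgt_lexicon_images,
               key=lambda w: cosine(src_vec, mean_vector(tgt_lexicon_images[w])))

# Toy image features for the English adjective 'social' and two German candidates.
english_social = [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]]
german_candidates = {
    "sozial": [[0.85, 0.15, 0.15], [0.9, 0.1, 0.2]],
    "rot":    [[0.1, 0.9, 0.8], [0.2, 0.8, 0.9]],
}
print(induce_translation(english_social, german_candidates))  # -> sozial
```

Because the retrieval operates purely on image features, no image search engine or cross-lingual supervision is needed; the images themselves act as the language-independent pivot.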

Original language: English
Title of host publication: Proceedings - 14th International Conference on Signal-Image Technology and Internet Based Systems, SITIS
Number of pages: 8
Publisher: IEEE
Publication date: 2019
Pages: 427-434
ISBN (Electronic): 978-1-5386-9385-8
DOIs
Publication status: Published - 2019
Event: 14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018 - Las Palmas de Gran Canaria, Spain
Duration: 26 Nov 2018 - 29 Nov 2018

Conference

Conference: 14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018
Country: Spain
City: Las Palmas de Gran Canaria
Period: 26/11/2018 - 29/11/2018

    Research areas

  • Computer vision, Cross-lingual learning, Distributional semantics, Multi-modal retrieval, Natural language processing

ID: 223253166