Understanding the Effect of Textual Adversaries in Multimodal Machine Translation
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Standard
Dutta Chowdhury, Koel; Elliott, Desmond. Understanding the Effect of Textual Adversaries in Multimodal Machine Translation. In Proceedings of the Beyond Vision and LANguage: inTEgrating Real-world kNowledge (LANTERN). Hong Kong, China: Association for Computational Linguistics, 2019. p. 35-40.
Harvard
Dutta Chowdhury, K & Elliott, D 2019, 'Understanding the Effect of Textual Adversaries in Multimodal Machine Translation', in Proceedings of the Beyond Vision and LANguage: inTEgrating Real-world kNowledge (LANTERN), Association for Computational Linguistics, Hong Kong, China, pp. 35-40, First Workshop Beyond Vision and LANguage: inTEgrating Real-world kNowledge, Hong Kong, 03/11/2019. https://doi.org/10.18653/v1/D19-6406
RIS
TY - GEN
T1 - Understanding the Effect of Textual Adversaries in Multimodal Machine Translation
AU - Dutta Chowdhury, Koel
AU - Elliott, Desmond
PY - 2019/11/1
Y1 - 2019/11/1
N2 - It is assumed that multimodal machine translation systems are better than text-only systems at translating phrases that have a direct correspondence in the image. This assumption has been challenged in experiments demonstrating that state-of-the-art multimodal systems perform equally well in the presence of randomly selected images, but, more recently, it has been shown that masking entities from the source language sentence during training can help to overcome this problem. In this paper, we conduct experiments with both visual and textual adversaries in order to understand the role of incorrect textual inputs to such systems. Our results show that when the source language sentence contains mistakes, multimodal translation systems do not leverage the additional visual signal to produce the correct translation. We also find that the degradation of translation performance caused by textual adversaries is significantly higher than by visual adversaries.
U2 - 10.18653/v1/D19-6406
DO - 10.18653/v1/D19-6406
M3 - Article in proceedings
SP - 35
EP - 40
BT - Proceedings of the Beyond Vision and LANguage: inTEgrating Real-world kNowledge (LANTERN)
PB - Association for Computational Linguistics
CY - Hong Kong, China
T2 - First Workshop Beyond Vision and LANguage: inTEgrating Real-world kNowledge
Y2 - 3 November 2019
ER -