Parameter sharing between dependency parsers for related languages

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Previous work has suggested that parameter sharing between transition-based neural dependency parsers for related languages can lead to better performance, but there is no consensus on which parameters to share. We present an evaluation of 27 different parameter sharing strategies across 10 languages, representing five pairs of related languages, each pair from a different language family. We find that sharing transition classifier parameters always helps, whereas the usefulness of sharing word and/or character LSTM parameters varies. Based on this result, we propose an architecture where the transition classifier is shared, and the sharing of word and character parameters is controlled by a parameter that can be tuned on validation data. This model is linguistically motivated and obtains significant improvements over a monolingually trained baseline. We also find that sharing transition classifier parameters helps when training a parser on unrelated language pairs, but that sharing too many parameters in that case does not.
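To make the sharing scheme concrete, below is a minimal PyTorch sketch of the idea described in the abstract; it is not the authors' implementation, and the names (SharedTransitionParser, share_word_lstm) are hypothetical. The transition classifier is always shared across the language pair, while a constructor flag controls whether the word-level BiLSTM encoder is shared or language-specific; in the proposed architecture this choice would be tuned on validation data.

```python
# Hypothetical sketch (not the authors' code): a bilingual parser skeleton in which
# the transition classifier is always shared between two related languages, while
# word-level LSTM sharing is toggled by a flag tuned on validation data.
import torch
import torch.nn as nn


class SharedTransitionParser(nn.Module):
    def __init__(self, vocab_sizes, emb_dim=100, hidden_dim=200,
                 n_transitions=80, share_word_lstm=False):
        super().__init__()
        # One word embedding table per language (sizes are illustrative).
        self.embeddings = nn.ModuleDict({
            lang: nn.Embedding(size, emb_dim) for lang, size in vocab_sizes.items()
        })
        if share_word_lstm:
            # A single BiLSTM encoder whose parameters are shared by both languages.
            shared = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
            self.encoders = nn.ModuleDict({lang: shared for lang in vocab_sizes})
        else:
            # Separate, language-specific BiLSTM encoders.
            self.encoders = nn.ModuleDict({
                lang: nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
                for lang in vocab_sizes
            })
        # The transition classifier (an MLP) is always shared across languages.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, n_transitions),
        )

    def forward(self, token_ids, lang):
        emb = self.embeddings[lang](token_ids)   # (batch, seq, emb_dim)
        states, _ = self.encoders[lang](emb)     # (batch, seq, 2 * hidden_dim)
        # Simplification: a real transition-based parser scores parser
        # configurations; here we just score transitions per token.
        return self.classifier(states)


# Usage: choose share_word_lstm per language pair based on validation performance.
model = SharedTransitionParser({"da": 5000, "sv": 5000}, share_word_lstm=True)
scores = model(torch.randint(0, 5000, (2, 7)), lang="da")
```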
Original language: English
Title of host publication: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Publisher: Association for Computational Linguistics
Publication date: 2020
Pages: 4992-4997
Publication status: Published - 2020
Event: 2018 Conference on Empirical Methods in Natural Language Processing - Brussels, Belgium
Duration: 31 Oct 2018 - 4 Nov 2018

Conference

Conference: 2018 Conference on Empirical Methods in Natural Language Processing
Country: Belgium
City: Brussels
Period: 31/10/2018 - 04/11/2018


ID: 214507219