Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Documents
- Fulltext: Final published version, 329 KB, PDF document
Large multilingual pretrained language models such as mBERT and XLM-RoBERTa have been found to be surprisingly effective for cross-lingual transfer of syntactic parsing models (Wu and Dredze, 2019), but only between related languages. However, when parsing truly low-resource languages, the target language is rarely related to the available source (training) languages. To close this gap, we adopt a method from multi-task learning that relies on automated curriculum learning to dynamically optimize for parsing performance on outlier languages. We show that this approach is significantly better than uniform and size-proportional sampling in the zero-shot setting.
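The core idea in the abstract, sampling training languages to optimize worst-case rather than average performance, can be sketched as below. This is a minimal illustration assuming a Group-DRO-style exponentiated-gradient update over per-language losses; the class name `WorstCaseAwareSampler`, the step size `eta`, and the loss interface are illustrative assumptions, not the authors' released code.

```python
import math
import random


class WorstCaseAwareSampler:
    """Worst-case aware curriculum over training languages (sketch).

    Languages whose recent loss is high get sampled more often, so
    training keeps optimizing for the current worst-case (outlier)
    language rather than for the average.
    """

    def __init__(self, languages, eta=0.1, seed=0):
        self.languages = list(languages)
        self.eta = eta  # multiplicative step size (assumed hyperparameter)
        self.log_w = [0.0] * len(self.languages)
        self.rng = random.Random(seed)

    def _probs(self):
        # Softmax over log-weights gives the sampling distribution.
        m = max(self.log_w)
        exp_w = [math.exp(w - m) for w in self.log_w]
        z = sum(exp_w)
        return [v / z for v in exp_w]

    def sample(self):
        # Draw the language index for the next training batch.
        return self.rng.choices(range(len(self.languages)), weights=self._probs())[0]

    def update(self, lang_idx, loss):
        # Exponentiated-gradient update: high loss -> higher weight,
        # so the curriculum chases the hardest language.
        self.log_w[lang_idx] += self.eta * loss


# Illustrative training loop; train_batch is a hypothetical stand-in
# for the parser's actual per-batch training step:
#
# sampler = WorstCaseAwareSampler(["ar", "en", "hi", "zh"])
# for step in range(10_000):
#     i = sampler.sample()
#     loss = train_batch(language=sampler.languages[i])
#     sampler.update(i, loss)
```

Under this scheme, uniform and size-proportional sampling correspond to fixed distributions, whereas the worst-case aware curriculum adapts the distribution online as per-language losses change.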
Original language | English
---|---
Title of host publication | ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Short Papers)
Editors | Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Publisher | Association for Computational Linguistics (ACL)
Publication date | 2022
Pages | 578-587
ISBN (Electronic) | 9781955917223
DOIs |
Publication status | Published - 2022
Event | 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022 - Dublin, Ireland. Duration: 22 May 2022 → 27 May 2022
Conference
Conference | 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022 |
---|---|
Country | Ireland
City | Dublin
Period | 22/05/2022 → 27/05/2022
Sponsor | Amazon Science, Bloomberg Engineering, Google Research, Liveperson, Meta, et al.
Bibliographical note
Publisher Copyright:
© 2022 Association for Computational Linguistics.
ID: 341490429