Back to the Future: Sequential Alignment of Text Representations

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Back to the Future: Sequential Alignment of Text Representations. / Bjerva, Johannes; Kouw, Wouter M.; Augenstein, Isabelle.

Proceedings of the 34th AAAI Conference on Artificial Intelligence. AAAI Press, 2020.


Harvard

Bjerva, J, Kouw, WM & Augenstein, I 2020, Back to the Future: Sequential Alignment of Text Representations. in Proceedings of the 34th AAAI Conference on Artificial Intelligence. AAAI Press, 34th AAAI Conference on Artificial Intelligence, New York, United States, 07/02/2020.

APA

Bjerva, J., Kouw, W. M., & Augenstein, I. (2020). Back to the Future: Sequential Alignment of Text Representations. In Proceedings of the 34th AAAI Conference on Artificial Intelligence. AAAI Press.

Vancouver

Bjerva J, Kouw WM, Augenstein I. Back to the Future: Sequential Alignment of Text Representations. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence. AAAI Press; 2020.

Author

Bjerva, Johannes; Kouw, Wouter M.; Augenstein, Isabelle. / Back to the Future: Sequential Alignment of Text Representations. Proceedings of the 34th AAAI Conference on Artificial Intelligence. AAAI Press, 2020.

Bibtex

@inproceedings{66bcf25eaed64ab6bbed24d0eb9ef515,
title = "Back to the Future: Sequential Alignment of Text Representations",
abstract = "Language evolves over time in many ways relevant to natural language processing tasks. For example, recent occurrences of tokens 'BERT' and 'ELMO' in publications refer to neural network architectures rather than persons. This type of temporal signal is typically overlooked, but is important if one aims to deploy a machine learning model over an extended period of time. In particular, language evolution causes data drift between time-steps in sequential decision-making tasks. Examples of such tasks include prediction of paper acceptance for yearly conferences (regular intervals) or author stance prediction for rumours on Twitter (irregular intervals). Inspired by successes in computer vision, we tackle data drift by sequentially aligning learned representations. We evaluate on three challenging tasks varying in terms of time-scales, linguistic units, and domains. These tasks show our method outperforming several strong baselines, including using all available data. We argue that, due to its low computational expense, sequential alignment is a practical solution to dealing with language evolution. ",
author = "Johannes Bjerva and Kouw, {Wouter M.} and Isabelle Augenstein",
year = "2020",
language = "English",
booktitle = "Proceedings of the 34th AAAI Conference on Artificial Intelligence",
publisher = "AAAI Press",
note = "34th AAAI Conference on Artificial Intelligence, AAAI-20 ; Conference date: 07-02-2020 Through 12-02-2020",
}

RIS

TY - GEN

T1 - Back to the Future: Sequential Alignment of Text Representations

T2 - 34th AAAI Conference on Artificial Intelligence

AU - Bjerva, Johannes

AU - Kouw, Wouter M.

AU - Augenstein, Isabelle

PY - 2020

Y1 - 2020

N2 - Language evolves over time in many ways relevant to natural language processing tasks. For example, recent occurrences of tokens 'BERT' and 'ELMO' in publications refer to neural network architectures rather than persons. This type of temporal signal is typically overlooked, but is important if one aims to deploy a machine learning model over an extended period of time. In particular, language evolution causes data drift between time-steps in sequential decision-making tasks. Examples of such tasks include prediction of paper acceptance for yearly conferences (regular intervals) or author stance prediction for rumours on Twitter (irregular intervals). Inspired by successes in computer vision, we tackle data drift by sequentially aligning learned representations. We evaluate on three challenging tasks varying in terms of time-scales, linguistic units, and domains. These tasks show our method outperforming several strong baselines, including using all available data. We argue that, due to its low computational expense, sequential alignment is a practical solution to dealing with language evolution.

AB - Language evolves over time in many ways relevant to natural language processing tasks. For example, recent occurrences of tokens 'BERT' and 'ELMO' in publications refer to neural network architectures rather than persons. This type of temporal signal is typically overlooked, but is important if one aims to deploy a machine learning model over an extended period of time. In particular, language evolution causes data drift between time-steps in sequential decision-making tasks. Examples of such tasks include prediction of paper acceptance for yearly conferences (regular intervals) or author stance prediction for rumours on Twitter (irregular intervals). Inspired by successes in computer vision, we tackle data drift by sequentially aligning learned representations. We evaluate on three challenging tasks varying in terms of time-scales, linguistic units, and domains. These tasks show our method outperforming several strong baselines, including using all available data. We argue that, due to its low computational expense, sequential alignment is a practical solution to dealing with language evolution.

M3 - Article in proceedings

BT - Proceedings of the 34th AAAI Conference on Artificial Intelligence

PB - AAAI Press

Y2 - 7 February 2020 through 12 February 2020

ER -

ID: 255053434