Processing Long Legal Documents with Pre-trained Transformers: Modding LegalBERT and Longformer
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Standard
Processing Long Legal Documents with Pre-trained Transformers: Modding LegalBERT and Longformer. / Mamakas, Dimitris; Tsotsi, Petros; Androutsopoulos, Ion; Chalkidis, Ilias.
NLLP 2022 - Natural Legal Language Processing Workshop 2022, Proceedings of the Workshop. Association for Computational Linguistics (ACL), 2022. p. 130-142.
RIS
TY - GEN
T1 - Processing Long Legal Documents with Pre-trained Transformers
T2 - 4th Natural Legal Language Processing Workshop, NLLP 2022, co-located with the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022
AU - Mamakas, Dimitris
AU - Tsotsi, Petros
AU - Androutsopoulos, Ion
AU - Chalkidis, Ilias
N1 - Publisher Copyright: © 2022 Association for Computational Linguistics.
PY - 2022
Y1 - 2022
N2 - Pre-trained Transformers currently dominate most NLP tasks. They impose, however, limits on the maximum input length (512 sub-words in BERT), which are too restrictive in the legal domain. Even sparse-attention models, such as Longformer and BigBird, which increase the maximum input length to 4,096 sub-words, severely truncate texts in three of the six datasets of LexGLUE. Simpler linear classifiers with TF-IDF features can handle texts of any length, require far fewer resources to train and deploy, but are usually outperformed by pre-trained Transformers. We explore two directions to cope with long legal texts: (i) modifying a Longformer warm-started from LegalBERT to handle even longer texts (up to 8,192 sub-words), and (ii) modifying LegalBERT to use TF-IDF representations. The first approach is the best in terms of performance, surpassing a hierarchical version of LegalBERT, which was the previous state of the art in LexGLUE. The second approach leads to computationally more efficient models at the expense of lower performance, but the resulting models still outperform a linear SVM with TF-IDF features overall in long legal document classification.
AB - Pre-trained Transformers currently dominate most NLP tasks. They impose, however, limits on the maximum input length (512 sub-words in BERT), which are too restrictive in the legal domain. Even sparse-attention models, such as Longformer and BigBird, which increase the maximum input length to 4,096 sub-words, severely truncate texts in three of the six datasets of LexGLUE. Simpler linear classifiers with TF-IDF features can handle texts of any length, require far fewer resources to train and deploy, but are usually outperformed by pre-trained Transformers. We explore two directions to cope with long legal texts: (i) modifying a Longformer warm-started from LegalBERT to handle even longer texts (up to 8,192 sub-words), and (ii) modifying LegalBERT to use TF-IDF representations. The first approach is the best in terms of performance, surpassing a hierarchical version of LegalBERT, which was the previous state of the art in LexGLUE. The second approach leads to computationally more efficient models at the expense of lower performance, but the resulting models still outperform a linear SVM with TF-IDF features overall in long legal document classification.
UR - http://www.scopus.com/inward/record.url?scp=85154613920&partnerID=8YFLogxK
M3 - Article in proceedings
AN - SCOPUS:85154613920
SP - 130
EP - 142
BT - NLLP 2022 - Natural Legal Language Processing Workshop 2022, Proceedings of the Workshop
PB - Association for Computational Linguistics (ACL)
Y2 - 8 December 2022
ER -
ID: 358725705
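The abstract above describes warm-starting a Longformer from LegalBERT and stretching the maximum input length to 8,192 sub-words. The snippet below is a hypothetical sketch of the warm-start step only, not the authors' released code: it tiles LegalBERT's 512 learned position embeddings to cover 8,192 positions, assuming the public nlpaueb/legal-bert-base-uncased checkpoint and the Hugging Face transformers API. The paper's full conversion also replaces dense self-attention with Longformer's sliding-window attention and continues pre-training, which is not shown here.

```python
# Hypothetical sketch (not the authors' released code): extend LegalBERT's
# learned position embeddings from 512 to 8,192 positions by tiling them,
# a common warm-start step when converting a BERT-style checkpoint into a
# longer-context (Longformer-style) model. The full method in the paper also
# swaps dense self-attention for sliding-window attention, omitted here.
import torch
from transformers import AutoModel, AutoTokenizer

MAX_POS = 8192  # target maximum input length in sub-words (paper's upper bound)

model = AutoModel.from_pretrained("nlpaueb/legal-bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained(
    "nlpaueb/legal-bert-base-uncased", model_max_length=MAX_POS
)

old_embed = model.embeddings.position_embeddings.weight  # shape (512, 768)
old_max_pos, hidden = old_embed.shape

with torch.no_grad():
    # Repeat the 512 trained position vectors 16 times to cover 8,192 positions,
    # so the longer model starts from warm position representations.
    new_embed = old_embed.new_empty(MAX_POS, hidden)
    for start in range(0, MAX_POS, old_max_pos):
        new_embed[start:start + old_max_pos] = old_embed

# Install the enlarged embedding table and refresh the cached index buffers.
model.embeddings.position_embeddings = torch.nn.Embedding.from_pretrained(
    new_embed, freeze=False
)
model.embeddings.register_buffer(
    "position_ids", torch.arange(MAX_POS).unsqueeze(0), persistent=False
)
model.embeddings.register_buffer(
    "token_type_ids", torch.zeros(1, MAX_POS, dtype=torch.long), persistent=False
)
model.config.max_position_embeddings = MAX_POS
```

Tiling the existing embeddings, rather than randomly initializing the extra positions, follows the usual recipe for extending BERT/RoBERTa checkpoints to longer contexts and lets the extended model keep LegalBERT's learned behaviour within each 512-token window.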