BEND: Benchmarking DNA Language Models on Biologically Meaningful Tasks
Publication: Conference contribution › Paper › Research
Documents
- Full text
Accepted manuscript, 1.41 MB, PDF document
The genome sequence contains the blueprint for governing cellular processes. While the availability of genomes has vastly increased over the last decades, experimental annotation of the various functional, non-coding and regulatory elements encoded in the DNA sequence remains both expensive and challenging. This has sparked interest in unsupervised language modeling of genomic DNA, a paradigm that has seen great success for protein sequence data. Although various DNA language models have been proposed, evaluation tasks often differ between individual works and might not fully recapitulate the fundamental challenges of genome annotation, including the length, scale and sparsity of the data. In this study, we introduce BEND, a Benchmark for DNA language models, featuring a collection of realistic and biologically meaningful downstream tasks defined on the human genome. We find that embeddings from current DNA LMs can approach the performance of expert methods on some tasks, but only capture limited information about long-range features. BEND is available at https://github.com/frederikkemarin/BEND.
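As a minimal illustration of the kind of fixed, non-learned sequence representation that learned LM embeddings are typically compared against in annotation tasks (this sketch is not part of BEND itself; the function name and 'N'-handling convention are assumptions for the example):

```python
# One-hot encoding of a DNA sequence: a simple baseline representation.
# NOTE: illustrative sketch only, not code from the BEND benchmark.

NUCLEOTIDES = "ACGT"

def one_hot_encode(seq: str) -> list[list[int]]:
    """Encode each base as a 4-dim indicator vector.

    Ambiguous bases (e.g. 'N') map to the all-zero vector, one common
    convention among several possible ones.
    """
    index = {base: i for i, base in enumerate(NUCLEOTIDES)}
    encoding = []
    for base in seq.upper():
        row = [0, 0, 0, 0]
        pos = index.get(base)
        if pos is not None:
            row[pos] = 1
        encoding.append(row)
    return encoding

print(one_hot_encode("ACGN"))
# [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]
```

Such an encoding carries no context beyond the single base, which is precisely the gap that pretrained DNA language model embeddings aim to close.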
Original language | English |
---|---|
Publication date | 2024 |
Number of pages | 36 |
Status | Published - 2024 |
Event | 12th International Conference on Learning Representations, ICLR 2024 - Hybrid, Vienna, Austria Duration: 7 May 2024 → 11 May 2024 |
Conference
Conference | 12th International Conference on Learning Representations, ICLR 2024 |
---|---|
Country | Austria |
City | Hybrid, Vienna |
Period | 07/05/2024 → 11/05/2024 |
Bibliographical note
Funding Information:
This work was funded in part by Innovation Fund Denmark (0153-00197B), the Novo Nordisk Foundation through the MLLS Center (Basic Machine Learning Research in Life Science, NNF20OC0062606), the Pioneer Centre for AI (DNRF grant number P1), and the Danish Data Science Academy, which is funded by the Novo Nordisk Foundation (NNF21SA0069429) and VILLUM FONDEN (40516). This work was supported by the Helmholtz Association under the joint research school "Munich School for Data Science (MUDS)". We would like to acknowledge and thank Ziga Avsec, David Kelley, as well as the rest of the authors behind the Enformer model Avsec et al. (2021), for providing the set of transcription start sites used in the enhancer annotation task.
Publisher Copyright:
© 2024 12th International Conference on Learning Representations, ICLR 2024. All rights reserved.