Abstract
Information about pretraining corpora used to train the current best-performing language models is seldom discussed: commercial models rarely detail their data, and even open models are often released without accompanying training data or recipes to reproduce them. As a result, it is challenging to conduct and advance scientific research on language modeling, such as understanding how training data impacts model capabilities and limitations. To facilitate scientific research on language model pretraining, we curate and release Dolma, a three-trillion-token English corpus, built from a diverse mixture of web content, scientific papers, code, public-domain books, social media, and encyclopedic materials. We extensively document Dolma, including its design principles, details about its construction, and a summary of its contents. We present analyses and experimental results on intermediate states of Dolma to share what we have learned about important data curation practices. Finally, we open-source our data curation toolkit to enable reproduction of our work as well as support further research in large-scale data curation.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |
| Editors | Lun-Wei Ku, André F. T. Martins, Vivek Srikumar |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 15725-15788 |
| Number of pages | 64 |
| ISBN (Electronic) | 9798891760943 |
| DOIs | |
| State | Published - 2024 |
| Externally published | Yes |
| Event | 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024 - Bangkok, Thailand |
| Duration | 11 Aug 2024 → 16 Aug 2024 |
Publication series
| Name | Proceedings of the Annual Meeting of the Association for Computational Linguistics |
|---|---|
| Volume | 1 |
| ISSN (Print) | 0736-587X |
Conference
| Conference | 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024 |
|---|---|
| Country/Territory | Thailand |
| City | Bangkok |
| Period | 11/08/24 → 16/08/24 |
Bibliographical note
Publisher Copyright: © 2024 Association for Computational Linguistics.
Title: 'Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research'