Abstract
Hebrew manuscripts provide thousands of textual transmissions of post-Biblical Hebrew texts. In many cases, the text in the manuscripts is not fully decipherable, whether due to deterioration, perforation, burning, or other damage. Existing BERT models for Hebrew struggle to fill these gaps, due to the many orthographic deviations found in Hebrew manuscripts. We have pretrained a new dedicated BERT model, dubbed MsBERT (short for: Manuscript BERT), designed from the ground up to handle Hebrew manuscript text. MsBERT substantially outperforms all existing Hebrew BERT models at predicting missing words in fragmentary Hebrew manuscript transcriptions across multiple genres, as well as at differentiating between quoted passages and exegetical elaborations. We provide MsBERT for free download and unrestricted use, and we also provide an interactive, user-friendly website that allows manuscript scholars to leverage MsBERT in their scholarly work of reconstructing fragmentary Hebrew manuscripts.
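The abstract describes MsBERT as a freely downloadable BERT model for filling gaps in manuscript transcriptions. As a minimal sketch of how such a masked-language model could be queried to propose candidates for an illegible word, the snippet below uses the Hugging Face transformers fill-mask pipeline; the model identifier "dicta-il/MsBERT" and the example sentence are assumptions for illustration only, not confirmed by this record.

```python
# Sketch: proposing candidates for an illegible word in a manuscript line
# with a BERT-style masked-language model via the transformers library.
# NOTE: the hub identifier below is a placeholder assumption; substitute the
# actual published model ID for MsBERT.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dicta-il/MsBERT")

# Replace the undecipherable word with the model's mask token.
masked_line = f"ברוך אתה ה' אלהינו מלך {fill_mask.tokenizer.mask_token}"

# Print the top candidate reconstructions with their scores.
for candidate in fill_mask(masked_line, top_k=5):
    print(f"{candidate['token_str']}\t{candidate['score']:.3f}")
```

In practice a scholar would inspect the ranked candidates against the surviving letter traces in the fragment rather than accept the top prediction automatically.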
| Original language | English |
| --- | --- |
| Title of host publication | ML4AL 2024 - 1st Workshop on Machine Learning for Ancient Languages, Proceedings of the Workshop |
| Editors | John Pavlopoulos, Thea Sommerschield, Yannis Assael, Shai Gordin, Kyunghyun Cho, Marco Passarotti, Rachele Sprugnoli, Yudong Liu, Bin Li, Adam Anderson |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 13-18 |
| Number of pages | 6 |
| ISBN (Electronic) | 9798891761445 |
| State | Published - 2024 |
| Event | 1st Workshop on Machine Learning for Ancient Languages, ML4AL 2024 - Hybrid, Bangkok, Thailand. Duration: 15 Aug 2024 → … |
Publication series

| Name | ML4AL 2024 - 1st Workshop on Machine Learning for Ancient Languages, Proceedings of the Workshop |
| --- | --- |
Conference

| Conference | 1st Workshop on Machine Learning for Ancient Languages, ML4AL 2024 |
| --- | --- |
| Country/Territory | Thailand |
| City | Hybrid, Bangkok |
| Period | 15/08/24 → … |
Bibliographical note

Publisher Copyright: © 2024 Association for Computational Linguistics.