Abstract
A major consideration in multilingual language modeling is how to best represent languages with diverse vocabularies and scripts. Although contemporary text encoding methods cover most of the world's writing systems, they exhibit bias towards the high-resource languages of the Global West. As a result, texts of underrepresented languages tend to be segmented into long sequences of linguistically meaningless units. To address these disparities, we introduce a new paradigm that encodes the same information with segments of consistent size across diverse languages. Our encoding convention (MYTE) is based on morphemes, as their inventories are more balanced across languages than the characters used in previous methods. We show that MYTE produces shorter encodings for all 99 analyzed languages, with the most notable improvements for non-European languages and non-Latin scripts. This, in turn, improves multilingual LM performance and diminishes the perplexity gap across diverse languages.
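The segmentation disparity the abstract describes can be seen directly in plain UTF-8, the basis of earlier byte-level encodings: Latin letters take one byte each, while most Thai or Devanagari characters take three, so a byte-level model sees far longer sequences for the same content in non-Latin scripts. The sketch below is only an illustration of that baseline disparity, not the paper's MYTE algorithm.

```python
# Illustration of the UTF-8 byte-length disparity that motivates
# morpheme-based encodings such as MYTE (this is NOT the MYTE algorithm).

def utf8_len(text: str) -> int:
    """Number of UTF-8 bytes needed to encode `text`."""
    return len(text.encode("utf-8"))

# Roughly equivalent words meaning "language" in three scripts.
samples = {
    "English (Latin)": "language",  # 8 characters
    "Thai": "ภาษา",                  # 4 characters
    "Hindi (Devanagari)": "भाषा",    # 4 code points
}

for lang, word in samples.items():
    # Latin: 1 byte/char; Thai and Devanagari: 3 bytes/code point.
    print(f"{lang}: {len(word)} chars -> {utf8_len(word)} UTF-8 bytes")
```

Despite conveying comparable content, the Thai and Hindi words cost 3 bytes per character (12 bytes for 4 characters) versus 1 byte per character for the Latin-script word, which is the kind of imbalance MYTE's morpheme-based segments are designed to even out.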
| Original language | English |
| --- | --- |
| Title of host publication | Long Papers |
| Editors | Lun-Wei Ku, Andre F. T. Martins, Vivek Srikumar |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 15059-15076 |
| Number of pages | 18 |
| ISBN (Electronic) | 9798891760943 |
| State | Published - 2024 |
| Externally published | Yes |
| Event | 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024 - Bangkok, Thailand; 11 Aug 2024 → 16 Aug 2024 |
Publication series
| Name | Proceedings of the Annual Meeting of the Association for Computational Linguistics |
| --- | --- |
| Volume | 1 |
| ISSN (Print) | 0736-587X |
Conference
| Conference | 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024 |
| --- | --- |
| Country/Territory | Thailand |
| City | Bangkok |
| Period | 11/08/24 → 16/08/24 |
Bibliographical note
Publisher Copyright: © 2024 Association for Computational Linguistics.