Abstract
Sign language segmentation is a crucial task in sign language processing systems. It enables downstream tasks such as sign recognition, transcription, and machine translation. In this work, we consider two kinds of segmentation: segmentation into individual signs and segmentation into phrases, larger units comprising several signs. We propose a novel approach to jointly model these two tasks. Our method is motivated by linguistic cues observed in sign language corpora. We replace the predominant IO tagging scheme with BIO tagging to account for continuous signing. Given that prosody plays a significant role in phrase boundaries, we explore the use of optical flow features. We also provide an extensive analysis of hand shapes and 3D hand normalization. We find that introducing BIO tagging is necessary to model sign boundaries. Explicitly encoding prosody by optical flow improves segmentation in shallow models, but its contribution is negligible in deeper models. Careful tuning of the decoding algorithm atop the models further improves the segmentation quality. We demonstrate that our final models generalize to out-of-domain video content in a different signed language, even under a zero-shot setting. We observe that including optical flow and 3D hand normalization enhances the robustness of the model in this context.
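The abstract contrasts IO and BIO frame-level tagging and mentions optical-flow-based prosody cues. As a rough illustration only (not the paper's implementation), the sketch below shows how BIO tags preserve the boundary between two adjacent signs that IO tags would merge, plus a crude motion cue computed from pose keypoints as a stand-in for dense optical flow. The sample labels and the helper names `spans_from_bio` and `motion_cue` are hypothetical.

```python
import numpy as np

# Hypothetical frame-level labels for a short clip (values are illustrative).
# IO tagging marks each frame as inside a sign (I) or outside (O); two adjacent
# signs with no pause between them collapse into a single span. BIO tagging adds
# a B label on the first frame of each sign, so the boundary stays recoverable.
io_tags  = ["O", "I", "I", "I", "I", "I", "O"]   # two signs merge into one span
bio_tags = ["O", "B", "I", "I", "B", "I", "O"]   # the B at frame 4 keeps them apart

def spans_from_bio(tags):
    """Recover (start, end) frame spans of individual signs from BIO tags."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B":                      # a new sign starts here
            if start is not None:
                spans.append((start, i))
            start = i
        elif tag == "O" and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(tags)))
    return spans

def motion_cue(keypoints):
    """Per-frame mean keypoint displacement as a simple prosody/motion cue.

    keypoints: array of shape (frames, joints, 2); a stand-in for optical flow.
    """
    diffs = np.linalg.norm(np.diff(keypoints, axis=0), axis=-1)  # (frames-1, joints)
    return diffs.mean(axis=-1)

print(spans_from_bio(bio_tags))  # [(1, 4), (4, 6)] -> two separate signs
```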
Original language | English |
---|---|
Title of host publication | Findings of the Association for Computational Linguistics |
Subtitle of host publication | EMNLP 2023 |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 12703-12724 |
Number of pages | 22 |
ISBN (Electronic) | 9798891760615 |
State | Published - 2023 |
Event | 2023 Findings of the Association for Computational Linguistics: EMNLP 2023 - Singapore, Singapore |
Duration | 6 Dec 2023 → 10 Dec 2023 |
Publication series
Name | Findings of the Association for Computational Linguistics: EMNLP 2023 |
---|---|
Conference
Conference | 2023 Findings of the Association for Computational Linguistics: EMNLP 2023 |
---|---|
Country/Territory | Singapore |
City | Singapore |
Period | 6/12/23 → 10/12/23 |
Bibliographical note
Publisher Copyright: © 2023 Association for Computational Linguistics.
Funding
This work was funded by the EU Horizon 2020 project EASIER (grant agreement no. 101016982), the Swiss Innovation Agency (Innosuisse) flagship IICT (PFFS-21-47), and the EU Horizon 2020 project iEXTRACT (grant agreement no. 802774). We also thank Rico Sennrich and Chantal Amrhein for their suggestions.
Funders | Funder number |
---|---|
Horizon 2020 | 101016982, 802774 |
Innosuisse - Schweizerische Agentur für Innovationsförderung | PFFS-21-47 |