Abstract
Despite recent advancements in vision-language models, their performance remains suboptimal on images from non-Western cultures, owing to the underrepresentation of these cultures in training datasets. Various benchmarks have been proposed to test models' cultural inclusivity, but they cover few cultures and do not adequately assess cultural diversity across both universal and culture-specific local concepts. To address these limitations, we introduce the GLOBALRG benchmark, comprising two challenging tasks: retrieval across universals and cultural visual grounding. The former task entails retrieving culturally diverse images for universal concepts from 50 countries, while the latter aims at grounding culture-specific concepts within images from 15 countries. Our evaluation across a wide range of models reveals that performance varies significantly across cultures, underscoring the necessity of enhancing multicultural understanding in vision-language models. Our data and code can be found at https://globalrg.github.io/.
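As a rough illustration of the retrieval-across-universals setup, the sketch below (not the authors' pipeline) uses CLIP to rank a pool of candidate images against a universal concept; the model name, query phrasing, and image paths are illustrative assumptions.

```python
# Minimal sketch, assuming a CLIP-style scorer: rank candidate images
# for a universal concept ("breakfast"). NOT the GLOBALRG evaluation code.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical candidate pool: images of breakfast from different countries.
image_paths = ["breakfast_jp.jpg", "breakfast_mx.jpg", "breakfast_ng.jpg"]
images = [Image.open(path) for path in image_paths]

inputs = processor(text=["a photo of breakfast"], images=images,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text has shape (num_texts, num_images); higher means a
# stronger image-text match, so sorting yields the retrieval ranking.
ranking = outputs.logits_per_text[0].argsort(descending=True)
print([image_paths[i] for i in ranking])
```

A culture-aware evaluation in the spirit of the paper would then check whether the top-ranked images span many source countries rather than a single dominant culture.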
| Original language | English |
|---|---|
| Title of host publication | EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference |
| Editors | Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 6763-6782 |
| Number of pages | 20 |
| ISBN (Electronic) | 9798891761643 |
| State | Published - 2024 |
| Externally published | Yes |
| Event | 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024 (Hybrid, Miami, United States); 12 Nov 2024 → 16 Nov 2024 |
Publication series
| Name | EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference |
|---|---|
Conference
| Conference | 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024 |
|---|---|
| Country/Territory | United States |
| City | Hybrid, Miami |
| Period | 12/11/24 → 16/11/24 |
Bibliographical note
Publisher Copyright: © 2024 Association for Computational Linguistics.