Abstract
Despite their wide adoption, the biases and unintended behaviors of language models remain poorly understood. In this paper, we identify and characterize a previously undiscussed phenomenon, which we call semantic leakage, where models leak irrelevant information from the prompt into the generation in unexpected ways. We propose an evaluation setting to detect semantic leakage both by humans and automatically, curate a diverse test suite for diagnosing this behavior, and measure significant semantic leakage in 13 flagship models. We also show that models exhibit semantic leakage in languages besides English and across different settings and generation scenarios. This discovery highlights yet another type of bias in language models that affects their generation patterns and behavior.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies: Long Papers |
| Editors | Luis Chiruzzo, Alan Ritter, Lu Wang |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 785-798 |
| Number of pages | 14 |
| ISBN (Electronic) | 9798891761896 |
| DOIs | |
| State | Published - 2025 |
| Externally published | Yes |
| Event | 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2025 - Hybrid, Albuquerque, United States |
| Duration | 29 Apr 2025 → 4 May 2025 |
Publication series
| Name | Proceedings of the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies: Long Papers, NAACL-HLT 2025 |
|---|---|
| Volume | 1 |
Conference
| Conference | 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2025 |
|---|---|
| Country/Territory | United States |
| City | Hybrid, Albuquerque |
| Period | 29/04/25 → 4/05/25 |
Bibliographical note
Publisher Copyright: © 2025 Association for Computational Linguistics.