Abstract
Language models can be prompted to perform a wide variety of tasks with zero- and few-shot in-context learning. However, performance varies significantly with the choice of prompt, and we do not yet understand why this happens. In this paper, we analyze the factors that contribute to this variance and establish a new empirical hypothesis: the performance of a prompt is predicted by the extent to which the model is familiar with the language it contains. Over a wide range of tasks, we show that, among reasonable prompts related to a task, the lower the perplexity of a prompt, the better it performs that task. As part of our analysis, we also devise a method to automatically extend a small seed set of manually written prompts via paraphrasing with GPT-3 and backtranslation. This larger set allows us to verify that perplexity is a strong predictor of a prompt's success, and we show that the lowest-perplexity prompts are consistently effective.
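The selection criterion described in the abstract — score each candidate prompt by its perplexity under the model and prefer the lowest — can be sketched as follows. This is not the authors' code: it uses a toy add-one-smoothed unigram model as a stand-in for a real language model's token log-probabilities (in practice one would use the LM itself to score the prompt tokens), and the function names `unigram_logprobs`, `perplexity`, and `rank_prompts` are illustrative, not from the paper.

```python
import math
from collections import Counter

def unigram_logprobs(corpus_tokens):
    """Fit a toy unigram model over a token list.

    Stand-in for a real LM's per-token log-probabilities,
    used here only to make the sketch self-contained.
    """
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab = len(counts)

    def logprob(token):
        # Add-one smoothing so unseen tokens get a finite (low) probability.
        return math.log((counts[token] + 1) / (total + vocab + 1))

    return logprob

def perplexity(prompt, logprob):
    """Perplexity = exp of the average negative log-probability per token."""
    tokens = prompt.lower().split()
    avg_nll = -sum(logprob(t) for t in tokens) / len(tokens)
    return math.exp(avg_nll)

def rank_prompts(prompts, logprob):
    """Return candidate prompts sorted from lowest to highest perplexity."""
    return sorted(prompts, key=lambda p: perplexity(p, logprob))
```

Under the paper's hypothesis, a prompt phrased in language the model has seen often (low perplexity) should outperform an unusual paraphrase of the same instruction, so `rank_prompts(...)[0]` would be the preferred choice from a candidate pool.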
| Original language | English |
|---|---|
| Title of host publication | Findings of the Association for Computational Linguistics |
| Subtitle of host publication | EMNLP 2023 |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 10136-10148 |
| Number of pages | 13 |
| ISBN (Electronic) | 9798891760615 |
| DOIs | |
| State | Published - 2023 |
| Externally published | Yes |
| Event | 2023 Findings of the Association for Computational Linguistics: EMNLP 2023 - Hybrid, Singapore (6 Dec 2023 → 10 Dec 2023) |
Publication series
| Name | Findings of the Association for Computational Linguistics: EMNLP 2023 |
|---|
Conference
| Conference | 2023 Findings of the Association for Computational Linguistics: EMNLP 2023 |
|---|---|
| Country/Territory | Singapore |
| City | Hybrid |
| Period | 6/12/23 → 10/12/23 |
Bibliographical note
Publisher Copyright: © 2023 Association for Computational Linguistics.
Funding
We thank Alisa Liu and Orevaoghene Ahia for their help in annotating noisy prompts. We also thank the reviewers for their valuable comments on the paper.