Abstract
Prompt-based methods have been used extensively across NLP to build zero- and few-shot label predictors. Many NLP tasks are naturally structured: their outputs consist of multiple labels that constrain each other. Annotating data for such tasks can be cumbersome. Can the promise of the prompt-based paradigm be extended to such structured outputs? In this paper, we present a framework for constructing zero- and few-shot linguistic structure predictors. Our key insight is that we can use structural constraints, and combinatorial inference derived from them, to filter out inconsistent structures predicted by large language models. We instantiate this framework on two structured prediction tasks and five datasets. Across all cases, our results show that enforcing consistency not only constructs structurally valid outputs but also improves performance over the unconstrained variants.
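As a concrete illustration of the key idea, the sketch below shows constraint-based filtering for one common structured task, BIO-style sequence tagging, where an `I-X` tag may only follow a `B-X` or `I-X` tag of the same type. This is a minimal sketch, not the paper's implementation: the sampled candidate taggings, the `is_valid_bio` check, and the majority-vote selection are illustrative assumptions standing in for the paper's combinatorial inference.

```python
from collections import Counter

def is_valid_bio(tags):
    """Structural constraint for BIO tagging: I-X may only follow
    B-X or I-X of the same entity type X."""
    prev = "O"
    for tag in tags:
        if tag.startswith("I-") and prev not in (f"B-{tag[2:]}", f"I-{tag[2:]}"):
            return False
        prev = tag
    return True

def filter_and_vote(candidate_taggings):
    """Discard structurally inconsistent candidates (e.g., sampled from
    an LLM), then return the most frequent valid structure."""
    valid = [tuple(t) for t in candidate_taggings if is_valid_bio(t)]
    if not valid:
        return None  # no consistent structure among the samples
    return list(Counter(valid).most_common(1)[0][0])

# Hypothetical candidate taggings for a four-token sentence:
candidates = [
    ["B-PER", "I-PER", "O", "B-LOC"],  # valid
    ["O", "I-PER", "O", "B-LOC"],      # invalid: I-PER follows O
    ["B-PER", "I-PER", "O", "B-LOC"],  # valid
]
print(filter_and_vote(candidates))  # ['B-PER', 'I-PER', 'O', 'B-LOC']
```

Rejection filtering over samples is the simplest way to enforce consistency; a full combinatorial inference would instead search the constrained output space directly, which is closer to what the abstract describes.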
Original language | English
---|---|
Title of host publication | Long Papers
Editors | Kevin Duh, Helena Gomez, Steven Bethard
Publisher | Association for Computational Linguistics (ACL)
Pages | 112-130
Number of pages | 19
ISBN (Electronic) | 9798891761148
State | Published - 2024
Externally published | Yes
Event | 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024 - Hybrid, Mexico City, Mexico
Event duration | 16 Jun 2024 → 21 Jun 2024
Publication series
Name | Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024 |
---|---|
Volume | 1 |
Conference
Conference | 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024 |
---|---|
Country/Territory | Mexico |
City | Hybrid, Mexico City |
Period | 16/06/24 → 21/06/24 |
Bibliographical note
Publisher Copyright: © 2024 Association for Computational Linguistics.