Abstract
Most work on modeling the conversation history in Conversational Question Answering (CQA) reports a single main result on a common CQA benchmark. While existing models show impressive results on CQA leaderboards, it remains unclear whether they are robust to shifts in setting (sometimes to more realistic ones), training data size (e.g., from large to small sets), and domain. In this work, we design and conduct the first large-scale robustness study of history modeling approaches for CQA. We find that high benchmark scores do not necessarily translate to strong robustness, and that various methods can perform extremely differently under different settings. Equipped with the insights from our study, we design a novel prompt-based history modeling approach and demonstrate its strong robustness across various settings. Our approach is inspired by existing methods that highlight historic answers in the passage. However, instead of highlighting by modifying the passage token embeddings, we add textual prompts directly in the passage text. Our approach is simple, easy to plug into practically any model, and highly effective, thus we recommend it as a starting point for future model developers. We also hope that our study and insights will raise awareness of the importance of robustness-focused evaluation, in addition to obtaining high leaderboard scores, leading to better CQA systems.
| Original language | English |
| --- | --- |
| Pages (from-to) | 351-366 |
| Number of pages | 16 |
| Journal | Transactions of the Association for Computational Linguistics |
| Volume | 11 |
| DOIs | |
| State | Published - 20 Apr 2023 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2023, MIT Press Journals. All rights reserved.
Funding
We would like to thank the action editor and the reviewers, as well as the members of the IE@Technion NLP group and Roee Aharoni for their valuable feedback and advice. The Technion team was supported by the Zuckerman Fund to the Technion Artificial Intelligence Hub (Tech.AI). This research was also supported in part by a grant from Google.
| Funders | Funder number |
| --- | --- |
| Technion Artificial Intelligence Hub | Tech.AI |