How Well Do Large Language Models Perform on Faux Pas Tests?

Natalie Shapira, Guy Zwirn, Yoav Goldberg

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

21 Scopus citations

Abstract

Motivated by the question of the extent to which large language models “understand” social intelligence, we investigate the ability of such models to generate correct responses to questions involving descriptions of faux pas situations. The faux pas test, used in clinical psychology, is known to be more challenging for children than individual tests of theory-of-mind or social intelligence. Our results demonstrate that, while the models sometimes seem to offer correct responses, they in fact struggle with this task, and that many of the seemingly correct responses can be attributed to over-interpretation by the human reader (“the ELIZA effect”). We also observe that most models fail to generate correct responses to presupposition questions. Finally, in an experiment in which the models are tasked with generating original faux pas stories, we find that while some models can generate novel faux pas stories, the stories are all explicit, as the models are limited in their ability to describe situations in an implicit manner.
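To make the test format concrete, below is a minimal sketch (not the authors' code) of how one faux pas item might be administered to a model. The story, the exact question wording, and the `query_model` callable are all illustrative assumptions; the questions follow the standard clinical faux pas test structure (detection, identification, explanation, and a false-belief question). Note that the identification question presupposes a faux pas occurred, the kind of presupposition-bearing question the abstract refers to.

```python
# A minimal sketch (not the authors' code) of administering one faux pas
# test item to a language model. The story, question wording, and the
# `query_model` callable are illustrative assumptions; the item follows
# the standard clinical faux pas test structure.

from typing import Callable

# Illustrative story: the faux pas is implicit because Jill does not know
# the curtains were a gift from Sarah.
STORY = (
    "Sarah bought her friend Jill new curtains as a housewarming gift. "
    "Months later, during a visit from Sarah, Jill said: 'I'm finally "
    "replacing those awful curtains someone gave me.'"
)

QUESTIONS = [
    # Detection.
    "In the story, did someone say something they should not have said?",
    # Identification (note: presupposes that a faux pas occurred).
    "What did they say that they should not have said?",
    # Explanation.
    "Why shouldn't they have said it?",
    # False belief.
    "Did Jill know that the curtains were a gift from Sarah?",
]

def administer_item(query_model: Callable[[str], str]) -> list[str]:
    """Pose each question about the story and collect the model's answers."""
    answers = []
    for question in QUESTIONS:
        prompt = f"{STORY}\n\nQuestion: {question}\nAnswer:"
        answers.append(query_model(prompt))
    return answers
```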

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics, ACL 2023
Publisher: Association for Computational Linguistics (ACL)
Pages: 10438-10451
Number of pages: 14
ISBN (Electronic): 9781959429623
DOIs
State: Published - 2023
Event: Findings of the Association for Computational Linguistics, ACL 2023 - Toronto, Canada
Duration: 9 Jul 2023 - 14 Jul 2023

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
ISSN (Print): 0736-587X

Conference

Conference: Findings of the Association for Computational Linguistics, ACL 2023
Country/Territory: Canada
City: Toronto
Period: 9/07/23 - 14/07/23

Bibliographical note

Publisher Copyright:
© 2023 Association for Computational Linguistics.

Funding

We would like to thank Vered Shwartz, Ori Shapira, Osnat Baron Singer, Tamar Nissenbaum Putter, Maya Sabag, Arie Cattan, Uri Katz, Mosh Levy, Aya Soffer, David Konopnicki, and IBM-Research staff members for helpful discussions and contributions, each in their own way. We thank the anonymous reviewers for their insightful comments and suggestions. This project was partially funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program, grant agreement No. 802774 (iEXTRACT); and by the Computer Science Department of Bar-Ilan University.

Funders                                              Funder number
Computer Science Department of Bar-Ilan University
Horizon 2020 Framework Programme
European Commission
Horizon 2020                                         802774
