Abstract
The escalating debate over AI's capabilities warrants developing reliable metrics to assess machine “intelligence.” Recently, anecdotal examples have been used to suggest that newer large language models (LLMs) such as ChatGPT and GPT-4 exhibit Neural Theory-of-Mind (N-ToM); however, prior work has reached conflicting conclusions about these abilities. We investigate the extent of LLMs' N-ToM through an extensive evaluation on six tasks and find that while LLMs exhibit certain N-ToM abilities, this behavior is far from robust. We further examine the factors that affect performance on N-ToM tasks and find that LLMs struggle with adversarial examples, indicating reliance on shallow heuristics rather than robust ToM abilities. We caution against drawing conclusions from anecdotal examples, limited benchmark testing, and the use of human-designed psychological tests to evaluate models.
Original language | English |
---|---|
Title of host publication | EACL 2024 - 18th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference |
Editors | Yvette Graham, Matthew Purver
Publisher | Association for Computational Linguistics (ACL) |
Pages | 2257-2273 |
Number of pages | 17 |
ISBN (Electronic) | 9798891760882 |
State | Published - 2024 |
Event | 18th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2024 - St. Julian's, Malta. Duration: 17 Mar 2024 → 22 Mar 2024
Publication series
Name | EACL 2024 - 18th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference |
---|---|
Volume | 1 |
Conference
Conference | 18th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2024 |
---|---|
Country/Territory | Malta |
City | St. Julian's
Period | 17/03/24 → 22/03/24 |
Bibliographical note
Publisher Copyright: © 2024 Association for Computational Linguistics.