Cognitive Effects in Large Language Models

Jonathan Shaki, Sarit Kraus, Michael Wooldridge

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Large Language Models (LLMs) such as ChatGPT have received enormous attention over the past year and are now used by hundreds of millions of people every day. The rapid adoption of this technology naturally raises questions about the possible biases such models might exhibit. In this work, we tested one of these models (GPT-3) on a range of cognitive effects, which are systematic patterns usually found in human cognitive tasks. We found that LLMs are indeed prone to several human cognitive effects. Specifically, we show that the priming, distance, SNARC, and size congruity effects were present in GPT-3, while the anchoring effect was absent. We describe our methodology, and specifically the way we converted real-world experiments into text-based experiments. Finally, we speculate on the possible reasons why GPT-3 exhibits these effects and discuss whether they are imitated or reinvented.
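The abstract mentions converting real-world (typically reaction-time) experiments into text-based experiments. As a hypothetical illustration only (the prompt wording and function names below are assumptions, not the authors' actual stimuli), a distance-effect trial, where judgments are easier when two numbers are numerically farther apart, might be converted into a text prompt like this:

```python
# Hypothetical sketch of turning a number-comparison (distance effect)
# trial into a text prompt for an LLM. The template and helper names
# are illustrative assumptions, not the paper's actual materials.

def make_comparison_prompt(a: int, b: int) -> str:
    """Phrase a two-number comparison task as a plain-text question."""
    return f"Which number is larger, {a} or {b}? Answer with the number only."

def numerical_distance(a: int, b: int) -> int:
    """The distance effect predicts easier judgments as |a - b| grows."""
    return abs(a - b)

# One small-distance and one large-distance pair of comparands
trials = [(3, 4), (2, 9)]
for a, b in trials:
    print(numerical_distance(a, b), "|", make_comparison_prompt(a, b))
```

Comparing the model's accuracy (or log-probabilities) across small- and large-distance pairs would then indicate whether a distance effect is present.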

Original language: English
Title of host publication: ECAI 2023 - 26th European Conference on Artificial Intelligence, including 12th Conference on Prestigious Applications of Intelligent Systems, PAIS 2023 - Proceedings
Editors: Kobi Gal, Ann Nowe, Grzegorz J. Nalepa, Roy Fairstein, Roxana Radulescu
Publisher: IOS Press BV
Number of pages: 8
ISBN (Electronic): 9781643684369
State: Published - 28 Sep 2023
Event: 26th European Conference on Artificial Intelligence, ECAI 2023 - Krakow, Poland
Duration: 30 Sep 2023 - 4 Oct 2023

Publication series

Name: Frontiers in Artificial Intelligence and Applications
ISSN (Print): 0922-6389
ISSN (Electronic): 1879-8314


Conference: 26th European Conference on Artificial Intelligence, ECAI 2023

Bibliographical note

Publisher Copyright:
© 2023 The Authors.


We would like to thank Samuel Shaki for his useful insights from cognitive psychology. The research was supported in part by the EU Project TAILOR under grant 952215.

Funders: European Commission (grant number 952215)

