Human trust in artificial intelligence: Review of empirical research

Ella Glikson, Anita Williams Woolley

Research output: Contribution to journal › Article › peer-review

1189 Scopus citations

Abstract

Artificial intelligence (AI) characterizes a new generation of technologies capable of interacting with the environment and aiming to simulate human intelligence. The success of integrating AI into organizations critically depends on workers’ trust in AI technology. This review explains how AI differs from other technologies and presents the existing empirical research on the determinants of human “trust” in AI, conducted in multiple disciplines over the last 20 years. Based on the reviewed literature, we identify the form of AI representation (robot, virtual, and embedded) and its level of machine intelligence (i.e., its capabilities) as important antecedents to the development of trust and propose a framework that addresses the elements that shape users’ cognitive and emotional trust. Our review reveals the important role of AI’s tangibility, transparency, reliability, and immediacy behaviors in developing cognitive trust, and the role of AI’s anthropomorphism specifically for emotional trust. We also note several limitations in the current evidence base, such as the diversity of trust measures and overreliance on short-term, small-sample, and experimental studies, where the development of trust is likely to differ from longer-term, higher-stakes field environments. Based on our review, we suggest the most promising paths for future research.

Original language: English
Pages (from-to): 627-660
Number of pages: 34
Journal: Academy of Management Annals
Volume: 14
Issue number: 2
DOIs
State: Published - Jul 2020

Bibliographical note

Publisher Copyright:
© Academy of Management Annals.

Funding

The work on this article was sponsored by the Defense Advanced Research Projects Agency and the Army Research Office, and was accomplished under grant numbers W911NF-17-1-0104 and W911NF-20-1-0006. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, the Army Research Office, or the U.S. government. The U.S. government is authorized to reproduce and distribute reprints for government purposes, notwithstanding any copyright notation herein. We greatly appreciate the constructive and thoughtful feedback provided by Associate Editor Sharon Parker.

Funders and funder numbers:
U.S. Government
Army Research Office: W911NF-17-1-0104, W911NF-20-1-0006
Defense Advanced Research Projects Agency
