Abstract
Speaking with conversational AIs, technologies whose interfaces enable human-like interaction based on natural language, has become a common phenomenon. During these interactions, people form beliefs based on the say-so of conversational AIs. In this paper, I consider, and then reject, the concepts of testimony-based beliefs and instrument-based beliefs as suitable for analyzing beliefs acquired from these technologies. I argue that the concept of instrument-based beliefs acknowledges the non-human agency of the belief's source, but its analysis focuses on perceiving signs and indicators rather than on content expressed in natural language. Conversely, the concept of testimony-based beliefs does refer to natural-language propositions, but it rests on the underlying assumption that the testifier's agency is human. To fill this lacuna in analyzing belief acquisition from conversational AIs, I propose a third concept: technology-based beliefs. It acknowledges the non-human agency-status of the belief's originator while focusing the analysis on the propositional content that forms the belief. Filling the lacuna enables analysis of the epistemic, ethical, and social issues of conversational AIs without excluding propositional content or compromising accepted assumptions about the agency of technologies.
| Original language | English |
|---|---|
| Pages (from-to) | 1031-1047 |
| Number of pages | 17 |
| Journal | Episteme |
| Volume | 21 |
| Issue number | 3 |
| DOIs | |
| State | Published - 1 Sep 2024 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © The Author(s), 2023.
Keywords
- AI
- anthropomorphism
- chatbots
- conversational AIs
- Large Language Models
- personal virtual assistants
- technology-based beliefs
- testimony
- testimony-based beliefs