Beyond words: Evidence for automatic language-gesture integration of symbolic gestures but not dynamic landscapes

Dana Vainiger, Ludovica Labruna, Richard B. Ivry, Michal Lavidor

Research output: Contribution to journal › Article › peer-review


Abstract

Understanding actions based on either language or action observation is presumed to involve the motor system, reflecting the engagement of an embodied conceptual network. We examined how linguistic and gestural information were integrated in a series of cross-domain priming studies. We varied the task demands across three experiments in which symbolic gestures served as primes for verbal targets. Primes were clips of symbolic gestures taken from a rich set of emblems. Participants responded by making a lexical decision to the target (Experiment 1), naming the target (Experiment 2), or performing a semantic relatedness judgment (Experiment 3). The magnitude of semantic priming was larger in the relatedness judgment and lexical decision tasks compared to the naming task. Priming was also observed in a control task in which the primes were pictures of landscapes with conceptually related verbal targets. However, for these stimuli, the amount of priming was similar across the three tasks. We propose that action observation triggers an automatic, pre-lexical spread of activation, consistent with the idea that language-gesture integration occurs in an obligatory and automatic fashion.

Original language: English
Pages (from-to): 55-69
Number of pages: 15
Journal: Psychological Research
Volume: 78
Issue number: 1
DOIs
State: Published - Jan 2014

Bibliographical note

Funding Information:
Acknowledgments: This study was supported by BSF Grant 2007184, awarded to R. Ivry and M. Lavidor.
