Multilingual Semantic Distance: Automatic Verbal Creativity Assessment in Many Languages

John D. Patterson, Hannah M. Merseal, Dan R. Johnson, Sergio Agnoli, Matthijs Baas, Brendan S. Baker, Baptiste Barbot, Mathias Benedek, Khatereh Borhani, Qunlin Chen, Julia F. Christensen, Giovanni Emanuele Corazza, Boris Forthmann, Maciej Karwowski, Nastaran Kazemian, Ariel Kreisberg-Nitzav, Yoed N. Kenett, Allison Link, Todd Lubart, Maxence Mercier, Kirill Miroshnik, Marcela Ovando-Tellez, Ricardo Primi, Rogelio Puente-Díaz, Sameh Said-Metwaly, Claire Stevenson, Meghedi Vartanian, Emmanuelle Volle, Janet G. van Hell, Roger E. Beaty

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

Creativity research commonly involves recruiting human raters to judge the originality of responses to divergent thinking tasks, such as the alternate uses task (AUT). These manual scoring practices have benefited the field, but they also have limitations, including labor-intensiveness and subjectivity, which can adversely impact the reliability and validity of assessments. To address these challenges, researchers are increasingly employing automatic scoring approaches, such as distributional models of semantic distance. However, semantic distance has primarily been studied in English-speaking samples, with very little research in the many other languages of the world. In a multilab study (N = 6,522 participants), we aimed to validate semantic distance on the AUT in 12 languages: Arabic, Chinese, Dutch, English, Farsi, French, German, Hebrew, Italian, Polish, Russian, and Spanish. We gathered AUT responses and human creativity ratings (N = 107,672 responses), as well as criterion measures for validation (e.g., creative achievement). We compared two deep learning-based semantic models—multilingual bidirectional encoder representations from transformers and cross-lingual language model RoBERTa—to compute semantic distance and validate this automated metric with human ratings and criterion measures. We found that the top-performing model for each language correlated positively with human creativity ratings, with correlations ranging from medium to large across languages. Regarding criterion validity, semantic distance showed small-to-moderate effect sizes (comparable to human ratings) for openness, creative behavior/achievement, and creative self-concept. We provide open access to our multilingual dataset for future algorithmic development, along with Python code to compute semantic distance in 12 languages.
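The core metric described in the abstract is semantic distance: how far a response's embedding lies from the task prompt's embedding in a model's semantic space. The following is a minimal illustrative sketch, not the authors' released pipeline; the toy three-dimensional vectors stand in for the ~768-dimensional embeddings a transformer such as multilingual BERT or XLM-RoBERTa would produce.

```python
import numpy as np

def semantic_distance(prompt_vec, response_vec):
    """Cosine distance (1 - cosine similarity) between two embedding vectors."""
    cos_sim = np.dot(prompt_vec, response_vec) / (
        np.linalg.norm(prompt_vec) * np.linalg.norm(response_vec)
    )
    return 1.0 - cos_sim

# Hypothetical toy embeddings for an AUT prompt and two responses.
brick = np.array([0.9, 0.1, 0.2])          # prompt: "brick"
build_a_wall = np.array([0.8, 0.2, 0.3])   # common use
abstract_sculpture = np.array([0.1, 0.9, 0.7])  # unusual use

d_common = semantic_distance(brick, build_a_wall)
d_creative = semantic_distance(brick, abstract_sculpture)
# The more original response sits farther from the prompt in semantic space.
assert d_creative > d_common
```

In practice the study's approach replaces the toy vectors with sentence embeddings from a multilingual transformer, so the same distance computation works across all 12 languages without language-specific scoring rules.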

Original language: English
Pages (from-to): 495-507
Number of pages: 13
Journal: Psychology of Aesthetics, Creativity, and the Arts
Volume: 17
Issue number: 4
DOIs
State: Published - 2023
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2023 American Psychological Association

Keywords

  • creativity assessment
  • cross-linguistic analysis
  • distributional semantic modeling
  • natural language processing
  • semantic distance
