Data quality of platforms and panels for online behavioral research

Eyal Peer, David Rothschild, Andrew Gordon, Zak Evernden, Ekaterina Damer

Research output: Contribution to journal › Article › peer-review

149 Scopus citations


We examine key aspects of data quality for online behavioral research across selected platforms (Amazon Mechanical Turk, CloudResearch, and Prolific) and panels (Qualtrics and Dynata). To identify the key aspects of data quality, we first engaged with the behavioral research community to discover which aspects are most critical to researchers and found that these include attention, comprehension, honesty, and reliability. We then explored differences in these data quality aspects in two studies (N ~ 4000), with or without data quality filters (approval ratings). We found considerable differences between the sites, especially in comprehension, attention, and dishonesty. In Study 1 (without filters), we found that only Prolific provided high data quality on all measures. In Study 2 (with filters), we found high data quality on CloudResearch and Prolific. MTurk showed alarmingly low data quality even with data quality filters. We also found that while reputation (approval rating) did not predict data quality, frequency and purpose of usage did, especially on MTurk: the lowest data quality came from MTurk participants who report using the site as their main source of income but spend few hours on it per week. We provide a framework for future investigation into the ever-changing nature of data quality in online research, and how the evolving set of platforms and panels performs on these key aspects.

Original language: English
Pages (from-to): 1643-1662
Number of pages: 20
Journal: Behavior Research Methods
Issue number: 4
State: Published - Aug 2022
Externally published: Yes

Bibliographical note

Funding Information:
We wish to acknowledge that this research project was done in collaboration with members of Prolific (ED is co-founder and CEO of Prolific; DR serves on the Prolific advisory board; ZE and AG are employed by Prolific), and the study was funded by Prolific. However, we have taken several precautions to mitigate any potential conflict of interest. First, the study was pre-registered on the Open Science Framework, and all materials and data are available at . This enables outside researchers to corroborate our analyses and findings, as well as easily conduct replication studies or extensions of this study to other platforms or aspects of data quality. Second, the studies on all platforms were run by the first author (EP), who is not affiliated with Prolific in any way and did not receive any financial or other compensation. EP was also responsible for planning the research design, analyzing the data, and drafting, revising, and submitting the final manuscript. Other authors’ contributions were as follows: DR took part in the research design and revising the surveys, pre-registering the study, checking the data analyses, and revising the manuscript; ZE assisted with the research design, conducting the preliminary survey, and writing the manuscript; AG ran statistical analyses and contributed to writing the manuscript; ED assisted with the research design and provided feedback on the manuscript.

Publisher Copyright:
© 2021, The Psychonomic Society, Inc.


Keywords:

  • Amazon Mechanical Turk
  • Attention
  • Comprehension
  • Data quality
  • Honesty
  • Online research
  • Prolific
  • Reliability

