We are constantly exposed to multiple visual scenes, and while freely viewing them without an intentional effort to memorize or encode them, only some are remembered. It has been suggested that image memory is influenced by multiple factors, such as depth of processing, familiarity, and visual category. However, this is typically investigated when people are instructed to perform a task (e.g., remember the images or make some judgment about them), which may modulate processing at multiple levels and thus may not generalize to naturalistic visual behavior. Visual memory is assumed to rely on high-level visual perception, which shows a degree of size invariance, and is therefore not assumed to depend strongly on image size. Here, we reasoned that during naturalistic vision, free of task-related modulations, bigger images engage more visual system processing resources (from retina to cortex) and would therefore be better remembered. In an extensive set of seven experiments, naïve participants (n = 182) freely viewed presented images (sized 3° to 24°) without any instructed encoding task. Afterward, they were given a surprise recognition test (midsized images, 50% already seen). Larger images were remembered better than smaller ones across all experiments (∼20% higher accuracy, or ∼1.5 times better). Memory was proportional to image size; faces were remembered best and outdoor scenes least. Results were robust even when controlling for image set, presentation order, screen resolution, image scaling at test, or the amount of information. While multiple factors affect image memory, our results suggest that low- to high-level processes may all contribute to image memory.
Journal: Proceedings of the National Academy of Sciences of the United States of America
State: Published - 25 Jan 2022
Bibliographical note
Funding Information:
ACKNOWLEDGMENTS. We thank Yulia Golland, Nurit Gronau, Daniel Levy, Yoram Bonneh, Rafi Malach, and Ifat Levy for discussions and suggestions and Yuri Maximov for technical assistance. This work was supported by Israel Science Foundation Grant 1458/18 (to S.G.-D.).
© 2022 National Academy of Sciences. All rights reserved.