Using multidimensional scaling and convolutional neural networks to probe mental representations of complex images
Understanding the mechanisms of anxiety disorders requires knowing how fear-inducing stimuli are mentally represented. Because similarity is central to recognizing objects and structuring representations, similarity judgment data are often used in cognitive models to reveal the psychological dimensions of mental representations. However, collecting similarity data and predicting the positions of newly added objects within an existing database are both resource-intensive. Previous studies therefore focused mainly on small databases, and characterizations of mental representations for large sets of fearful stimuli remain limited. In this work, we conducted an online experiment with a large database of 314 spider-relevant images to collect similarity judgments. Participants first completed the Fear of Spiders Questionnaire (FSQ). We then used a rejection sampling procedure to select participants such that the resulting FSQ scores were uniformly distributed. Selected participants next performed the Spatial Arrangement Task, arranging spider images on a 2D canvas so that the distance between each pair of images reflected their subjective similarity. With the collected data, we applied metric multidimensional scaling (MDS) to create low-dimensional embeddings. We compared the Bayesian information criterion and cross-validation as model selection procedures in a simulation study and used both methods to determine the dimensionality of the embeddings. We then reproduced these embeddings and predicted the positions of new images using convolutional neural networks (CNNs). Taken together, this work explores the application of MDS and CNNs to large-scale complex images for the first time, and the methodology could be applied to a wide range of stimuli in psychological research.
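As a rough illustration of the metric MDS step described above, the following Python sketch (not the authors' code) embeds a synthetic dissimilarity matrix over 314 items at several candidate dimensionalities and reports the stress of each fit, the quantity a model-selection procedure such as cross-validation or BIC would compare. The random dissimilarity matrix, the candidate dimensionalities, and the use of scikit-learn are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical stand-in for the behavioural data: a symmetric dissimilarity
# matrix over 314 items, here generated from a random 3-D configuration.
rng = np.random.default_rng(0)
n_items = 314
latent = rng.normal(size=(n_items, 3))
dissimilarity = np.linalg.norm(latent[:, None, :] - latent[None, :, :], axis=-1)

# Fit metric MDS at several candidate dimensionalities and record the stress;
# a model-selection step (e.g. cross-validation or BIC) would compare these fits.
for k in range(1, 6):
    mds = MDS(n_components=k, metric=True, dissimilarity="precomputed",
              random_state=0)
    embedding = mds.fit_transform(dissimilarity)  # (314, k) coordinates
    print(f"dimensionality {k}: stress = {mds.stress_:.2f}")
```

In the approach outlined in the abstract, a CNN would then be trained to map images to these embedding coordinates, so that new images can be placed in the space without collecting further similarity judgments.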