Exploring human representations of facial affect by integrating a deep generative model into Markov Chain Monte Carlo With People
People’s internal representations of natural categories play a crucial role in explaining and predicting how people perceive, learn, and interact with the world. One of the most powerful methods for estimating these representations is Markov Chain Monte Carlo with People (MCMCP), which uses pairwise decisions to sample from very complex category representations. Unfortunately, MCMCP requires a large number of trials to converge, particularly for high-dimensional stimuli such as faces. To address this shortcoming, we integrate a deep generative model, specifically a Variational Auto-Encoder (VAE), into MCMCP, reducing the dimensionality of the search space and accelerating convergence by exploiting the VAE’s implicit knowledge of natural categories. The VAE provides MCMCP with a compact and informative representation space via a non-linear encoder, and focuses human decisions on regions of that space where the VAE believes the category lies. Without this guidance, MCMCP would wander aimlessly through a highly sparse representation space before reaching target regions with larger gradients, a process that is typically lengthy. To test this approach, we ran a new experiment applying VAE-guided MCMCP to recovering people’s representations of happy and sad faces. Whereas past applications of MCMCP to facial affect categories have required chaining across participants, consuming thousands of pairwise decisions before obtaining representative estimates of the means of the two categories, VAE-guided MCMCP converges on an individual’s representation within a single session of fewer than 150 trials, making MCMCP much more feasible. The study not only provides a method that enables MCMCP to uncover human representations of natural categories more efficiently at the individual level, but also offers an innovative and generalizable framework that uses deep neural networks to enhance research into human internal representations.
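The core loop described above can be sketched in a few lines. In MCMCP, a human pairwise choice between the current stimulus and a proposal acts as the Markov chain's acceptance step (a Barker/Luce choice rule); here the proposals are small random walks in a low-dimensional latent space standing in for the VAE encoder's output. The latent dimensionality, step size, trial count, and the synthetic Gaussian "participant" that substitutes for a real human decision are all illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8                            # hypothetical VAE latent dimensionality
TRUE_MEAN = rng.normal(size=LATENT_DIM)   # hidden "category mean" the chain should recover


def simulated_choice(current, proposal):
    """Stand-in for a human pairwise decision (Barker/Luce choice rule):
    pick the stimulus whose latent code is more probable under the
    participant's category representation (here a synthetic unit Gaussian)."""
    def log_p(z):
        return -0.5 * np.sum((z - TRUE_MEAN) ** 2)
    # P(choose proposal) = p(proposal) / (p(proposal) + p(current))
    p_prop = 1.0 / (1.0 + np.exp(log_p(current) - log_p(proposal)))
    return rng.random() < p_prop


def mcmcp_in_latent_space(n_trials=150, step=0.5):
    """Run an MCMCP chain whose states live in the (assumed) VAE latent space.
    In the real experiment each latent code would be decoded into a face image
    before being shown to the participant."""
    z = rng.normal(size=LATENT_DIM)        # start from the VAE prior
    samples = []
    for _ in range(n_trials):
        proposal = z + step * rng.normal(size=LATENT_DIM)
        if simulated_choice(z, proposal):  # the (simulated) human decision is the acceptance step
            z = proposal
        samples.append(z.copy())
    return np.array(samples)


samples = mcmcp_in_latent_space()
estimate = samples[50:].mean(axis=0)       # discard burn-in, average the remaining states
```

Because each state is a short latent vector rather than a full face image, the chain only has to explore a handful of dimensions, which is the informal reason a single session of under 150 trials can suffice.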