Dimension reduction
Daniel W. Heck
Cultural Consensus Theory (CCT) is a statistical approach for aggregating subjective judgments or ratings for which correct answers are not known. In a typical CCT setting, several informants provide judgments for items from a certain knowledge domain (e.g., cultural norms). However, both the correct answers and informants’ competence are unknown. Batchelder and Romney (1986) developed CCT to identify the latent “cultural truth” based only on the observed judgments while accounting for differences in informants’ competence. Here, we extend CCT to two-dimensional continuous data such as geographical coordinates (longitude and latitude) of subjective location judgments on maps. For instance, when asking a group of informants to locate a set of unknown sites (e.g., capital cities), the new CCT-2D model provides estimates of the inferred locations and the informants’ abilities. For this purpose, we develop a joint model of longitude and latitude judgments (i.e., x- and y-coordinates) which assumes a common cultural truth of item locations and accounts for correlated errors with respect to the x- and y-coordinates. The CCT-2D model is tested using both simulated and empirical data, showing that the resulting aggregate location estimates are more accurate than those obtained by unweighted averaging.
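As a rough illustration of the data-generating structure described in the abstract (a minimal sketch only; the talk supplies no code, and names such as competence_sd and rho are hypothetical), the following Python snippet simulates informants of varying competence producing correlated x/y location judgments and compares unweighted averaging with the kind of precision-weighted aggregation a fitted CCT-2D model would license:

```python
# Illustrative sketch only: simulate the structure assumed by a CCT-2D-style model
# and compare unweighted vs. precision-weighted aggregation of location judgments.
import numpy as np

rng = np.random.default_rng(1)
n_informants, n_items = 20, 30
truth = rng.uniform(0, 100, size=(n_items, 2))         # latent item locations (x, y)
competence_sd = rng.uniform(1.0, 10.0, n_informants)    # informant-specific error SD (hypothetical)
rho = 0.4                                               # correlation between x- and y-errors

judgments = np.empty((n_informants, n_items, 2))
for i, sd in enumerate(competence_sd):
    cov = sd ** 2 * np.array([[1.0, rho], [rho, 1.0]])  # correlated 2D error covariance
    judgments[i] = truth + rng.multivariate_normal([0.0, 0.0], cov, size=n_items)

# Unweighted averaging vs. precision weighting (1 / sd^2); the weights stand in
# for the competence estimates a full hierarchical CCT-2D fit would provide.
unweighted = judgments.mean(axis=0)
w = 1.0 / competence_sd ** 2
weighted = np.tensordot(w, judgments, axes=(0, 0)) / w.sum()

def rmse(estimate):
    return np.sqrt(((estimate - truth) ** 2).sum(axis=1)).mean()

print(f"RMSE unweighted: {rmse(unweighted):.2f}   weighted: {rmse(weighted):.2f}")
```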
Niek Stevenson
Dr. Quentin Gronau
Prof. Andrew Heathcote
Prof. Birte Forstmann
Dr. Scott Brown
Steven Miletić
Joint modelling of behaviour and neural activation holds the potential to significantly advance methods of linking brain and behaviour. However, joint modelling has been limited by estimation difficulties, often due to high dimensionality and the challenges of simultaneous estimation. In this talk, we present a method of model estimation that achieves a substantial dimensionality reduction by applying factor analysis at the group level within a Bayesian hierarchical estimation framework. The method is based on the particle Metropolis-within-Gibbs sampling algorithm (Gunawan, Hawkins, Tran, Kohn, & Brown, 2020), which is robust and reliable, with changes implemented in the standard ‘pmwg’ R package. Additionally, we briefly highlight several alternative solutions to the dimensionality problem. Although we focus on joint modelling, this model-based estimation approach could be used for any high-dimensional modelling problem. We provide open-source code and accompanying tutorial documentation to make the method accessible to all researchers.
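The core dimension-reduction idea, replacing the full group-level covariance of subject-level parameters with a low-rank factor structure, can be sketched as follows (a hypothetical Python illustration only; the actual implementation modifies the ‘pmwg’ R package, and the names Lambda, psi, and the chosen sizes are assumptions):

```python
# Sketch of group-level dimension reduction: model the covariance of subject-level
# parameters as Sigma = Lambda @ Lambda.T + diag(psi) instead of a full matrix.
import numpy as np

rng = np.random.default_rng(0)
n_params, n_factors, n_subjects = 30, 3, 100   # e.g., joint behavioural + neural parameters

Lambda = rng.normal(0, 0.5, size=(n_params, n_factors))   # factor loadings
psi = rng.uniform(0.05, 0.2, n_params)                     # unique (residual) variances
mu = np.zeros(n_params)                                    # group-level means

# Subject-level parameter vectors generated from the factor model:
eta = rng.normal(size=(n_subjects, n_factors))             # latent factor scores
alpha = mu + eta @ Lambda.T + rng.normal(0, np.sqrt(psi), size=(n_subjects, n_params))

# The payoff: far fewer free covariance parameters to estimate.
full_cov_params = n_params * (n_params + 1) // 2
factor_params = n_params * n_factors + n_params
print(f"free covariance parameters: full = {full_cov_params}, factor = {factor_params}")
```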
Dr. Yiyun Shou
Dr. Junwen Chen
Bruce Christensen
To date, little is known about the role of social anxiety in the assignment of evidence weights, which could contribute to the jumping-to-conclusions bias. The present study used a Bayesian computational method to understand the mechanism of the jumping-to-conclusions bias in social anxiety, specifically through the weights assigned to sampled information. The study also investigated the specificity of the jumping-to-conclusions bias in social anxiety using three variations of the beads task comprising neutral and socially threatening situations. A sample of 210 participants was recruited from online communities to complete the beads tasks and a set of questionnaires measuring trait variables including social anxiety and fears of positive and negative evaluation. The Bayesian model estimates indicated that social anxiety and fears of evaluation significantly biased the assignment of evidence weights to incoming information in certain conditions of the beads tasks. Our results indicate that social anxiety and fear of evaluation can influence belief updating depending on the situation. However, the influence of these trait variables appeared insufficient to produce the jumping-to-conclusions bias.
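One common way to formalize such evidence weighting, shown here purely as an assumed sketch rather than the study's actual model, is to raise each draw's likelihood ratio to a weight parameter w before Bayesian updating in the beads task:

```python
# Illustrative sketch of distorted Bayesian updating in a beads task: each draw's
# likelihood ratio is scaled by an evidence-weight parameter w (w = 1 is ideal
# Bayesian updating). How w relates to social anxiety or fear of evaluation is
# what the study estimates; that mapping is not shown here.
import numpy as np

def posterior_majority_jar(draws, p_major=0.85, w=1.0):
    """Posterior P(majority jar) after a sequence of draws (1 = majority colour)."""
    log_odds = 0.0                                  # prior odds 1:1
    for bead in draws:
        lr = p_major / (1 - p_major) if bead == 1 else (1 - p_major) / p_major
        log_odds += w * np.log(lr)                  # evidence weight scales each update
    return 1.0 / (1.0 + np.exp(-log_odds))

draws = [1, 1, 0, 1, 1]
for w in (0.5, 1.0, 1.5):
    print(f"w = {w}: P(majority jar) = {posterior_majority_jar(draws, w=w):.3f}")
```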
Dr. Jeffrey Rouder
Joachim Vandekerckhove
A common method to localize cognitive processes is Donders' subtractive method. For example, to localize inhibition in the Stroop task, performance in a congruent condition is subtracted from that in an incongruent condition. Many cognitive tasks purport to measure inhibition this way. A critical question is whether individual difference scores correlate across these tasks. We find that they do not. Inhibition response time difference scores correlate weakly at best, often below .1 in value. We revisit three large-scale data sets and find that overall task response times do correlate at over .5 in value. This result implies that participants are consistently fast or slow to respond across these tasks. The main source of individual variation is not inhibition, but rather overall or general speed. We explore the dimensionality and structure of general speed across individuals and tasks in extended data sets. With several tasks per set, it is possible to ask whether there is a unified general speed versus several varieties of general speed. A principal component analysis (PCA) revealed a strong first factor in all sets, consistent with a unidimensional, unified construct of general speed. One way of contextualizing these results is to compare them to human anthropometrics. While human bodies are similar in many ways, they seemingly vary on a “size” factor. We analyze a publicly available set of 93 body measurements collected across 6,068 US military personnel. Indeed, a strong first factor of size emerges, but so does a second factor that captures how heavy people are for their height. Perhaps surprisingly, the first-factor solution for general speed is comparable to or even stronger than it is for anthropometrics. Moreover, we were unable to identify a coherent second factor for general speed. We conclude that general speed is likely unidimensional.
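The flavour of the analysis can be sketched as follows (simulated data stand in for the RT and anthropometric matrices; all names and settings are illustrative, not the study's code): compute the principal components of a standardized person-by-measure matrix and inspect how much variance the first component captures relative to the second.

```python
# Sketch of a first-factor dominance check on a person-by-measure matrix
# (e.g., mean RTs per task, or body measurements). Data are simulated here
# with a strong general factor; the reported analyses use the empirical sets.
import numpy as np

rng = np.random.default_rng(7)
n_people, n_measures = 500, 9
general = rng.normal(size=(n_people, 1))                 # one strong common factor
X = 0.8 * general + 0.6 * rng.normal(size=(n_people, n_measures))

Z = (X - X.mean(axis=0)) / X.std(axis=0)                 # standardize each measure
eigvals = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]
prop = eigvals / eigvals.sum()
print("variance explained by PC1, PC2:", np.round(prop[:2], 3))
```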
Julien Musolino
Nipun Arora
Researchers are often faced with novel and complex problems requiring interdisciplinary solutions. However, interdisciplinary research requires integrating previously unrelated concepts across different fields, a task that involves discovering and processing very large quantities of information. Given the nature of the challenge, big data and machine learning tools naturally come to mind as potential solutions. Here, we present an approach that automates the discovery of relevant literature and uses machine intelligence to identify fine-grained semantic relationships embedded within thousands of articles. Specifically, we apply this technique to the case of human agency. In doing so, we aim to fill a critical gap in that broader literature, namely the absence of an account of agency that integrates the sociological and psychological natures of the phenomenon. We programmatically scanned 6 databases using the search term ‘human agency’. The automated method mined more than 2,700 full papers across 9 different disciplines. We then used Latent Dirichlet Allocation, a Bayesian machine learning technique, to identify 54 topics present in this corpus. PCA was used to project the topics onto a semantic space and visualize the lay of the land. Rendering these in a networked representation allowed us to locate specific cross-disciplinary relationships in a haystack of literature without having to manually read nearly 3,000 papers. Finally, the trained model was used to quantify intersectionality within each paper, which helped us identify key articles. Our method enables researchers to discover a broad and exhaustive corpus of relevant literature, quickly develop a big-picture understanding of it, and uncover deep, interdisciplinary connections. Being automated, the approach mitigates selection biases. It is also sensitive to different conceptualizations of the same word, which makes it particularly well suited to processing interdisciplinary literature. Finally, the method is topic-neutral and therefore broadly applicable. We have published its codebase on GitHub for the wider community.
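A minimal sketch of this pipeline using scikit-learn (a stand-in only; the corpus, preprocessing, and parameter settings here are placeholders, and the authors' actual implementation is in their published codebase) might look like:

```python
# Sketch: fit LDA topics on a document corpus, then project the topic-word
# distributions with PCA for a 2D "lay of the land" map of the topics.
from sklearn.decomposition import PCA, LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-in corpus; the study mined 2,700+ full papers across 9 disciplines.
docs = [
    "human agency in sociological theory and social structure",
    "neural and cognitive correlates of the sense of agency",
    "intentional action, free will, and moral responsibility",
]

counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=54, random_state=0).fit(counts)  # 54 topics as reported

# Normalize topic-word weights, then place topics in a 2D semantic space.
topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
topic_xy = PCA(n_components=2).fit_transform(topic_word)

doc_topics = lda.transform(counts)   # per-paper topic mixtures (basis for intersectionality scores)
```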