Estimation
The memory measurement model (M3; Oberauer & Lewandowsky, 2018) is a cognitive measurement model designed to isolate parameters associated with different processes in working memory. It assumes that different categories of representations in working memory are activated through distinct processes. Transforming the activation of the different item categories into their respective recall probabilities then makes it possible to estimate the contributions of different memory processes to working memory performance. So far, parameter recovery has been assessed only for the group-level parameters of the M3. In contrast to experimental research, individual-differences research relies on variation in subject parameters, yet the quality with which these subject parameters are recovered has not been investigated. We therefore ran a parameter recovery simulation to assess how well the model recovers subject-level parameters under different experimental conditions. In this talk, we will present the results of this parameter recovery study, which used a multivariate parametrization of the model implemented in Stan using the No-U-Turn Sampler (Hoffman & Gelman, 2011). The results of the simulation indicate that our implementation of the M3 recovers subject parameters acceptably. Based on differences between experimental conditions, we will provide recommendations for using the M3 in individual-differences research. Altogether, our parameter recovery study shows that the M3 scales easily to different experimental paradigms with sufficient recovery performance.
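For readers unfamiliar with the model, the following sketch illustrates the activation-to-probability mapping for a simple-span task. The category structure and parameter labels (b: background noise, a: general activation of all list items, c: additional context activation of the item bound to the probed position) follow one common parametrization and are given only for illustration; the exact equations depend on the paradigm.

```latex
% Illustrative M3 activations for a simple-span task (simplified, assumed parametrization)
\begin{align*}
  A_{\text{correct item}}       &= b + a + c \\
  A_{\text{other list item}}    &= b + a \\
  A_{\text{not-presented lure}} &= b \\[4pt]
  P(\text{candidate } i)        &= \frac{A_i}{\sum_j A_j}
\end{align*}
```

The free parameters are then estimated from the observed frequencies of responses falling into each category.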
The Ratcliff diffusion decision model (DDM) is the most prominent model for jointly modelling binary responses and their associated response times. One hurdle in estimating the DDM is that its probability density function (PDF) contains an infinite sum, for which several different approximations exist. We present a novel method for approximating this PDF, implemented in C++ and exposed to R through the Rcpp package. In addition to our novel approximation method, we compiled all known approximation methods for the PDF (with fixed and variable drift rate), including previously unused combinations of techniques found in the relevant literature, ported them to C++, and optimized them. Given an acceptable error tolerance in the value of the PDF approximation, we benchmarked all of these approximation methods against one another and against commonly used R functions from the literature. The results of these tests show that our novel approximation method is not only orders of magnitude faster than the current standards, but also faster than all of the other available approximation methods, even after those methods were translated to and optimized in C++. All of these approximation methods are bundled in the R package fddm; the package is available via CRAN, and the source code is available on GitHub.
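For context (not part of the abstract itself), the infinite sum in question arises in the series representations of the Wiener first-passage-time density. One commonly used large-time expansion for the density of reaching the lower boundary (with drift rate v, boundary separation a, and relative starting point w; cf. Navarro & Fuss, 2009) has roughly the following form:

```latex
% Large-time series for the Wiener first-passage-time density at the lower boundary
f(t \mid v, a, w) = \frac{\pi}{a^{2}}
  \exp\!\left(-v a w - \frac{v^{2} t}{2}\right)
  \sum_{k=1}^{\infty} k \,
  \exp\!\left(-\frac{k^{2} \pi^{2} t}{2 a^{2}}\right)
  \sin(k \pi w)
```

Approximation methods differ mainly in which series they evaluate and in how many terms they need to reach a given error tolerance, which is what the benchmarks described above compare.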
Cognitive modelling results should be robust across reasonable data-analysis decisions. For parameter estimation, two essential decisions concern the aggregation of data (e.g., complete pooling or partial pooling) and the statistical framework (frequentist or Bayesian). The combination of these decision options spans a multiverse of estimation methods. We analysed (a) the magnitude and (b) possible sources of divergence between different parameter estimation methods for nine popular multinomial processing tree (MPT) models (e.g., source monitoring, implicit attitudes, hindsight bias). We synthesized data from 13,956 participants (from 142 published studies) and examined divergence in core model parameters between nine estimation methods that adopt different levels of pooling within different statistical frameworks. Divergence was partly explained by uncertainty in parameter estimation (larger standard error = larger divergence), the value of the parameter estimate (parameter estimate near the boundary = larger divergence), and structural dependencies between parameters (larger maximal parameter trade-off = larger divergence). Notably, divergence was not explained by participant heterogeneity, a result that is unexpected given the previous emphasis on heterogeneity when choosing particular estimation methods over others. Instead, our synthesis suggests that other, idiosyncratic aspects of the MPT models also play a role. To increase the transparency of MPT modelling results, we propose adopting a multiverse approach.
Many researchers agree that people can detect regularities in their environment and adapt their behavior accordingly in the absence of awareness. The presumed unconscious effect of stimuli, contingencies, or rules on learning has been shown in a variety of paradigms (e.g., repetition priming, contextual cueing, unconscious conditioning, artificial grammar learning). Evidence that learning was indeed unconscious sometimes requires accepting the null hypothesis that participants were unaware of the regularities (the indirect-without-direct-effect data pattern). As null-hypothesis significance testing is a poor method for demonstrating the absence of an effect, one can instead regress the learning measure on the awareness measure, so that a significant intercept would be understood as successful learning without awareness (Greenwald, Klinger, & Schuh, 1995). However, the relationship between predictor and criterion variable is frequently biased by their respective low reliabilities. In particular, ignoring measurement error in the predictor variable will attenuate the regression slope towards zero, which in turn could raise a true zero intercept above zero. As a solution, Klauer, Draine, and Greenwald (1998) suggested a correction method for predictor variables with rational zero points (such as d’) in the framework of errors-in-variables regression. In a series of simulations, we show that their method still overestimates true zero intercepts. As an alternative, we suggest that researchers (a) use a generative Bayesian regression approach that takes the uncertainty of both the predictor and the criterion variable into account and (b) calculate Bayes factors to test the crucial intercept.
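As a hedged illustration only (this is not the authors' code; all variable names, priors, and simulated numbers are our own assumptions, and a full version would also model error in the criterion), a generative Bayesian regression that treats the awareness measure as a latent, error-contaminated predictor might look roughly like this in PyMC3:

```python
import numpy as np
import pymc3 as pm

# Simulated example data (made up for illustration): a zero-centred awareness
# measure (d') observed with known error, and a learning measure as criterion.
rng = np.random.default_rng(1)
n = 80
d_true = rng.normal(0.0, 0.3, n)                # latent awareness (rational zero point)
se_d = np.full(n, 0.5)                          # assumed known measurement error
d_obs = rng.normal(d_true, se_d)                # observed, error-contaminated predictor
learning = rng.normal(0.2 + 0.8 * d_true, 0.4)  # observed criterion (learning measure)

with pm.Model() as eiv_regression:
    # Measurement model: latent 'true' predictor values generate the noisy observations
    d_latent = pm.Normal("d_latent", mu=0.0, sigma=1.0, shape=n)
    pm.Normal("d_measured", mu=d_latent, sigma=se_d, observed=d_obs)

    # Structural model: regress the learning measure on the latent predictor
    intercept = pm.Normal("intercept", mu=0.0, sigma=1.0)
    slope = pm.Normal("slope", mu=0.0, sigma=1.0)
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    pm.Normal("learning_obs", mu=intercept + slope * d_latent,
              sigma=sigma, observed=learning)

    trace = pm.sample(2000, tune=2000, target_accept=0.9,
                      return_inferencedata=True)
```

The quantity of interest is then the posterior of the intercept at the rational zero point of the predictor; a Bayes factor for the intercept could, for instance, be computed by comparing prior and posterior density at zero (a Savage-Dickey style test), although the specific test proposed in the talk may differ.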
Many decision-making theories are formalized as a class of processes known as evidence accumulation models (EAMs). These assume that noisy evidence stochastically accumulates until a set threshold is reached, triggering a decision. One of the most successful and widely used models of this class is the drift-diffusion model (DDM). The DDM, however, is limited in scope and does not account for processes such as evidence leakage, changes of evidence, or time-varying caution. More complex EAMs can encode a wider array of hypotheses, but are currently limited by computational challenges. In this work, we develop the Python package PyBEAM (Bayesian Evidence Accumulation Models) to fill this gap. Toward this end, we develop a general probabilistic framework for predicting the choice and response time distributions of a general class of binary decision models. In addition, we have heavily optimized this modeling process computationally and integrated it with PyMC3, a widely used Python package for Bayesian parameter estimation. This (1) substantially expands the class of EAMs to which Bayesian methods can be applied, (2) reduces the computational time required to do so, and (3) lowers the barrier to entry for working with these models. I will demonstrate the concepts behind this methodology, show parameter recovery for a variety of models, and apply the package to a recently published data set to demonstrate its practical use.
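To make concrete the kind of process such models describe (this sketch is not PyBEAM's API, and all parameter values are invented for illustration), a leaky accumulator with linearly collapsing decision bounds can be simulated forward with a simple Euler-Maruyama scheme:

```python
import numpy as np

def simulate_leaky_accumulator(drift=1.0, leak=0.5, noise=1.0,
                               bound0=1.5, collapse=0.3,
                               dt=0.001, t_max=5.0, rng=None):
    """Forward-simulate one trial of a leaky accumulator with symmetric,
    linearly collapsing bounds. Returns (choice, rt), where choice is +1
    (upper bound), -1 (lower bound), or 0 if no bound is hit before t_max."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while t < t_max:
        bound = max(bound0 - collapse * t, 0.0)  # time-varying (collapsing) caution
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
        # Euler-Maruyama step: the leak term pulls accumulated evidence back toward zero
        x += (drift - leak * x) * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return 0, t_max

# Crude choice/RT distributions from repeated forward simulation
rng = np.random.default_rng(2)
trials = [simulate_leaky_accumulator(rng=rng) for _ in range(1000)]
upper_rts = [rt for choice, rt in trials if choice == +1]
print(f"P(upper) = {len(upper_rts) / len(trials):.2f}, "
      f"mean upper RT = {np.mean(upper_rts):.3f} s")
```

According to the abstract, PyBEAM itself provides a general probabilistic framework for predicting these choice and response time distributions and is heavily optimized; the simulation above only illustrates the underlying process, not the package's estimation machinery.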