Evidence Accumulation: Race Models
Brian Odegaard
Prof. Andrew Heathcote
Dr. Peter Kvam
We evaluate the metacognitive properties of confidence and how response scales of different resolution affect them. Specifically, we used the Multiple Threshold Race model (MTR; Reynolds et al., 2021) to understand the placement of confidence boundaries and how they change across scales of different resolutions. In two studies, we tested how accurate people's confidence judgements are on a simple perceptual task: participants were asked to judge whether there were more blue or orange dots in a dynamic cluster and then to rate how confident they were in that judgement. We manipulated the scale resolution so that it had 3, 4, 5, 6, 11, or 21 levels, and also included a scale with continuous resolution. We further manipulated task difficulty and included a speed-accuracy manipulation (whether faster or more accurate responding was encouraged). Results show that reaction times follow Hick's law (Hick, 1952) under standard conditions but violate the law under time pressure. Difficulty also affected participants' responses: people were generally overconfident in high-difficulty conditions, but their overconfidence decreased as the resolution of the response scale increased. By modeling the data with the MTR model, we aim to understand the cognitive processes that constitute confidence judgements.
This is an in-person presentation on July 20, 2024 (10:00 ~ 10:20 CEST).
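The Hick's law pattern reported above can be illustrated with a toy calculation: mean RT grows linearly in the logarithm of the number of response alternatives. The sketch below is purely illustrative; the intercept and slope values are hypothetical, not estimates from this study.

```python
import math

def hicks_law_rt(n_alternatives: int, intercept: float = 0.2, slope: float = 0.15) -> float:
    """Predicted mean RT (seconds) under Hick's law: RT = a + b * log2(n + 1).

    The +1 reflects the additional uncertainty of whether to respond at all.
    Intercept and slope here are hypothetical values for illustration only.
    """
    return intercept + slope * math.log2(n_alternatives + 1)

# Predicted RT should grow linearly in log2(n + 1) as scale resolution increases.
for n in (3, 4, 5, 6, 11, 21):
    print(f"{n:2d} levels -> predicted RT {hicks_law_rt(n):.3f} s")
```

Under time pressure, the abstract reports that this log-linear relationship breaks down, i.e. observed RTs would depart from the line this function predicts.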
Hans Colonius
Dr. Adele Diederich
The stop signal task is a popular tool for studying response inhibition. Participants perform a response time task (go task) and, occasionally, the go stimulus is followed by a stop signal after a variable delay, indicating that subjects should withhold their response (stop task). In the stimulus-selective version of the task, two different signals can be presented after the go signal, and subjects must stop if one of them occurs (stop signal), but not if the other occurs (ignore signal). A major challenge in modeling is the unobservability of stop signal processing when stopping is successful. In the dominant model, performance is modeled as a race between two stochastically independent random variables representing go and stop signal processing. An important prediction of all independent race models is that the distribution of reaction times to the go signal when no stop signal is present lies below the distribution of go reaction times on trials where a stop signal is presented after a given time interval (stop signal delay). In previous work based on the statistical concept of copulas, we have shown that observed violations of this prediction can be accounted for by dropping the stochastic independence assumption (Colonius, Jahansa, Joe & Diederich, CBB 2023). Here, we present further results on the distribution inequality for stochastically dependent race models with different types of marginal distributions and corresponding copulas.
This is an in-person presentation on July 20, 2024 (10:20 ~ 10:40 CEST).
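The distribution inequality for the independent race model can be checked in a small simulation: signal-respond RTs are go finishing times that beat the stop process, so their empirical distribution function dominates the no-signal distribution at every time point. The distributions and parameter values below are illustrative choices, not the models or fits discussed in the abstract.

```python
import random

random.seed(1)

def simulate(n_trials=100_000, ssd=0.2):
    """Simulate an independent race between go and stop processing.

    The go and stop finishing times are drawn independently; a response is
    produced on a stop-signal trial only if the go process finishes before
    the stop process, which starts at the stop-signal delay (SSD).
    Distributions and parameters are illustrative, not fitted to data.
    """
    no_signal_rts, signal_respond_rts = [], []
    for _ in range(n_trials):
        go = random.lognormvariate(-0.7, 0.4)      # go finishing time (s)
        stop = ssd + random.expovariate(1 / 0.15)  # stop finishing time (s)
        no_signal_rts.append(go)                   # same go process, no stop signal
        if go < stop:                              # go wins the race -> response
            signal_respond_rts.append(go)
    return no_signal_rts, signal_respond_rts

def ecdf(sample, t):
    """Empirical distribution function of `sample` evaluated at t."""
    return sum(x <= t for x in sample) / len(sample)

ns, sr = simulate()
# Independent-race prediction: F_no_signal(t) <= F_signal_respond(t) for all t.
for t in (0.3, 0.5, 0.7, 0.9):
    print(f"t={t:.1f}s  F_no_signal={ecdf(ns, t):.3f}  F_signal_respond={ecdf(sr, t):.3f}")
```

Introducing dependence between the two finishing times (e.g. via a copula, as in the cited work) can reverse this inequality, which is what makes observed violations diagnostic.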
Dr. Jamal Amani Rad
In decision-making, the Levy flight model (LFM), an extension of the diffusion decision model, adopts a heavy-tailed noise distribution whose pivotal 'alpha' parameter controls the shape of the tail. This study critically examines the theoretical foundations of alpha, emphasizing that its test-retest reliability is essential to classify it as a measure of cognitive style. Our analysis confirms the alpha parameter's test-retest reliability across various occasions and tasks, supporting its role as a trait-like characteristic. The study also explores interrelations among the LFM parameters: although correlations among the other parameters are low (suggesting they capture distinct aspects of the data), there is a pattern of strong correlation between alpha and threshold. Investigating practice effects, our analyses indicate a consistent decrease in non-decision time, threshold, and often alpha across sessions, alongside an increase in drift rate. We also employ BayesFlow for parameter estimation, evaluating its precision with different trial counts. These findings provide valuable guidelines for future LFM research.
This is an in-person presentation on July 20, 2024 (10:40 ~ 11:00 CEST).
In the last decades, the diffusion model (Ratcliff, 1978) has become a standard model for fast binary decisions, as it is able to map data from many different cognitive tasks. The diffusion model assumes that binary decisions are based on continuous evidence accumulation with constant drift and Gaussian noise. However, it has recently been suggested that models with heavy-tailed noise distributions provide a better fit, especially for fast perceptual decisions. These so-called Levy Flight models of decision making are characterized by jumps in evidence accumulation. In the present study, the goodness-of-fit of the standard diffusion model and the Levy Flight model is compared for four different tasks. Specifically, participants had to assess the direction of arrows (perceptual task) or the odd/even status of numbers (numerical task). Both tasks were administered in a single-stimulus condition and a multiple-stimulus condition, where, in the latter, the task was to indicate the dominating stimulus type. Following previous results, we expected more jumpiness in evidence accumulation for the easier conditions (i.e., the arrow task and the single-stimulus condition). Results confirmed these assumptions.
This is an in-person presentation on July 20, 2024 (11:00 ~ 11:20 CEST).
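The contrast between Gaussian diffusion and Lévy flight accumulation can be sketched with a minimal simulation. The version below draws symmetric alpha-stable noise with the Chambers-Mallows-Stuck method (alpha = 2 recovers Gaussian noise up to scale; alpha < 2 produces the heavy-tailed jumps described above). All parameter values are illustrative assumptions, not fits from the study.

```python
import math
import random

random.seed(7)

def stable_noise(alpha: float) -> float:
    """One symmetric alpha-stable draw via the Chambers-Mallows-Stuck method.

    alpha = 2 is Gaussian (up to scale); alpha < 2 has heavy tails.
    """
    u = random.uniform(-math.pi / 2, math.pi / 2)
    w = random.expovariate(1.0)
    return (math.sin(alpha * u) / math.cos(u) ** (1 / alpha)) * \
           (math.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha)

def accumulate(alpha: float, drift=0.5, threshold=1.0, dt=0.001, max_t=5.0):
    """One trial of evidence accumulation toward the +/- threshold.

    Noise increments scale with dt ** (1 / alpha), the stable-process
    analogue of the sqrt(dt) scaling in a Gaussian diffusion. Parameter
    values here are illustrative, not fitted.
    """
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + stable_noise(alpha) * dt ** (1 / alpha)
        t += dt
    return t, (1 if x >= threshold else 0)

# alpha = 2.0: classic diffusion; alpha = 1.5: Levy flight with occasional
# large jumps in the accumulated evidence.
for alpha in (2.0, 1.5):
    rts = [accumulate(alpha)[0] for _ in range(200)]
    print(f"alpha={alpha}: mean simulated RT {sum(rts) / len(rts):.3f} s")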
Mr. Lukas Schumacher
Stefan Radev
Andreas Voss
Individuals may employ diverse decision-making strategies, and the Levy Flight (LF) model, developed by Voss et al. (2019), accommodates these variations through a fat-tailed process of evidence accumulation. Although the Diffusion Model (DM) is commonly used to model binary decision-making, we propose that in certain instances the LF model could be a more faithful representation of the data-generating process. We aim to investigate whether a bias exists when the true data-generating model is the LF model and the DM is employed to interpret the data, and vice versa. To investigate this, we conducted an extensive simulation study using simulation-based inference with neural networks as implemented in the BayesFlow framework, an approach suitable for models lacking analytical likelihood functions, such as the LF model. Another aspect of our study examined potential biases in the neural network estimates. To assess this, Stan was utilized as a benchmark for the neural estimators. A comparison of parameter estimates for the standard DM between BayesFlow and Stan revealed a close correspondence for both DM and LF data, thereby validating our methodology against a strong baseline. In terms of our substantive question, both BayesFlow and Stan revealed nearly identical estimation biases when fitting the DM to LF data: non-decision time was underestimated, boundary separation and starting point were overestimated for fast responses, and drift rate estimation deteriorated as drift rate increased. These results suggest that neural networks can closely approximate the true posteriors of the DM, but these posteriors may exhibit notable biases when estimating the core DM parameters from LF-like data.
This is an in-person presentation on July 20, 2024 (11:20 ~ 11:40 CEST).
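The general idea of mapping behavioral summaries to DM parameters can be illustrated in miniature with the closed-form EZ-diffusion equations (Wagenmakers et al., 2007), a simple method-of-moments stand-in; this is not the BayesFlow/Stan pipeline used in the study, and the summary statistics below are hypothetical inputs for illustration.

```python
import math

def ez_diffusion(pc: float, vrt: float, mrt: float, s: float = 0.1):
    """Closed-form EZ-diffusion estimates (Wagenmakers et al., 2007).

    pc  - proportion correct (must not be exactly 0, 0.5, or 1)
    vrt - variance of correct RTs (s^2)
    mrt - mean of correct RTs (s)
    s   - scaling parameter (0.1 by convention)

    Returns (drift rate v, boundary separation a, non-decision time Ter).
    This is a transparent method-of-moments stand-in, not the Bayesian
    estimation (BayesFlow / Stan) used in the study above.
    """
    L = math.log(pc / (1 - pc))                    # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = math.copysign(1, pc - 0.5) * s * x ** 0.25
    a = s**2 * L / v
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - math.exp(y)) / (1 + math.exp(y))  # mean decision time
    return v, a, mrt - mdt

# Hypothetical summary statistics, for illustration only:
v, a, ter = ez_diffusion(pc=0.802, vrt=0.112, mrt=0.723)
print(f"v={v:.3f}, a={a:.3f}, Ter={ter:.3f}")
```

Feeding such an estimator data generated by a heavy-tailed LF process rather than a true diffusion is the kind of model-mismatch exercise the abstract describes, with the crucial difference that the study quantifies the resulting biases on full posteriors rather than point estimates.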