Posters: Decision Making & Evidence-Accumulation Models
Hau-Hung Yang
In classical psychophysics, the study of thresholds and their underlying representations is of theoretical interest, and the related problem of finding the stimulus (intensity) corresponding to a given threshold level is an important topic. In the literature, researchers have developed various adaptive (also known as ‘up-down’) methods, including fixed step-size and variable step-size methods, for threshold estimation. A common feature of this family of methods is that the stimulus assigned to the current trial depends upon the participant’s response in the previous trial(s), and very often a Yes-No response format is adopted. A well-known early example of a variable step-size adaptive method is the Robbins-Monro process. However, previous studies have paid little attention to other facets of the response (beyond the Yes-No response variable) that could be embedded in the Robbins-Monro process. This study concerns a generalization of the Robbins-Monro process that incorporates additional response variables, such as response confidence, into the process. We first prove the consistency of the generalized method and explore the conditions under which the proposed method achieves (at least) the same efficiency as the original method. We then conduct a Monte Carlo simulation study to explore some finite-sample properties of the estimator obtained from the generalized method and compare its performance with that of the original method.
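For readers unfamiliar with the procedure, the basic Robbins-Monro update for threshold estimation can be written as follows (a standard formulation from the stochastic-approximation literature; the notation is ours, not the authors'). The generalization described above would replace the binary response with a richer variable, such as graded confidence.

```latex
% Robbins-Monro update for the stimulus level at which the probability
% of a Yes response equals a target level phi. Z_n is the binary Yes-No
% response on trial n and c > 0 is a step-size constant; under mild
% regularity conditions, X_n converges to the phi-level threshold.
\[
  X_{n+1} = X_n - \frac{c}{n}\,(Z_n - \phi),
  \qquad Z_n \in \{0,1\},\; c > 0 .
\]
```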
Ian Mackenzie
Rolf Ulrich
Hartmut Leuthold
Markus Janczyk
In conflict tasks, such as the Simon, Eriksen flanker, or Stroop task, the congruency effect is often reduced after an incongruent compared to a congruent trial: the congruency sequence effect (CSE). It has been suggested that the CSE may reflect increased processing of task-relevant information and/or suppression of task-irrelevant information after experiencing an incongruent relative to a congruent trial. In the present study, we contribute to this discussion by applying the Diffusion Model for Conflict tasks (DMC) framework to CSEs in flanker and Simon tasks. We argue that DMC models the task-relevant and task-irrelevant information independently and is thus a good first candidate for disentangling their unique contributions. As a first approach, we fitted DMC conjointly or separately to previously congruent or incongruent trials, using four empirical flanker and two Simon data sets. For the flanker task, we fitted the classical DMC version. For the Simon task, we fitted a generalized DMC version that allows the task-irrelevant information to undershoot when swinging back to zero. After considering the model fits, we present a second approach, in which we implemented a cognitive control mechanism to simulate the influence of increased processing of task-relevant information or increased suppression of task-irrelevant information. Both approaches demonstrate that the suppression of task-irrelevant information is essential to create the typical CSE pattern. Increased processing of task-relevant information, in contrast, rarely described the CSE accurately.
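As background, the DMC data-generating process can be sketched in a few lines: the superimposed diffusion combines a constant controlled drift with a pulse-shaped automatic drift whose sign depends on congruency. This is a minimal simulation sketch with illustrative parameter values in the range reported by Ulrich et al. (2015), not the authors' fitting code; non-decision time and starting-point variability are omitted for brevity.

```python
import numpy as np

def simulate_dmc(congruent, n_trials=5000, mu_c=0.5, zeta=20.0,
                 tau=30.0, a_shape=2.0, bound=75.0, sigma=4.0,
                 dt=1.0, max_t=1500, rng=None):
    """Minimal DMC simulation (time in ms). The automatic process adds a
    gamma-shaped drift pulse that is positive on congruent and negative
    on incongruent trials; the controlled process has constant drift.
    Trials that do not terminate within max_t are dropped."""
    rng = rng or np.random.default_rng(1)
    sign = 1.0 if congruent else -1.0
    t = np.arange(dt, max_t + dt, dt)
    # expected automatic activation and its time derivative (the drift pulse)
    act = zeta * np.exp(-t / tau) * (t * np.e / ((a_shape - 1) * tau)) ** (a_shape - 1)
    drift = mu_c + sign * act * ((a_shape - 1) / t - 1 / tau)
    rts, correct = [], []
    for _ in range(n_trials):
        noise = sigma * np.sqrt(dt) * rng.standard_normal(t.size)
        x = np.cumsum(drift * dt + noise)       # accumulator from 0
        hit = np.flatnonzero(np.abs(x) >= bound)  # bounds at +/- bound
        if hit.size:
            rts.append(t[hit[0]])
            correct.append(x[hit[0]] > 0)       # upper bound = correct
    return np.asarray(rts), np.asarray(correct)

rt_c, acc_c = simulate_dmc(congruent=True)
rt_i, acc_i = simulate_dmc(congruent=False)
print(f"congruent:   RT {rt_c.mean():.0f} ms, acc {acc_c.mean():.2f}")
print(f"incongruent: RT {rt_i.mean():.0f} ms, acc {acc_i.mean():.2f}")
```

The control mechanisms discussed above would act on these two components: suppression scales down the automatic pulse, whereas enhanced task-relevant processing raises the controlled drift.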
Shanqing Gao
When we perceive the world, visual input follows a meaningful structure, which can be considered the “grammar” of the visual scene: the relationships between scenes have been learnt implicitly from the rules of the world, much like learning a language. For example, we expect to see a hallway – rather than a parking lot – when leaving an office. In the present study, we aim to disentangle the effect of expectations on the scene gist recognition process. In each trial of the experiment, participants will see a sequence of visual scenes, where the earlier pictures serve as primes for the later ones. The experimental design comprises three within-subject factors: we manipulate whether there is a scene category change at the superordinate level (i.e., a change from indoor to outdoor), whether there is a basic-level scene change (e.g., a change from office to classroom), and whether there is high or low expectancy for the target scene. In the high-expectancy condition, the primes and target will be displayed in a spatiotemporally coherent sequence, whereas in the low-expectancy condition, primes and the target will be shown in a randomized sequence. We hypothesize that high expectancy will facilitate gist extraction and thus improve the speed and accuracy of scene categorization. Further, we expect the facilitation to manifest in a higher rate of evidence accumulation (i.e., drift rate) under high than under low expectancy. We also test for an effect of scene category change on the starting point of the diffusion process. To test these hypotheses, we fit a diffusion model with condition-dependent parameters using a hierarchical Bayesian parameter estimation procedure.
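These hypotheses translate directly into condition-dependent diffusion parameters. The following is a minimal sketch under our assumptions (the parameter values are invented for illustration; the study itself uses hierarchical Bayesian estimation rather than forward simulation):

```python
import numpy as np

def simulate_ddm(v, z, a=1.0, sigma=1.0, dt=1e-3, t0=0.3, n=2000, rng=None):
    """Euler simulation of diffusion trials: drift v, relative starting
    point z in (0, 1), boundary separation a, non-decision time t0 (s)."""
    rng = rng or np.random.default_rng(0)
    rts, upper = [], []
    for _ in range(n):
        x, t = z * a, 0.0
        while 0.0 < x < a:
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + t0)
        upper.append(x >= a)
    return np.asarray(rts), np.asarray(upper)

# Hypotheses as parameter differences (values illustrative):
# high expectancy -> larger drift toward the correct category;
# a scene-category repetition would instead shift the starting point z.
rt_hi, _ = simulate_ddm(v=2.5, z=0.5)
rt_lo, _ = simulate_ddm(v=1.5, z=0.5)
print(f"high expectancy mean RT: {rt_hi.mean():.3f} s")
print(f"low expectancy mean RT:  {rt_lo.mean():.3f} s")
```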
Dr. Blair Shevlin
Roger Ratcliff
Prof. Ian Krajbich
The Diffusion Decision Model (DDM) is an effective tool for studying human decision-making across various domains (Krajbich, 2019; Ratcliff et al., 2016). In practice, including across-trial variability parameters allows the model to account for a variety of behavioral patterns, including fast errors, slow errors, and crossover effects (Ratcliff & Rouder, 1998; Ratcliff & Tuerlinckx, 2002; Van Zandt & Ratcliff, 1995). In this study, we are interested in using the DDM to fit data from many participants but with few observations per participant. In this setting, the across-trial variability parameters of the original model effectively become across-participant variability parameters. Typically, across-trial variability has been estimated for the drift rate (sv), starting point (sz), and non-decision time (st) parameters (Boehm et al., 2018; Ratcliff & Childers, 2015; Ratcliff & Tuerlinckx, 2002). However, we know that different participants have different boundary separation values. To account for this, we modify the DDM to include across-trial variability in boundary separation (sa). Through simulation, we demonstrate that across-trial variability in boundary separation can produce distinct patterns, including fast errors, a reduction in the fastest response quantiles, and an increase in the slowest response quantiles. We next demonstrate the parameter's identifiability by successfully recovering across-trial variability in boundary separation across an extensive set of parameter values. Ultimately, this study provides initial support for the feasibility of using across-trial variability in boundary separation to examine group-level parameters with only a few observations per participant.
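A minimal simulation sketch of the proposed mechanism (our code, with illustrative parameter values, not the authors' implementation): boundary separation is redrawn on every trial from a uniform distribution of width sa, and the resulting RT quantiles can be compared with the fixed-boundary case.

```python
import numpy as np

def ddm_trial(v, a, z_rel=0.5, sigma=1.0, dt=1e-3, rng=None):
    """Single diffusion trial; returns the decision time in seconds."""
    x, t = z_rel * a, 0.0
    while 0.0 < x < a:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

def rt_quantiles(v=1.0, a_mean=1.2, sa=0.0, n=4000, seed=7):
    """RT quantiles with boundary separation drawn uniformly on
    [a_mean - sa/2, a_mean + sa/2] anew on every trial; sa = 0
    recovers the standard fixed-boundary model."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(a_mean - sa / 2, a_mean + sa / 2, size=n)
    rts = np.array([ddm_trial(v, a_i, rng=rng) for a_i in a])
    return np.quantile(rts, [0.1, 0.5, 0.9])

# With sa > 0, the fastest quantile shrinks and the slowest grows,
# the pattern described in the abstract.
print("sa = 0.0:", np.round(rt_quantiles(sa=0.0), 3))
print("sa = 0.8:", np.round(rt_quantiles(sa=0.8), 3))
```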
Giorgio Gronchi
Franco Bagnoli
Maria Pia Viggiano
The quantum cognition approach employs the mathematics of quantum theory to develop models of cognition and decision making. This theory predicts the quantum Zeno effect: when the state of a system is observed continuously, the evolution of the state slows down, because the quantum state is less likely to change if measurements are taken at brief intervals. In the quantum cognition framework, the state vector represents the current cognitive state. When a judgement is made, the vector collapses onto the corresponding axis. Over time, the vector oscillates, moving away from the axis. When the same question is asked multiple times, the shorter the interval, the nearer the state vector will be to the axis. This implies a higher probability of giving the same response, resulting in high coherence of judgement. We tested this prediction with two scenarios describing a hypothetical person and asking for a judgement about him/her. We presented two clues about this individual's characteristics at a time, on three occasions. A total of 3241 participants completed the task, online and in person. We manipulated the time interval between judgments (immediate vs. 30 minutes), the availability of the previous responses given by each participant, and the social desirability of showing coherence. In both scenarios, we observed an interaction effect between the time interval and the availability of information about previous responses: coherence was reduced in the 30-minute condition, but only when information about previous responses was unavailable. Results are discussed in light of the comparison between the quantum cognition framework and the classical approach, as well as the cognitive processing underlying the coherence of judgements.
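In a two-dimensional sketch of the underlying mathematics (our notation, not the authors' specific model), the prediction follows from the rotation of the state vector between measurements:

```latex
% After a judgement, the state lies on the response axis; between
% questions it rotates at angular frequency omega. The probability of
% repeating the same response after a delay t is cos^2(omega t), so
% shorter intervals yield higher judgement coherence, and n repeated
% measurements within a fixed window T freeze the state in the limit.
\[
  P(\text{same}\mid t) = \cos^{2}(\omega t),
  \qquad
  \left[\cos^{2}\!\left(\tfrac{\omega T}{n}\right)\right]^{n} \to 1
  \quad (n \to \infty).
\]
```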
Many decision-making tasks are characterized by a combination of diagnostic and non-diagnostic information, yet models of responding and confidence focus almost exclusively on the contribution of diagnostic information (e.g., evidence associated with stimulus discriminability), largely ignoring the contribution of non-diagnostic information. An exception, Baranski and Petrusic’s (1998) doubt-scaling model, predicts a negative relationship between non-diagnostic information and confidence, and between non-diagnostic information and accuracy. In two perceptual-choice tasks, we tested the effects of manipulating non-diagnostic information on confidence, accuracy, and reaction time (RT). In Experiment 1 (N=56), participants viewed a dynamic grid consisting of flashing blue, orange, and white pixels and indicated whether the stimulus was predominantly blue or orange (using a response scale ranging from low-confidence blue to high-confidence orange), with the white pixels constituting non-diagnostic information. Increasing non-diagnostic information reduced both confidence and accuracy, generally slowed RTs, and increased the speed of errors. Experiment 2 (N=20) was a near-exact replication of Experiment 1, except that participants were not asked to provide a confidence rating; this was to determine whether making a decision and providing a confidence rating simultaneously influenced choice behaviour. Like the first experiment, Experiment 2 found that increasing non-diagnostic information reduced accuracy and generally slowed RTs (with an increase in the speed of errors), providing further support for the doubt-scaling model of confidence.
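To make the stimulus manipulation concrete, a single frame of such a display can be sampled as below (a hypothetical sketch: the grid size and exact pixel proportions are our assumptions, not the reported design):

```python
import numpy as np

def grid_frame(p_blue, p_white, size=(32, 32), rng=None):
    """Sample one frame of the flashing-pixel stimulus: each cell is
    blue, orange, or white. p_white sets the proportion of
    non-diagnostic pixels; the blue/orange split among the remaining
    cells carries the diagnostic signal."""
    rng = rng or np.random.default_rng()
    p_orange = 1.0 - p_blue - p_white
    return rng.choice(["blue", "orange", "white"], size=size,
                      p=[p_blue, p_orange, p_white])

# More non-diagnostic information at the same blue:orange ratio:
low_nd = grid_frame(p_blue=0.33, p_white=0.40)   # 0.33 vs 0.27 diagnostic
high_nd = grid_frame(p_blue=0.22, p_white=0.60)  # 0.22 vs 0.18 diagnostic
```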
Pablo Leon Villagra
Making decisions often requires generating a list of candidate choices from memory and considering the value of the generated candidates before making a decision. This poses a dilemma: at which point should we stop generating candidates and choose from the current options? Models of multiattribute choice suggest that more extended evidence accumulation results in less noisy evidence, so the value assigned to candidate choices should more closely resemble one's true preferences. This account predicts that longer intervals of generating candidate choices should result in higher satisfaction with one's choices. However, research in consumer psychology has highlighted that, in many cases, having access to more options can be detrimental to choice satisfaction (Scheibehenne et al., 2006). It is therefore plausible that considering more candidates when generating potential choices results in lower choice satisfaction. Here, we perform the first systematic analysis of choice candidate generation and the resulting choice satisfaction. We first investigate the relationship between the number of candidates produced before deciding and the resulting choice satisfaction, expecting that people who list more options end up less satisfied with their choice. In a second experiment, we will further explore this effect by manipulating the number of candidate options participants generate before deciding. Finally, we will extend current models of choice (Bhatia et al., 2021) to build a model that can produce diminished choice satisfaction with larger numbers of candidate options. Our approach rests on the idea that longer consideration durations increase the likelihood of more heterogeneous candidates, making comparing and evaluating these candidates more complex. It combines current models of memory search with decision-making and choice satisfaction, allowing us to shed light on the processes that govern everyday decision-making.
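As a deliberately simple illustration of the hypothesized mechanism (a toy model of our own construction, not the authors' or Bhatia et al.'s model), suppose candidates are chosen on noisy value estimates and satisfaction is penalized by the cost of comparing a larger, more heterogeneous consideration set:

```python
import numpy as np

def simulate_choice(n_candidates, pref_sd=1.0, noise_sd=0.5,
                    comparison_cost=0.05, rng=None):
    """Toy model: candidates have true values; the decision maker picks
    the option with the highest *noisy* estimate. Satisfaction is the
    chosen option's true value minus a cost that grows with the size of
    the consideration set. All functional forms are illustrative."""
    rng = rng or np.random.default_rng(3)
    true_vals = rng.normal(0.0, pref_sd, n_candidates)
    noisy = true_vals + rng.normal(0.0, noise_sd, n_candidates)
    chosen = np.argmax(noisy)
    return true_vals[chosen] - comparison_cost * n_candidates

for k in (2, 5, 10, 20):
    sat = np.mean([simulate_choice(k, rng=np.random.default_rng(s))
                   for s in range(2000)])
    print(f"{k:2d} candidates: mean satisfaction {sat:.2f}")
```

In this toy model, mean satisfaction first rises and then falls as the candidate set grows, mirroring the predicted detriment of generating many candidates.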
Dr. Randolph Helfrich
Beliefs and expectations, or priors, shape our perception of the environment (Gold & Stocker, 2017). In an ever-changing world, priors must be flexibly and continuously integrated into sensory decision processes to guide adaptive behavior. Nonetheless, the cognitive mechanisms underlying this integration are not well understood. The Drift Diffusion Model (DDM) is a widely used model for studying visual decision-making (Gold & Shadlen, 2007; Ratcliff & Rouder, 1998). Previous studies have shown that priors can increase the starting point of evidence accumulation and the drift rate (Dunovan & Wheeler, 2018; Dunovan, Tremel, & Wheeler, 2014; Thakur, Basso, Ditterich, & Knowlton, 2021). However, these studies often overlook the potential effects of priors on the decision threshold and non-decision time parameters. The goal of this study was to dissociate the effects of priors on multiple cognitive mechanisms in visual decisions. Specifically, I tested how the strength of prior beliefs affects: (a) the integration of momentary sensory evidence; (b) the amount of evidence required to decide; (c) processes preceding stimulus presentation; and (d) non-evidence-accumulation effects.

For the present study, eight participants completed a behavioral task that required tracking cue validity across trials and using the cue information flexibly. The task combined a reversal-learning task and a random-dot motion discrimination task and involved three main decisions per trial: cue choice, confidence, and motion direction. After choosing one of the two possible cues (orange vs. blue), participants judged how confident (low vs. high) they were that the chosen cue would turn out to be invalid. Then, participants were shown the cue's motion direction and subsequently judged the motion direction of the random dots. The cue direction was displayed with a predetermined but unknown validity. Each participant completed a maximum of 320 trials, which were divided into informative and non-informative blocks. The length of an informative block varied from 15 to 30 trials. The validity of the cue in informative blocks was set at 80% or 30%, while the validity of the cues in non-informative blocks was set at 30% for both cues. At the end of each trial, participants received rewards for their motion judgment and cue choice. The reward for the cue choice depended on the confidence reported earlier in the trial.

To evaluate the validity of the trial-wise belief estimates, we tested whether belief strength was associated with confidence and with the true contingency. Belief strength was higher when participants reported high confidence in their cue choice (t(7) = 5.31, p = .001). Furthermore, when belief strength was high, participants chose the best cue for the block more often than when belief strength was low (t(7) = 24.52, p < .001). Altogether, these findings provide evidence for the validity of trial-wise measures of belief strength. Regarding the effect of belief strength (or prior strength), the posterior estimates of the cognitive models show that the strength of belief affects multiple aspects of visual decision-making. When the cue was valid, stronger beliefs increased the drift rate (the rate of evidence accumulation, 95% HDI = [1.09, 2.2]), increased the response bias towards the direction indicated by the cue (95% HDI = [.056, .228]), increased the threshold (the amount of evidence needed to reach a decision, 95% HDI = [.019, .33]), and reduced non-decision time (secondary processes involved in decision execution, 95% HDI = [.06, .11]).
In contrast, when the cue was invalid, stronger beliefs had the opposite effects on these parameters. Overall, belief strength modulates the DDM parameters depending on the accuracy of the belief on a given trial. The main goal of this study was to behaviorally dissociate the effects of belief on visual decision-making using trial-wise estimates of belief strength. Effects on drift rate are thought to reflect the ramping of activity in parietal regions that scales with the strength of evidence (Hanks et al., 2015). In the present study, the effect of belief strength on the drift rate is congruent with biased evidence sampling driven by post-decisional confidence (Rollwage et al., 2020). Effects on the starting point are usually interpreted as a choice response bias (Dunovan et al., 2014; Dunovan & Wheeler, 2018). Such starting-point biases can result from a tendency to accept belief-congruent evidence, from motor preparation (de Lange, Rahnev, Donner, & Lau, 2013), or even from an increase in the sensitivity of low-level sensory representations before stimulus presentation (Kok, Failing, & de Lange, 2014). Although the DDM does not dissociate between these subcomponents, it is possible to constrain them neurophysiologically (Harris & Hutcherson, 2022). Effects on the evidence-accumulation threshold are associated with speed-accuracy trade-offs (Bogacz, Wagenmakers, Forstmann, & Nieuwenhuis, 2010). In the present study, we observed an effect of belief on the decision threshold, suggesting that belief strength increases the amount of evidence that needs to be accumulated when the belief is congruent with the visual input. This effect might reflect a compensatory mechanism that maintains high accuracy when the belief is invalid on a particular trial. The non-decision time parameter has often been neglected in the literature; nonetheless, it may reflect important processes. For example, the latency of N200 potentials, which is associated with the encoding of visual stimuli, seems to track non-decision times (Nunez, Gosai, Vandekerckhove, & Srinivasan, 2019). The effect on non-decision time found in this study could emerge from the evidence-encoding onset, the evidence-accumulation onset, or post-decision motor execution time (Kelly, Corbett, & O’Connell, 2021). In the future, we will leverage the temporal dynamics of decision-making using neurophysiological recordings to constrain and dissociate these parameters (Harris & Hutcherson, 2022).
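The direction of the reported effects can be summarized as a trial-wise parameter mapping. The following sketch is our own summary with invented coefficients, not the fitted model; it simply encodes the signs of the effects described above, with all modulations reversing when the cue is invalid:

```python
import numpy as np

def trial_params(belief, cue_valid,
                 v0=1.0, a0=1.2, z0=0.5, t0=0.35,
                 b_v=0.8, b_a=0.15, b_z=0.10, b_t=0.05):
    """Trial-wise DDM parameters modulated by belief strength in the cue
    (belief in [0, 1]). On valid-cue trials, stronger belief -> higher
    drift, bias toward the cued direction, higher threshold, shorter
    non-decision time; on invalid-cue trials the modulations reverse.
    Baseline values and coefficients are illustrative only."""
    sign = 1.0 if cue_valid else -1.0
    v = v0 + sign * b_v * belief      # rate of evidence accumulation
    a = a0 + sign * b_a * belief      # boundary separation (threshold)
    z = z0 + sign * b_z * belief      # relative starting point (bias)
    t_nd = t0 - sign * b_t * belief   # non-decision time
    return v, a, z, t_nd

print(trial_params(belief=0.9, cue_valid=True))
print(trial_params(belief=0.9, cue_valid=False))
```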
Dr. Dora Matzke
Dr. Michael D. Nunez
Prof. Andrew Heathcote
Amortized Bayesian Inference (ABI) is an emerging technique that improves on the ideas of approximate Bayesian computation by integrating non-parametric model learning via deep learning, requiring only a data-generating model. This makes it a promising approach for cognitive modelling, where more complex models often lack closed-form likelihoods and therefore preclude standard Markov chain Monte Carlo (MCMC) approaches to parameter estimation. However, while simulation studies show promising convergence for cognitive models with ABI, it is not clear which conditions are needed to ensure its usefulness. Furthermore, it is often not clear whether marginal and joint posterior estimates are true reflections of Bayesian inference for complex statistical models. The presented research investigates how ABI compares to MCMC-based methods in the context of cognitive models of the stop-signal paradigm. Specifically, we investigated convergence and computational effort in ABI as implemented by BayesFlow (Radev et al., 2020) compared to the MCMC-based BEESTS as implemented by the DMC R package (Matzke et al., 2013; Heathcote et al., 2019). We present numerical comparisons, and draw conclusions and take-aways for the application of ABI to (ex-Gaussian) models of stop-signal tasks.
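The key practical requirement of ABI is a forward simulator. A minimal sketch of an ex-Gaussian go/stop race simulator of the kind BEESTS assumes is shown below (our simplification: a single fixed stop-signal delay and no trigger failures). ABI would train a neural network on many (parameter, simulated-data) pairs drawn from the prior, whereas BEESTS evaluates the race likelihood within MCMC.

```python
import numpy as np

def simulate_stop_trials(theta, n_go=300, n_stop=100, ssd=0.25, rng=None):
    """Forward-simulate a stop-signal task under an ex-Gaussian race:
    go and stop finishing times are each Gaussian + exponential (s), and
    a response occurs on a stop trial only if the go runner finishes
    before stop-signal delay + stop finishing time. theta =
    (mu_go, sigma_go, tau_go, mu_stop, sigma_stop, tau_stop)."""
    mu_g, sig_g, tau_g, mu_s, sig_s, tau_s = theta
    rng = rng or np.random.default_rng(42)
    exgauss = lambda mu, sig, tau, n: (rng.normal(mu, sig, n)
                                       + rng.exponential(tau, n))
    go_rt = exgauss(mu_g, sig_g, tau_g, n_go)          # go-trial RTs
    go_on_stop = exgauss(mu_g, sig_g, tau_g, n_stop)
    stop_finish = ssd + exgauss(mu_s, sig_s, tau_s, n_stop)
    signal_respond = go_on_stop < stop_finish          # failed inhibition
    return go_rt, go_on_stop[signal_respond]

go_rt, sr_rt = simulate_stop_trials((0.45, 0.05, 0.08, 0.20, 0.03, 0.04))
print(f"P(respond | stop signal): {sr_rt.size / 100:.2f}  (n_stop = 100)")
```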
Leendert Van Maanen
A hurdle preventing Evidence Accumulation Models (EAMs) from wide utilization in applied settings, where individuals cannot (or will not) provide many repeated decisions, is the large number of observations they demand. In this project, we investigated whether Bayesian hierarchical modeling approaches offer a solution: we hypothesized that informative prior distributions decrease these sample-size demands to numbers that are obtainable in practice. Through a simulation study and a reanalysis of experimental data, we explored the lower limit on the sample size needed to reliably estimate individual participants’ data-generating parameters. In the simulation study, we first compared the effects of various sample sizes and types of prior distributions (uninformative; informative and accurate; informative but inaccurate) on the estimation of parameters for Diffusion Decision Models (DDMs), a class of EAMs. Results revealed that several DDM parameters can be recovered with sample sizes as small as 10 if the prior is informative and accurate. However, especially for very small sample sizes, the type of prior distribution was critically important. Subsequently, we assessed the effect of sample size on parameter recovery under more realistic circumstances by reanalyzing data from a driving experiment. We tested how well parameters can be recovered from only a few observations of a single participant when the data of the remaining participants provide informative prior distributions. For most assessed DDM parameters (drift rate, boundary separation, and bias, but not non-decision time), we achieved satisfactory levels of parameter recovery with 20 observations. Additionally, we confirmed that including these 20 observations meaningfully updated the prior distributions towards the ground truth. This work opens the door for reliable estimation of decision-making processes under real-life circumstances (e.g., when individuals cannot provide many repeated decisions, or when we are interested in real-time estimation of parameter fluctuations to monitor changes in people’s mental states).
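The mechanism at work can be shown in miniature with a conjugate normal-normal example (a toy illustration of our own, not the authors' hierarchical DDM): with only ten observations, an informative and accurate prior pulls the estimate toward the truth, whereas an informative but inaccurate prior biases it.

```python
import numpy as np

def posterior_mean(x, prior_mu, prior_sd, lik_sd=1.0):
    """Conjugate normal-normal posterior mean for a participant-level
    parameter (say, a drift rate) observed with noise lik_sd."""
    n = len(x)
    prec = 1 / prior_sd**2 + n / lik_sd**2
    return (prior_mu / prior_sd**2 + x.sum() / lik_sd**2) / prec

rng = np.random.default_rng(0)
true_v = 2.0
x = rng.normal(true_v, 1.0, size=10)          # only 10 observations

print("uninformative prior   :", round(posterior_mean(x, 0.0, 100.0), 2))
print("informative, accurate :", round(posterior_mean(x, 2.0, 0.3), 2))
print("informative, wrong    :", round(posterior_mean(x, -1.0, 0.3), 2))
```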