PyBEAM: A Bayesian approach to parameter inference for a wide class of binary evidence accumulation models.
Many decision-making theories are encoded in a class of processes known as evidence accumulation models (EAMs). These assume that noisy evidence stochastically accumulates until a set threshold is reached, triggering a decision. One of the most successful and widely used models of this class is the drift-diffusion model (DDM). The DDM, however, is limited in scope and does not account for processes such as evidence leakage, changes of evidence, or time-varying caution. More complex EAMs can encode a wider array of hypotheses but are currently limited by the computational challenges of fitting them to data. In this work, we develop the Python package PyBEAM (Bayesian Evidence Accumulation Models) to fill this gap. Toward this end, we develop a general probabilistic framework for predicting the choice and response time distributions of a wide class of binary decision models. In addition, we have heavily optimized this modeling process computationally and integrated it with PyMC3, a widely used Python package for Bayesian parameter estimation. This 1) substantially expands the class of EAMs to which Bayesian methods can be applied, 2) reduces the computational time needed to do so, and 3) lowers the barrier to entry for working with these models. I will demonstrate the concepts behind this methodology, show its application to parameter recovery for a variety of models, and apply it to a recently published data set to demonstrate its practical use.
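To make the idea of pairing an accumulation-to-threshold likelihood with PyMC3 concrete, here is a minimal, hypothetical sketch of Bayesian parameter recovery for a simple DDM. It does not use PyBEAM's actual API: the parameter names (v, a, w, t0), priors, simulation settings, and the hand-coded Wiener first-passage-time likelihood (standard large-time series expansion) are all illustrative assumptions, not the package's implementation.

```python
# Hypothetical sketch only -- NOT the PyBEAM API. Illustrates the general idea of
# combining a first-passage-time likelihood with PyMC3's samplers for a simple DDM.
import numpy as np
import pymc3 as pm
import theano.tensor as tt

rng = np.random.default_rng(0)

# ---- 1) Simulate choice/RT data from a drift-diffusion process (Euler-Maruyama) ----
v_true, a_true, w_true, t0_true = 1.0, 1.5, 0.5, 0.3   # drift, boundary, start point, non-decision time
dt, n_trials = 1e-3, 500
rt_obs, choice_obs = [], []
for _ in range(n_trials):
    x, t = a_true * w_true, 0.0
    while 0.0 < x < a_true:                      # noisy evidence accumulates to a boundary
        x += v_true * dt + np.sqrt(dt) * rng.standard_normal()
        t += dt
    rt_obs.append(t + t0_true)
    choice_obs.append(1 if x >= a_true else 0)   # 1 = upper boundary, 0 = lower boundary
rt_obs, choice_obs = np.array(rt_obs), np.array(choice_obs)

# ---- 2) Wiener first-passage-time log-likelihood (large-time series approximation) ----
# A full implementation would switch to the small-time series for very fast RTs.
def wfpt_logp(rt, choice, v, a, w, t0, K=10):
    t = tt.maximum(rt - t0, 1e-4)                          # decision time
    v_ = tt.switch(tt.eq(choice, 1), -v, v)                # reflect parameters for upper-boundary hits
    w_ = tt.switch(tt.eq(choice, 1), 1.0 - w, w)
    k = tt.arange(1, K + 1).dimshuffle(0, "x")             # series index, shape (K, 1)
    series = tt.sum(k * tt.exp(-(k ** 2) * np.pi ** 2 * t / (2.0 * a ** 2))
                    * tt.sin(k * np.pi * w_), axis=0)
    logp = (np.log(np.pi) - 2.0 * tt.log(a)
            - v_ * a * w_ - 0.5 * v_ ** 2 * t
            + tt.log(tt.maximum(series, 1e-12)))
    return tt.sum(logp)

# ---- 3) Bayesian inference with PyMC3 ----
with pm.Model():
    v = pm.Normal("v", mu=0.0, sigma=2.0)
    a = pm.HalfNormal("a", sigma=2.0)
    w = pm.Beta("w", alpha=2.0, beta=2.0)
    t0 = pm.Uniform("t0", lower=0.0, upper=rt_obs.min())
    pm.Potential("wfpt", wfpt_logp(rt_obs, choice_obs, v, a, w, t0))
    trace = pm.sample(1000, tune=1000, target_accept=0.9, return_inferencedata=True)

print(pm.summary(trace, var_names=["v", "a", "w", "t0"]))
```

The posterior summary can then be compared against the generating values (v=1.0, a=1.5, w=0.5, t0=0.3) to gauge parameter recovery, which is the kind of check the abstract describes performing for a variety of models.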
Very interesting work! When recovering the parameters of the classic DDM, there seemed to be some bias in the parameter recovery. Although the posterior distributions include the true parameters, the modes (and likely means) were different to the true parameters. Is this true for just a particular dataset, or does this hold over many datasets? And ...