Decision Making
Stephen Broomell
Forecasts are generated by both human experts and statistical models, and their forecast accuracy can be understood using error decompositions. However, the assumptions that underlie decompositions used in the analysis of human error differ substantially from those used in the analysis of models. The lens model, one of the most popular error decompositions for human errors, treats the beliefs of the human forecaster as fixed parameters to be estimated. Modern decompositions of model error treat the model as a random result of the process of fitting to noisy data. We highlight how these different approaches can be combined, expanding the application of the lens model to groups and opening up new perspectives on the study of human forecasting. We argue that treating human beliefs as the result of a process of learning from noisy data (even without specifying that process) can help to explain many documented phenomena in the world of forecasting, such as which kinds of environments human judgment handles well or poorly, and which conditions underlie the success of bootstrapping and of aggregating independent forecasts. Just as treating statistical models as random variables has helped improve the understanding of error in statistics and machine learning, we believe this framework can help guide the literature on human judgment toward a better understanding of error, its determinants, and the mechanisms capable of improving forecasting accuracy.
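A minimal simulation may help make the bias-variance reading of this argument concrete. The sketch below is not the authors' analysis: it assumes a linear cue environment, judges who each learn regression weights from their own small noisy sample, and parameter values chosen only for illustration. Averaging the resulting independent forecasts reduces the variance component of error, one condition under which aggregation pays off.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed environment: the criterion is a noisy linear function of three cues
n_cues, n_train, n_test, n_judges = 3, 25, 500, 200
true_w = np.array([0.6, 0.3, 0.1])
X_test = rng.normal(size=(n_test, n_cues))
y_test = X_test @ true_w + rng.normal(scale=0.5, size=n_test)

# Each "judge" learns cue weights from an independent, small, noisy sample,
# so each judge's policy is itself a random result of the learning process
preds = np.empty((n_judges, n_test))
for j in range(n_judges):
    X = rng.normal(size=(n_train, n_cues))
    y = X @ true_w + rng.normal(scale=0.5, size=n_train)
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # judge's learned weights
    preds[j] = X_test @ w_hat

mse_individual = ((preds - y_test) ** 2).mean()              # average single judge
mse_aggregate = ((preds.mean(axis=0) - y_test) ** 2).mean()  # averaged forecast
print(f"single-judge MSE: {mse_individual:.3f}, aggregate MSE: {mse_aggregate:.3f}")
```

The aggregate forecast outperforms the typical individual judge here only because the judges' errors of estimation are independent; correlated training samples would shrink that advantage, which is the kind of environmental condition the abstract points to.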
Models of continuous response tasks have been gainfully applied across a variety of perceptual and preferential choice paradigms. One benefit of these approaches is that they can serve as a general case of binary and multi-alternative choice models by dividing the continuum of evidence they produce into discrete regions corresponding to separate responses. Using these more general approaches elucidates “hidden” mechanisms of binary and multiple choice models that are often built into drift rates, thresholds, or parameter variability. In this talk, I present three empirical phenomena related to these hidden mechanisms, examining the constraints that they place on continuous models being applied to multi-alternative and binary choice. First, different choice options should be able to have different stopping rules (thresholds) based on their degree of similarity to other alternatives in the choice set. Second, continuous models must contain different mechanisms for different “drift rate” manipulations such as stimulus coherence, stimulus-response match, and the discriminability (confusability) of different response options. And third, continuous models must be able to store evidence for response options that are outside the initial choice set and map it onto new response options when they appear during a trial. Each of these constraints is imposed by an empirical phenomenon: participants in three experiments showed greater accuracy and faster response times for dissimilar response alternatives in a set; diverging effects of discriminability, coherence, and match manipulations; and efficient re-mapping of evidence when new choice options were introduced partway through an experimental trial.
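As a loose illustration of the first constraint (option-specific stopping rules on a partitioned evidence continuum), the toy simulation below divides a circular evidence dimension into discrete response regions and lets each region carry its own threshold. The regions, thresholds, and sampling distribution are assumptions for demonstration, not the model discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed response regions on a circular evidence continuum (radians),
# each with its own threshold: a dissimilar option gets a lower threshold
# than a pair of mutually similar options.
options = {
    "A": {"lo": -0.5, "hi": 0.5, "threshold": 20.0},  # dissimilar alternative
    "B": {"lo": 2.0, "hi": 2.6, "threshold": 30.0},   # similar pair
    "C": {"lo": 2.6, "hi": 3.2, "threshold": 30.0},
}

def simulate_trial(target_angle=0.1, kappa=2.0, dt=1.0, max_steps=2000):
    """Accumulate noisy continuous samples; respond when the evidence falling
    inside an option's region exceeds that option's threshold."""
    totals = {name: 0.0 for name in options}
    for step in range(1, max_steps + 1):
        sample = rng.vonmises(mu=target_angle, kappa=kappa)  # continuous evidence
        for name, opt in options.items():
            if opt["lo"] <= sample < opt["hi"]:
                totals[name] += dt
            if totals[name] >= opt["threshold"]:
                return name, step
    return None, max_steps

choices = [simulate_trial() for _ in range(200)]
p_a = np.mean([c == "A" for c, _ in choices])
mean_rt = np.mean([t for _, t in choices])
print(f"P(choose A) = {p_a:.2f}, mean steps = {mean_rt:.1f}")
```

Remapping evidence to a newly introduced option would amount to adding a new region (and threshold) mid-trial while keeping the already-sampled continuum values, which is the third constraint in spirit.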
Rui Ponte Costa
Jeff Bowers
Casimir Ludwig
Gaurav Malhotra
Evidence integration models such as the drift-diffusion model (DDM) are extremely successful in accounting for reaction-time distributions and error rates in decision making. However, these models do not explain how the evidence, represented by the drift, is extracted from the stimuli. Models of low-level vision, such as template-matching models, propose mechanisms by which evidence is generated but do not account for RT distributions. We propose a model of the perceptual front-end, implemented as a Deep Generative Model, that learns to represent visual inputs in a low-dimensional latent space. Evidence in favour of different choices can be gathered by sampling from these latent variables and feeding the samples to an integration-to-threshold model. Under some weak assumptions, this architecture implements a sequential probability ratio test (SPRT) and can therefore provide an end-to-end computational account of reaction-time distributions as well as error rates. In contrast to DDMs, this model explains how drift and diffusion rates arise rather than inferring them from behavioural data. We show how to generate predictions from this model for perceptual decisions in visual noise, and how these predictions depend on different architectural constraints and on the learning history. The model thus explains both how evidence is generated from any given input and how architectural constraints and learning shape this process; these effects can then be measured through the observed error rates and reaction-time distributions. We expect this approach to bridge the gap between the complementary, yet rarely interacting, literatures of decision making, visual perceptual learning, and low-level vision/psychophysics.
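A minimal sketch of the two-stage architecture described above, with the deep generative front-end stubbed out: latent samples are drawn from assumed class-conditional Gaussians (standing in for the learned latent space), and their log-likelihood ratios are accumulated to a boundary, SPRT-style. Drift then emerges from the separation of the latent distributions rather than being a fitted parameter. All distributions and boundaries below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stubbed "front-end": in the full model these samples would come from a trained
# deep generative model's latent space; here they are assumed Gaussian latents
# whose class-dependent means determine the emergent drift rate.
def latent_sample(stimulus_class, noise_sd=1.0):
    mu = 0.3 if stimulus_class == 1 else -0.3
    return rng.normal(mu, noise_sd)

def log_likelihood_ratio(z, mu1=0.3, mu0=-0.3, sd=1.0):
    # log p(z | class 1) - log p(z | class 0) for the assumed Gaussian latents
    return ((z - mu0) ** 2 - (z - mu1) ** 2) / (2 * sd ** 2)

def sprt_trial(stimulus_class, upper=2.0, lower=-2.0, max_steps=10_000):
    """Accumulate log-likelihood ratios of latent samples until a boundary is hit."""
    llr = 0.0
    for t in range(1, max_steps + 1):
        llr += log_likelihood_ratio(latent_sample(stimulus_class))
        if llr >= upper:
            return 1, t
        if llr <= lower:
            return 0, t
    return None, max_steps

trials = [sprt_trial(stimulus_class=1) for _ in range(1000)]
acc = np.mean([c == 1 for c, _ in trials])
mean_rt = np.mean([t for _, t in trials])
print(f"accuracy = {acc:.3f}, mean RT = {mean_rt:.1f} steps")
```

In this toy version, widening the gap between the latent means (or training a better front-end) increases the per-sample log-likelihood ratio, which is exactly the sense in which drift is explained rather than inferred.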
Mark Pitt
Prof. Jay I. Myung
Delay discounting is a preferential choice task that measures the rate at which individuals discount future rewards. Many of the models proposed for this task fail to describe the full range of behavior participants can exhibit. One reason is that most models assume a simple monotonic relationship in which future rewards are discounted more heavily as the delay increases. The lack of flexibility of these models can expose experiments to biases introduced by model misspecification. To address this problem, we propose a nonparametric Bayesian approach, a Gaussian Process with active learning (GPAL), for modeling delay discounting. A Gaussian Process model is fit to the data while, on each trial, highly informative experimental designs are selected based on responses from earlier trials. Results show that GPAL is an efficient and reliable framework capable of capturing patterns that prominent models are insensitive to. In particular, we identified two such patterns that were systematically observed in our data and analyzed them in detail. These patterns reveal properties that violate common normative assumptions made by virtually all parametric models of discounting and have therefore rarely been discussed in the literature. We offer possible explanations for these atypical choice behaviors and propose enhancements to existing parametric models motivated by those explanations.
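The following sketch illustrates the general fit-then-select loop of Gaussian Process active learning, not the specific GPAL implementation: a GP over indifference points is refit after each response, and the next delay is chosen where posterior uncertainty is largest (the actual design criterion used in GPAL may differ). The simulated participant, kernel, and delay range are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Assumed design space: candidate delays in days
candidate_delays = np.linspace(1, 365, 200).reshape(-1, 1)

def simulate_response(delay):
    """Stand-in participant: hyperbolic discounting of a delayed reward,
    reported as an indifference point with response noise."""
    k = 0.01
    return 1.0 / (1.0 + k * delay) + rng.normal(scale=0.05)

X, y = [], []
next_delay = float(rng.choice(candidate_delays.ravel()))  # random first design
for trial in range(15):
    X.append([next_delay])
    y.append(simulate_response(next_delay))
    # Refit the GP to all responses so far
    gp = GaussianProcessRegressor(kernel=RBF(50.0) + WhiteKernel(0.01))
    gp.fit(np.array(X), np.array(y))
    # Active learning step: probe where the posterior is most uncertain
    mean, sd = gp.predict(candidate_delays, return_std=True)
    next_delay = float(candidate_delays[np.argmax(sd)])

print("sampled delays:", np.round(np.array(X).ravel(), 1))
```

Because the GP places no parametric form on the discounting curve, the fitted posterior mean can express non-monotonic or otherwise irregular patterns of the kind the abstract reports parametric models missing.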