Bias, Beliefs, & Errors
Mr. Amirreza Bagherzadehkhorasani
Human errors can have significant consequences in domains such as healthcare, transportation, and finance, so the ability to identify and mitigate them is critical for the safety and well-being of individuals and organizations. Cognitive models can inform system designers about the potential for errors, and replacing users with models that simulate their behavior has been a long-standing vision in interface design. However, while cognitive models can simulate users' cognitive processes and behaviors, they cannot fully interact with user interfaces or simulate all types of behavior, and they currently lack the ability to detect and mitigate human errors; much progress remains to be made in incorporating error handling. This work proposes two approaches to the generation and detection of human errors in cognitive models: the Deterministic Error Cognitive Model and the Automatic Error Cognitive Model. We model user errors in Microsoft Excel, a widely used application that plays an important role in many industries and in tasks such as data analysis. Errors that users make while working with Excel spreadsheets can lead to financial losses, faulty analyses, and other negative outcomes. We utilized existing data from a behavioral study in which 23 participants performed a spreadsheet task in Excel, focusing on errors that arise from participants' typing. Understanding the cognitive processes that contribute to typing errors in Excel can help improve the software's design, user training programs, and error detection methods. Studying user errors in a task can also help identify common mistakes users make while completing it and inform strategies to prevent or mitigate them, and identifying sub-tasks with higher error rates can help us design task instructions that reduce users' cognitive load and the likelihood of errors. The performance of both models is compared with human data; while both show a high correlation, the Automatic Error Cognitive Model predicts user behavior with a lower error rate. Furthermore, both models operate on the same interface that users interact with, utilizing a vision and motor extension tool called VisiTor (Vision + Motor).
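The abstract does not describe how the two models produce errors internally. As a minimal sketch, assuming a keystroke-level typing loop, the contrast between the two approaches might look as follows (all names, characters, and parameters are hypothetical, and the VisiTor interface layer is omitted):

```python
import random

TARGET = "=SUM(B2:B14)"  # hypothetical formula a participant must type

def deterministic_error_model(target, error_positions=(3,)):
    """Deterministic error model (sketch): errors are injected at
    pre-specified keystroke positions chosen by the modeler."""
    typed = []
    for i, ch in enumerate(target):
        if i in error_positions:
            typed.append(random.choice("QWERTYUIOP"))  # scripted slip
        else:
            typed.append(ch)
    return "".join(typed)

def automatic_error_model(target, slip_prob=0.03):
    """Automatic error model (sketch): every keystroke carries a small
    probability of a motor slip, so errors emerge on their own."""
    typed = []
    for ch in target:
        if random.random() < slip_prob:
            typed.append(random.choice("QWERTYUIOP"))  # stochastic slip
        else:
            typed.append(ch)
    return "".join(typed)

random.seed(1)
print(deterministic_error_model(TARGET))
print(automatic_error_model(TARGET))
```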
This is an in-person presentation on July 20, 2023 (15:20 ~ 15:40 UTC).
Jennifer Trueblood
Prof. Bill Holmes
Daniel Martin
Andrew Caplin
An accurately labeled dataset is required to train a neural network successfully on a classification task. These labels are typically deterministic, corresponding to some ground truth, and during training a neural network learns an input-output mapping that maximizes the probability of the ground-truth label for each stimulus. But what about tasks where ground truth is difficult to obtain? We introduce the use of incentive-compatible belief elicitation for labeling data and training machine learning models. Extending the work of Hasan et al. (2023), we harness the wisdom of the crowd through elicited beliefs, and evaluate these methods in an experiment in which participants stated their belief that a white blood cell was cancerous for a series of cell images. We then trained different neural networks to classify the white blood cell images, some using deterministically labeled images and others using the probabilistically labeled dataset obtained through elicited beliefs, and compared classification accuracy and calibration across the networks.
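Operationally, training on elicited beliefs amounts to replacing the hard 0/1 target in the loss with the crowd's stated probability. A minimal sketch, assuming a binary cross-entropy loss and illustrative numbers rather than the authors' actual networks:

```python
import numpy as np

def binary_cross_entropy(p_model, target):
    """Per-image cross-entropy; target may be hard (0/1) or soft (a probability)."""
    eps = 1e-12
    p_model = np.clip(p_model, eps, 1 - eps)
    return -(target * np.log(p_model) + (1 - target) * np.log(1 - p_model))

p_model = 0.70    # network's predicted probability that the cell is cancerous
hard_label = 1.0  # deterministic ground-truth label
soft_label = 0.80 # elicited crowd belief that the cell is cancerous

print(binary_cross_entropy(p_model, hard_label))  # gradient pushes the network toward 1.0
print(binary_cross_entropy(p_model, soft_label))  # pushes it toward 0.8, preserving uncertainty
```

With soft targets the loss is minimized when the network reproduces the crowd's probability, which is why calibration, not just accuracy, is a natural comparison between the two training regimes.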
This is an in-person presentation on July 20, 2023 (15:40 ~ 16:00 UTC).
Dr. Hawal Shamon
How do people revise their opinions when exposed to a balanced and diverse information diet? By combining a balanced-argument experiment with a computational theory of argument communication, we shed new light on this question. Empirical studies have repeatedly examined whether biased processing of balanced arguments leads to more extreme attitudes and contributes to polarization tendencies, but the evidence remains mixed. Two forces counteract one another in such a balanced-argument setting: first, a moderating effect of being exposed to arguments from both sides; second, a polarizing effect of filtering the information mix in favor of existing beliefs (biased processing). Our theoretical model takes into account that biased processing may come in degrees. Drawing on the theory, we develop an artificial experiment, a computational miniature of the real one, and analytically derive a response function for the expected attitude changes. This function contains the strength of biased processing (β) as a free parameter. Theoretical analysis reveals a sharp transition from attitude moderation to polarization, indicating that small, domain-specific variations in the strength of biased processing may result in qualitatively different patterns of attitude change, both of which are consistent with our theory. In the empirical experiment (N = 1078), individuals are exposed to an equal share of 7 pro and 7 counter arguments regarding 6 different technologies for energy production (N > 170 for each), with attitudes measured before and after exposure. Using these data, we estimate the strength of biased processing for the six topics. While the processing bias is in the regime of attitude moderation for gas and biomass, it is significantly higher and in the regime of polarization for coal, wind (onshore and offshore), and solar power. If time permits, we will discuss the implications of these results for group deliberation processes.
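The derived response function itself is not given in the abstract. Purely as an illustration of the mechanism, assuming that arguments congruent with the current attitude are accepted with a sigmoidal probability governed by β, iterating the expected update exhibits the transition from moderation to polarization (the functional form and all parameters below are hypothetical, not the authors' derivation):

```python
import numpy as np

def expected_update(a, beta, n_args=7, step=0.1):
    """Hypothetical expected attitude change after a balanced batch of
    n_args pro and n_args con arguments. Each argument is accepted with
    a sigmoidal probability governed by beta (strength of biased
    processing); accepted arguments pull the attitude toward their pole."""
    accept_pro = 1.0 / (1.0 + np.exp(-beta * a))  # congruent when a > 0
    accept_con = 1.0 / (1.0 + np.exp(+beta * a))  # congruent when a < 0
    return step * n_args * (accept_pro * (1 - a) - accept_con * (1 + a))

for beta in (0.5, 4.0):
    a = 0.2  # mildly positive initial attitude
    for _ in range(200):
        a += expected_update(a, beta)
    print(f"beta = {beta}: attitude settles near {a:+.2f}")
```

In this toy dynamic, weak bias (beta = 0.5) drives the attitude back toward neutrality (moderation), while strong bias (beta = 4.0) drives it toward the extreme (polarization), mirroring the qualitative transition the abstract describes.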
This is an in-person presentation on July 20, 2023 (16:00 ~ 16:20 UTC).
Dr. Yiyun Shou
Dr. Junwen Chen
Bruce Christensen
Using the classic beads task, some research indicates that individuals with high anxiety may make hasty decisions based on less information (i.e., jump to conclusions) relative to healthy participants. However, the mechanisms underlying this psychopathology-related reasoning bias are not well understood. The present study investigated the causal effect of state anxiety on the jumping-to-conclusions bias and explored the underlying reasoning mechanisms using Bayesian computational modelling, focusing specifically on the assignment of evidence weights. Approximately 50 participants were recruited from a university setting and randomly allocated either to an anxiety-induction condition, in which they were told they would deliver a speech that would be evaluated, or to a control condition with no speech task. Participants also completed two variants of the beads task: the classic version and a social variant focusing on the accumulation of social-evaluative information to support decision making. The preliminary results suggested no significant differences in the number of beads sampled across experimental conditions. However, there were significant differences across conditions in how participants assigned evidence weights to the information sampled: participants in the anxiety condition exhibited a more cautious, slower belief-updating pattern, allocating significantly heavier weights to less frequently occurring information than those in the control condition. This pattern was observed in both the classic and social beads tasks. Importantly, the preliminary findings imply that anxiety promotes cautiousness in belief revision rather than a jumping-to-conclusions bias per se.
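The abstract does not report the model's exact parameterization. A common formalization of evidence weighting in the beads task scales each draw's log likelihood ratio before Bayesian updating; a minimal sketch with illustrative weights (all values hypothetical):

```python
import numpy as np

def posterior_prob(draws, p=0.8, w_red=1.0, w_blue=1.0):
    """Posterior probability that the mostly-red jar is the source after
    a sequence of draws (+1 = red, -1 = blue). Each draw's log likelihood
    ratio is scaled by an evidence weight, so infrequent information can
    be weighted more heavily than frequent information; equal weights of
    1.0 recover the ideal Bayesian observer."""
    log_lr = np.log(p / (1 - p))
    log_odds = sum((w_red if d > 0 else w_blue) * d * log_lr for d in draws)
    return 1.0 / (1.0 + np.exp(-log_odds))

draws = [+1, +1, -1, +1, +1]              # mostly red; blue is the rare draw
print(posterior_prob(draws))              # ideal observer: ~0.98
print(posterior_prob(draws, w_blue=2.5))  # heavier weight on rare information: ~0.89
```

Over-weighting the infrequent (disconfirming) draws keeps the posterior closer to chance for longer, which is the slower, more cautious belief-updating pattern described for the anxiety condition.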
This is an in-person presentation on July 20, 2023 (16:20 ~ 16:40 UTC).
Barbara Kreis
Dr. Hartmut Blank
Thorsten Pachur
When people estimate the quantities of objects (e.g., country populations), are then presented with the objects' actual quantities, and are subsequently asked to remember their initial estimates, their responses are often distorted towards the actual quantities. This hindsight bias, traditionally considered to reflect a cognitive error, has more recently been proposed to result from adaptive knowledge updating. But how should such knowledge-updating processes and their potentially beneficial consequences be conceptualized? Here we provide a methodological and analytical framework that conceptualizes knowledge updating in the context of hindsight bias in real-world estimation by formally connecting it with research on seeding effects, that is, improvements in people's estimation accuracy after exposure to numerical facts. This integrative perspective highlights a previously neglected facet of knowledge updating, namely the recalibration of metric domain knowledge, which can be expected to lead to transfer learning and thus improve estimation for objects from a domain more generally. We develop an experimental paradigm to investigate the association of hindsight bias with improved estimation accuracy; this paradigm allows for the joint measurement of both phenomena within the same formal approach. In Experiment 1, we demonstrate that the classical approach to triggering hindsight bias indeed produces transfer learning. In Experiment 2, we provide evidence for the novel prediction that hindsight bias can be triggered via transfer learning; this establishes a direct link from knowledge updating to hindsight bias. Our work integrates two prominent but previously unconnected research programs on the effects of knowledge updating in real-world estimation and supports the notion that hindsight bias is driven by adaptive learning processes.
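The abstract does not specify the measures used. As a minimal sketch of how the two phenomena can be quantified within one framework, a standard shift index for hindsight bias can sit alongside a simple accuracy-gain measure for seeding (all numbers below are hypothetical, not the authors' data):

```python
import numpy as np

# Hypothetical data for one participant (country populations, in millions)
original = np.array([30.0, 95.0, 12.0])  # initial estimates
actual   = np.array([47.0, 83.0, 17.0])  # presented facts
recalled = np.array([38.0, 88.0, 14.0])  # later recall of own estimates

# Hindsight bias: how far recall shifted from the original estimate
# toward the presented actual value (0 = no shift, 1 = full shift).
shift = (recalled - original) / (actual - original)
print("hindsight-bias shift:", shift.round(2))

# Seeding effect: accuracy gain on *new* objects from the same domain,
# measured here as median absolute percentage error before vs. after.
def ape(estimates, actuals):
    """Absolute percentage error per item."""
    return np.abs(estimates - actuals) / actuals

transfer_actual = np.array([38.0, 10.0, 67.0])
pre_seed  = np.array([60.0, 25.0, 40.0])  # estimates before exposure
post_seed = np.array([45.0, 14.0, 55.0])  # estimates after exposure
print("error before:", np.median(ape(pre_seed, transfer_actual)).round(2))
print("error after: ", np.median(ape(post_seed, transfer_actual)).round(2))
```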
This is an in-person presentation on July 20, 2023 (16:40 ~ 17:00 UTC).