
Improving machine learning model calibration using probabilistic labels obtained via wisdom of the crowd

Authors
Gunnar Epping
Indiana University Bloomington ~ Cognitive Science and Psychology
Jennifer Trueblood
Indiana University Bloomington ~ Department of Psychological and Brain Sciences
Bill Holmes
Indiana University ~ Cognitive Science
Daniel Martin
Andrew Caplin
Abstract

An accurately labeled dataset is required to successfully train a neural network on a classification task. These labels are typically deterministic, corresponding to some ground truth. During training, a neural network learns an input-output mapping that maximizes the probability of the ground truth label for each stimulus. But what about tasks where ground truth is difficult to obtain? We introduce the use of incentive-compatible belief elicitation for labeling data and training machine learning models. Extending the work of Hasan et al. (2023), we harness the wisdom of the crowd through elicited beliefs, evaluating these methods in an experiment in which participants stated their belief that a white blood cell was cancerous for a series of cell images. We then trained neural networks to classify the white blood cell images, some on the deterministically labeled images and others on the probabilistically labeled dataset obtained from the elicited beliefs, and compared classification accuracy and calibration across the networks.
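
To make the core idea concrete, below is a minimal sketch in PyTorch of the general approach the abstract describes: aggregating elicited beliefs into probabilistic labels, training against those soft labels with cross-entropy, and measuring calibration. The function names (make_soft_labels, soft_cross_entropy, expected_calibration_error) and the simple-averaging aggregation rule are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F


def make_soft_labels(elicited_beliefs: torch.Tensor) -> torch.Tensor:
    """Aggregate per-participant probability judgments (wisdom of the crowd)
    into one probabilistic label per image by simple averaging (an assumed
    aggregation rule, chosen here for illustration).

    elicited_beliefs: (n_participants, n_images) tensor of stated
        probabilities that each cell is cancerous, in [0, 1].
    Returns: (n_images, 2) tensor of [P(benign), P(cancerous)].
    """
    p_cancer = elicited_beliefs.mean(dim=0)  # crowd average per image
    return torch.stack([1.0 - p_cancer, p_cancer], dim=1)


def soft_cross_entropy(logits: torch.Tensor, soft_targets: torch.Tensor) -> torch.Tensor:
    """Cross-entropy against probabilistic targets instead of one-hot labels."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()


def expected_calibration_error(probs: torch.Tensor, labels: torch.Tensor,
                               n_bins: int = 10) -> float:
    """Standard ECE: bin predictions by confidence and compare each bin's
    mean confidence to its empirical accuracy, weighted by bin size."""
    conf, preds = probs.max(dim=1)
    correct = preds.eq(labels).float()
    ece = torch.zeros(1)
    bin_edges = torch.linspace(0, 1, n_bins + 1)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.float().mean() * (correct[mask].mean() - conf[mask].mean()).abs()
    return ece.item()


# Hypothetical usage: `model` is any classifier producing 2-class logits.
# beliefs = torch.rand(30, 100)           # e.g., 30 participants x 100 images
# targets = make_soft_labels(beliefs)     # (100, 2) probabilistic labels
# loss = soft_cross_entropy(model(images), targets)
```

Training against deterministic labels corresponds to the special case where each soft target is one-hot, so the comparison between the two labeling schemes requires no change to the loss function itself.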

Keywords

classification
decision-making
probability judgments
calibration
wisdom of the crowd
machine learning

Cite this as:

Epping, G. P., Trueblood, J., Holmes, W., Martin, D., & Caplin, A. (2023, July). Improving machine learning model calibration using probabilistic labels obtained via wisdom of the crowd. Abstract published at MathPsych/ICCM/EMPG 2023. Via mathpsych.org/presentation/1104.