Censor Detection: When and how do people generalize from censored evidence?
When do people infer that data has been “censored” from an evidence sample, and how do they respond? The present work examines 1) how people generalize from a smaller sample, which may have been subject to censoring, to a larger sample, 2) how inferences differ across sample distributions, and 3) how inferences differ with and without a censoring prompt. Participants sampled on-line quality ratings of a novel restaurant that followed several different distributions (e.g., bimodal, left-skewed), summarized in a frequency distribution figure. They then constructed their own frequency distribution for a larger “population” of ratings and answered questions about the trustworthiness/believability of the initial sample. Participants were more likely to “fill in” missing data when observations in the sample distribution were sparse (e.g., one-star ratings) or inconsistent with priors about distribution shape. Human responses were compared with the predictions of four computational models: one that reproduced the initial sample, a Bayesian model that assumed no censoring, a Bayesian “censoring” model, and a model that averages the empirical priors and the initial observations. The averaging model performed best but did not capture responses in the sparse-observation conditions. The results suggest that people factor in both their prior distributional beliefs and the observed sample data when generalizing from censored data.
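The averaging model is described only at a high level above; the following is a minimal sketch, not the authors' implementation, of one way such a model could be written. It assumes the predicted population distribution is an unweighted average of a normalized empirical prior over rating bins and the normalized observed sample frequencies; the function name, bin counts, and example numbers are all illustrative assumptions.

```python
import numpy as np

def averaging_model(prior_freqs, sample_freqs):
    """Average an empirical prior over rating bins with an observed sample.

    prior_freqs  -- assumed prior over 1-5 star ratings (counts or proportions)
    sample_freqs -- observed sample frequencies over the same rating bins
    Returns a normalized predicted distribution over the rating bins.
    """
    prior = np.asarray(prior_freqs, dtype=float)
    sample = np.asarray(sample_freqs, dtype=float)
    prior /= prior.sum()     # normalize prior to a probability distribution
    sample /= sample.sum()   # normalize observed counts the same way
    predicted = (prior + sample) / 2.0  # unweighted average of the two
    return predicted / predicted.sum()

# Hypothetical example: a left-skewed sample with no one-star ratings,
# so the prior "fills in" probability mass at the sparse low end.
prior = [0.10, 0.15, 0.20, 0.30, 0.25]  # assumed prior over 1-5 stars
sample = [0, 1, 2, 6, 11]               # illustrative observed rating counts
print(averaging_model(prior, sample))
```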