Numeric Cognition
Rebecca Albrecht
Prof. Bettina von Helversen
Jörg Rieskamp
Agnes Rosner
This work investigates the cognitive processes underlying quantitative judgments from multiple cues by combining cognitive modeling with eye tracking. People can judge an object’s criterion value based on the object’s similarity to previously experienced exemplars (similarity-based process) or by integrating the object’s cues in the manner of a linear regression (rule-based process). We test whether people who rely more on the similarity to exemplars, as indicated by cognitive modeling, also look more at the locations on the screen where the exemplars were shown. To this end, we conducted two eye-tracking studies in which the cues predicted the criterion in an additive (N = 19) or a multiplicative (N = 49) way. Participants first learned the criterion value and screen location of each of four exemplars; then they judged the criterion value of new, briefly presented test stimuli without feedback. Eye tracking measured participants’ gaze proportions to the now-blank exemplar locations (looking-at-nothing), and cognitive modeling with the RulEx-J framework quantified their reliance on a similarity-based over a rule-based process. We found greater reliance on similarity and more looking-at-nothing in the multiplicative study than in the additive study. Focusing on the multiplicative study, participants who relied more on the similarity to exemplars also looked more at the blank exemplar locations (r = 0.36, p = .01). Furthermore, looking-at-nothing was usually directed at a single exemplar that was similar to the test stimulus. These results show that combining model-based and process-tracing analyses can provide mutually supportive and complementary insights into the cognitive processes underlying quantitative judgments.
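For readers unfamiliar with RulEx-J, the sketch below illustrates the kind of mixture the abstract refers to: a judgment is a weighted blend of a linear rule module and a similarity-weighted exemplar module, with a single mixture weight indexing reliance on the similarity-based process. This is a minimal sketch, not the authors' exact specification; the exponential-decay similarity function, the city-block distance, and all parameter names are illustrative assumptions.

```python
import numpy as np

def rulex_j_prediction(probe, cue_weights, intercept,
                       exemplars, criteria, h, alpha):
    # Rule module: linear integration of the cues, like a regression.
    j_rule = intercept + probe @ cue_weights

    # Exemplar module: similarity-weighted average of the stored exemplar
    # criteria; similarity decays exponentially with city-block distance.
    distances = np.abs(exemplars - probe).sum(axis=1)
    similarities = np.exp(-h * distances)
    j_exemplar = similarities @ criteria / similarities.sum()

    # One mixture weight blends the two processes:
    # alpha = 1 -> pure rule use, alpha = 0 -> pure exemplar use.
    return alpha * j_rule + (1 - alpha) * j_exemplar

# Hypothetical example: four stored exemplars with binary cues and
# learned criterion values, probed with a new test stimulus.
exemplars = np.array([[1, 1, 0, 0],
                      [0, 1, 1, 0],
                      [1, 0, 1, 1],
                      [0, 0, 0, 1]], dtype=float)
criteria = np.array([10.0, 14.0, 19.0, 25.0])
probe = np.array([1.0, 1.0, 1.0, 0.0])
print(rulex_j_prediction(probe, cue_weights=np.array([3.0, 2.0, 4.0, 6.0]),
                         intercept=5.0, exemplars=exemplars,
                         criteria=criteria, h=1.0, alpha=0.4))
```

Fitting the mixture weight alpha per participant yields the kind of reliance measure that, in the multiplicative study, correlated with gaze to the blank exemplar locations.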
This is an in-person presentation on July 22, 2024 (11:40 ~ 12:00 CEST).
Dr. Alice Mason
Sebastian Olschewski
The ability to estimate the average value of a number stream is a fundamental aspect of information processing and a building block of value-based decisions. Yet research on average estimation has focused on the integration of numerical information from a single source. Here, we examined the estimation of averages when competing sources of information are presented. We tested two theories of numeric value integration: the Compressed Mental Number Line (CMNL), which predicts underestimation of averages independent of competing information, and Selective Integration (SI), which predicts that competing information interferes with the target information. Across three experiments, we found a significant underestimation of the averages and a limited impact of competing information on estimation. Computational modeling shows that the CMNL (together with an explicit noise theory) provides a better overall account of estimation behavior in our data than SI. However, about one third of our participants were best described by SI. Among these participants, the computational mechanism of SI consisted of an underweighting of lower numbers in local sample comparisons. Overall, our findings clarify the role of competing information in average estimation, and shed light on the exact cognitive process and the limitations of SI as a general theory of sequential information integration.
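A minimal sketch of the two candidate mechanisms may help. CMNL-style underestimation falls out of averaging on a compressed scale; SI-style interference comes from down-weighting target samples that are locally lower than the competing sample. The compression exponent beta, the selective weight w, and the specific parameterization are assumptions for illustration; the models reported above additionally include an explicit noise component.

```python
import numpy as np

def cmnl_estimate(target, beta=0.7):
    # Encode each number on a compressed (concave) scale, average there,
    # and decode the average back. For beta < 1 this underestimates the
    # true mean regardless of any competing stream (Jensen's inequality).
    target = np.asarray(target, dtype=float)
    return np.mean(target ** beta) ** (1.0 / beta)

def si_estimate(target, competitor, w=0.6):
    # Compare each target sample with the competitor sample shown at the
    # same moment; down-weight locally lower target samples by w before
    # averaging (w = 1 recovers an unbiased running average).
    target = np.asarray(target, dtype=float)
    competitor = np.asarray(competitor, dtype=float)
    weights = np.where(target >= competitor, 1.0, w)
    return np.sum(weights * target) / np.sum(weights)

rng = np.random.default_rng(0)
target = rng.uniform(10, 90, size=8)      # stream to be averaged
competitor = rng.uniform(10, 90, size=8)  # concurrently shown stream
print(target.mean(), cmnl_estimate(target), si_estimate(target, competitor))
```

Fitting both mechanisms (plus noise) per participant and comparing the fits corresponds to the model comparison the abstract reports.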
This is an in-person presentation on July 22, 2024 (12:00 ~ 12:20 CEST).
Pablo León-Villagrá
Johanna Falbén
Nick Chater
Prof. Adam Sanborn
In random generation tasks, participants generate items such as numbers unpredictably. Recently, we have proposed that, when doing these tasks, people employ their general ability to generate samples for inference, an approach resembling Markov chain Monte Carlo (MCMC) algorithms in computer science (Castillo et al., 2024). Consistent with this model, we have also found that people’s random samples approximate the ground-truth distribution well, which has led us to propose random generation as a technique for eliciting people’s beliefs (León-Villagrá et al., 2022). Several manipulations have previously been found to affect how random people can be. Here we explore two such manipulations and connect changes in people’s behaviour to changes in model parameters. In two experiments, we ask participants to generate samples from two naturalistic domains (lifespans and heights) and manipulate either the speed of production or the requirement to produce the samples randomly (within-participants design). Consistent with previous research (Towse, 1998), we find that people are less random when the production speed is higher. This difference is characterized quite well by the same MCMC algorithm reporting only every second sample in the slower condition. Perhaps surprisingly, we find little difference in people’s samples when items need only be reported as they “come to mind”: successive items are closer together, but other typical deviations from randomness do not change. We use these results to characterize people’s sampling engines, identifying which aspects of their sampling people can alter when the task changes, and which they cannot.
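As a rough illustration of the proposed sampling account, the sketch below generates items with a Metropolis-style sampler over a subjective distribution; reporting only every second internal sample (thin = 2) mimics the slower-production condition described above. The Gaussian proposal, the thinning mechanism, and all parameter names are illustrative assumptions rather than the exact model of Castillo et al. (2024).

```python
import numpy as np

def mcmc_generate(logpdf, n_items, proposal_sd=5.0, thin=1,
                  x0=75.0, seed=None):
    rng = np.random.default_rng(seed)
    x, reported = x0, []
    while len(reported) < n_items:
        for _ in range(thin):
            # Local Gaussian proposal: nearby values are considered first,
            # producing the autocorrelation typical of human sequences.
            candidate = x + rng.normal(0.0, proposal_sd)
            # Metropolis acceptance rule for a symmetric proposal.
            if np.log(rng.uniform()) < logpdf(candidate) - logpdf(x):
                x = candidate
        reported.append(x)  # only every `thin`-th internal sample is spoken
    return np.array(reported)

# Illustrative subjective belief about lifespans (unnormalized Gaussian).
def lifespan_logpdf(v):
    return -0.5 * ((v - 79.0) / 10.0) ** 2

fast = mcmc_generate(lifespan_logpdf, n_items=100, thin=1, seed=1)
slow = mcmc_generate(lifespan_logpdf, n_items=100, thin=2, seed=1)
```

Thinning leaves the sampler's stationary distribution untouched but reduces autocorrelation, which is why reporting every second sample can reproduce the more random-looking sequences of the slower condition.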
This is an in-person presentation on July 22, 2024 (12:20 ~ 12:40 CEST).