To successfully navigate its social environment, an agent must construct and maintain representations of the other agents that it encounters. Such representations are useful for many tasks, but they are not without cost. As a result, agents must make decisions regarding how much information they choose to store about the other agents in their environment. Using choice prediction as an example task, we illustrate the problem of finding agent representations that optimally trade off between downstream utility and information cost, before presenting the results of two behavioural experiments designed to examine this tradeoff in human social cognition. We find that people are sensitive to the balance between representation cost and downstream value, while still deviating from optimality.
This is an in-person presentation on July 21, 2024 (11:40 ~ 12:00 CEST).
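The tradeoff described in the abstract above resembles a rate-distortion style objective. As a hedged illustration only (the abstract does not specify the formalism), the sketch below scores candidate agent representations by downstream prediction utility minus a cost proportional to the bits they store; the candidate representations, their utilities and costs, and the tradeoff parameter `beta` are all hypothetical, not values from the paper.

```python
# Illustrative sketch only: scoring agent representations by
# downstream utility minus an information cost, U(r) - beta * C(r).
# Candidates, utilities (e.g., choice-prediction accuracy), and
# storage costs in bits are hypothetical, not data from the paper.

candidates = {
    # name: (predictive utility, information cost in bits)
    "full interaction history": (0.95, 12.0),
    "running average":          (0.85, 3.0),
    "single summary bit":       (0.70, 1.0),
}

def best_representation(beta: float) -> str:
    """Return the representation maximizing utility - beta * cost."""
    return max(candidates,
               key=lambda r: candidates[r][0] - beta * candidates[r][1])

for beta in (0.0, 0.05, 0.2):
    print(beta, best_representation(beta))
```

As `beta` grows, cheaper representations win out over more informative ones, which is the kind of cost sensitivity the behavioural experiments probe.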
How do people reason about possibilities in everyday life? Most cognitive scientists, including readers of this article, are likely to believe that they rely on a logic, albeit one beyond the grasp of introspection. Logics for dealing with possibilities exist, namely modal logics, and they are useful in software engineering and other domains. This article describes the mental model theory of possibilities and reports two experiments corroborating its central claim: individuals make inferences that hold in default of knowledge to the contrary, a principle inconsistent with all standard modal logics. It also shows that the theory's implementation in a computer program, mModal, accounts for differences from one individual to another in how they reason about possibilities.
This is an in-person presentation on July 21, 2024 (12:00 ~ 12:20 CEST).
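The central claim above, that inferences about possibilities hold only in default of knowledge to the contrary, is a form of defeasible reasoning. The sketch below is not mModal, whose implementation the abstract does not detail; it is a minimal illustration of a default conclusion being withdrawn when contrary knowledge arrives, with all propositions hypothetical.

```python
# Minimal illustration of inference in default of knowledge to the
# contrary: a conclusion is accepted until contrary evidence defeats it.
# This is NOT the mModal program; the propositions are hypothetical.

def possible(conclusion: str, knowledge: set[str]) -> bool:
    """Treat a conclusion as possible unless knowledge rules it out."""
    return f"not {conclusion}" not in knowledge

knowledge: set[str] = {"the door is unlocked"}
print(possible("the window is open", knowledge))  # True: holds by default

knowledge.add("not the window is open")           # contrary knowledge arrives
print(possible("the window is open", knowledge))  # False: default withdrawn
```

The nonmonotonicity on display here, where adding a premise retracts a conclusion, is exactly what standard modal logics, being monotonic, cannot capture.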
A core cognitive ability of humans is the creation of, and reasoning with, mental models based on given information. When confronted with indeterminate information that allows for multiple mental models, humans seem to recurrently report specific models, so-called preferred mental models. In this paper, we revisit this phenomenon in the context of syllogistic reasoning, which involves quantified assertions. We present an experiment designed to investigate the verification process of preferred mental models. Our analysis centers on two primary research questions: Is model verification generally straightforward for reasoners? And does a preference effect for specific models exist in syllogistic reasoning? Furthermore, employing modeling techniques, we analyze the structural complexity of mental models based on the types of instances they consist of. We discuss our findings and their implications for the differences between reasoning with syllogisms and with spatial statements.
This is an in-person presentation on July 21, 2024 (12:20 ~ 12:40 CEST).
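In mental model accounts of syllogisms, a model is often pictured as a small set of individuals (instances) instantiating the premise terms, and the structural-complexity analysis in the abstract above is based on the types of instances a model contains. As a hedged sketch only, the following encodes two candidate models of "All A are B; Some B are C" as collections of instance types and measures complexity by the number of distinct types; the encoding and the example models are illustrative, not the authors' own.

```python
# Hedged sketch: mental models of a syllogism encoded as collections of
# instance types, with structural complexity = number of distinct types.
# The encoding and example models are illustrative, not the authors'.

# Each instance type is a frozenset of the properties one individual has.
# Two candidate models of "All A are B; Some B are C":
model_1 = [frozenset({"A", "B", "C"}), frozenset({"A", "B"})]
model_2 = [frozenset({"A", "B"}), frozenset({"B", "C"}), frozenset({"B"})]

def complexity(model: list[frozenset]) -> int:
    """Structural complexity: count the distinct instance types."""
    return len(set(model))

print(complexity(model_1))  # 2 distinct instance types
print(complexity(model_2))  # 3 distinct instance types
```

Both models satisfy the premises, yet they differ in how many instance types they require, which is the kind of structural difference such an analysis can compare across reported models.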