Teaming and Group Modeling
Artificial intelligence (AI) systems use past states of the environment to predict future ones. When people make predictions, however, they also make inferences about the hidden states of other agents, an ability known as Theory of Mind. Accurate inferences about hidden states, such as goals, strategies, and traits, are critical in many situations. For example, a person can identify an aggressive driver changing lanes to make a highway exit and leave extra space in response. AI systems equipped with cognitive models that characterize hidden states from observed behavior may therefore make more human-like predictions. To test this, we applied a model of approach-avoidance dynamics to a continuous control task in which participants competed with a computer opponent to achieve five different goals by moving a joystick to control a spaceship in a 2-D environment. On each trial, the goal might require them to collide with the other ship, avoid it, stay close to it, herd it to a location, or keep it away from a location. Neural networks were trained to predict the goal of each trial either from raw behavioral data (i.e., the position of each ship) or from parameters estimated by a model of approach-avoidance gradients. Both networks predicted the participants' goals well above chance, with overall accuracy outperforming human inference. Further work is needed to test whether a network trained on both behavioral data and model parameters improves predictions.
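As a rough illustration of the two decoding pipelines described above, the sketch below trains the same classifier on simulated stand-ins for the two input types. The data shapes, the six-parameter summary of the approach-avoidance model, and the use of scikit-learn's MLPClassifier are assumptions for exposition, not the authors' actual implementation.

```python
# Hypothetical sketch of the two goal-decoding pipelines: raw ship
# trajectories vs. fitted approach-avoidance parameters. All shapes and
# values are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_steps = 500, 100

# Condition 1: raw behavior -- (x, y) positions of both ships over time,
# flattened into one feature vector per trial.
raw_X = rng.normal(size=(n_trials, n_steps * 4))

# Condition 2: parameters of a fitted approach-avoidance gradient model
# (e.g., attraction/repulsion weights and spatial decay rates).
param_X = rng.normal(size=(n_trials, 6))

# Goal label per trial: collide, avoid, stay close, herd, or keep away.
y = rng.integers(0, 5, size=n_trials)

for name, X in [("raw behavior", raw_X), ("model parameters", param_X)]:
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f} accuracy (chance = 0.20)")
```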
Despite the overwhelming scientific consensus that human activities contribute significantly to climate change, public opinion remains divided. To bridge this gap, informative messaging about the consensus has been widely proposed as a persuasive tool. However, it remains difficult to understand how people interpret this information and how it interacts with their wider belief system. Using survey experiments that vary the stated level of scientific consensus, we find that consensus information influences not only climate change beliefs but also perceptions of climate scientists themselves, consistent with normative principles of Bayesian belief updating. The data indicate that perceptions of pro-consensus scientists' skill are especially closely linked to climate beliefs and are more malleable among the most skeptical subgroup, suggesting a promising avenue for messaging. By unpacking the belief system underlying one of the most prominent climate communication strategies, our research provides a deeper understanding of public responses to consensus messaging and offers guidance for developing more targeted and effective science communication.
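The appeal to Bayesian belief updating can be made concrete with a toy joint model: a reader holds beliefs about both whether human-caused climate change is real (H) and whether pro-consensus scientists are skilled (S), and updates both upon hearing a consensus figure. All priors and likelihoods below are illustrative assumptions, not estimates from the study.

```python
# Toy joint Bayesian update over H ("human-caused climate change is real")
# and S ("pro-consensus scientists are skilled") given a reported consensus
# level. All numbers are illustrative assumptions.
from itertools import product

prior = {(h, s): 0.25 for h, s in product([True, False], repeat=2)}

def likelihood(h, s, level=0.97):
    # Skilled scientists track the truth, so a 97% consensus is likely
    # only if H is true; unskilled scientists rarely converge at all.
    if s:
        return level if h else 1 - level
    return 0.2

evidence = sum(prior[k] * likelihood(*k) for k in prior)
posterior = {k: prior[k] * likelihood(*k) / evidence for k in prior}

p_h = sum(p for (h, _), p in posterior.items() if h)
p_s = sum(p for (_, s), p in posterior.items() if s)
print(f"P(H | consensus message) = {p_h:.2f}")  # ~0.84: climate belief rises
print(f"P(S | consensus message) = {p_s:.2f}")  # ~0.71: perceived skill rises too
```

Under these assumed numbers, a single consensus message moves beliefs about the scientists as well as about the climate, mirroring the joint updating reported above.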
In sampling approaches to advice taking, participants can sequentially sample multiple pieces of advice before making a final judgment. To contribute to the understanding of active advice seeking, we develop and compare different strategies for integrating information from external sources, including Bayesian belief updating. In a reanalysis of empirical data, we find that participants most frequently compromise between their initial beliefs and the distribution of multiple pieces of advice sampled from others. Moreover, across all participants, compromising predicts their final beliefs better than choosing one of the two sources of information. However, participants' willingness to integrate external opinions is higher for multiple pieces of reasonably distant advice than for close advice. Nevertheless, egocentrism is as pronounced as in the traditional paradigm, where only a single piece of external evidence is provided. Crucially, there are large inter- and intra-individual differences in strategy selection for sequential advice taking: some participants predominantly choose their own or others' judgments, whereas others are better described as compromisers between internal and external sources of information; at the same time, virtually all participants apply different advice-taking strategies across items and trials. Our findings constitute initial evidence of the adaptive use of multiple, sequentially sampled external opinions.
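As a rough sketch of the strategy families compared above (choosing one's own judgment, choosing the advice, and compromising), the snippet below implements each as a simple point-estimate rule. The specific weighting is an illustrative assumption; a weight on one's own estimate above 0.5 mimics the egocentric discounting the abstract reports.

```python
# Illustrative sketch of candidate integration strategies for sequentially
# sampled advice; the weighting scheme is an assumption for exposition.
import statistics

def choose_own(own, advice):
    return own  # egocentric: ignore external opinions

def choose_advice(own, advice):
    return statistics.mean(advice)  # adopt the center of the sampled advice

def compromise(own, advice, weight_own=0.6):
    # Weighted average; weight_own > 0.5 reproduces egocentric discounting.
    return weight_own * own + (1 - weight_own) * statistics.mean(advice)

own_estimate, sampled_advice = 100.0, [130.0, 140.0, 125.0]
for strategy in (choose_own, choose_advice, compromise):
    print(strategy.__name__, strategy(own_estimate, sampled_advice))
```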
The illusory truth effect refers to the phenomenon that repetition increases the perceived truth of statements. Recently, Fazio et al. (2019) argued that this effect occurs not only for ambiguous statements but is equally strong for plausible and implausible ones. However, this conclusion rests on specific assumptions about the psychometric properties of observable truth judgments, auxiliary assumptions that remain implicit in the original, simulation-based approach. As a remedy, we propose a formal mathematical model that describes the link between latent feelings of truth and observable truth judgments. We show that fitting the model to data in a Bayesian framework provides a stronger test of the theory than merely testing specific features of the shape of empirical estimates.
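One way to see why such auxiliary assumptions matter is a toy latent-to-observed link: if a constant repetition boost to a latent feeling of truth passes through a nonlinear (here probit) response function, the measured effect necessarily shrinks at the plausibility extremes. The probit link and all parameter values below are illustrative assumptions, not the authors' proposed model.

```python
# Minimal sketch of a latent-to-observed link for truth judgments: a fixed
# repetition boost to the latent feeling of truth, mapped to P("true")
# through a probit link. All values are illustrative assumptions.
from scipy.stats import norm

def p_true_judgment(plausibility, repeated, repetition_boost=0.4):
    # Latent feeling = baseline plausibility (+ boost if repeated);
    # the probit link maps the latent feeling to P("true").
    latent = plausibility + (repetition_boost if repeated else 0.0)
    return norm.cdf(latent)

for label, mu in [("implausible", -1.5), ("ambiguous", 0.0), ("plausible", 1.5)]:
    effect = p_true_judgment(mu, True) - p_true_judgment(mu, False)
    print(f"{label}: repetition effect = {effect:.3f}")
```

Under this hypothetical link, an identical latent boost yields a measured effect of about 0.16 for ambiguous statements but 0.07 or less at the extremes, which is exactly the kind of scale artifact a formal model of the latent-to-observed mapping can disentangle.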