Session 9: Friday 12 February, 11am-12pm
Andrew Perfors
Evolution and learning are both processes that allow organisms to extract and store information about their environment. But how do the dynamics of these processes differ? In an abstract computational sense, both are optimization processes that search a space of possible explanations, and previous work has identified deep parallels in the mathematical models used to describe them (Suchow, Bourgin, & Griffiths, 2017). We present the results of an iterated category learning experiment in which the number and placement of participants’ category boundaries are free to evolve over time. We contrast two evolutionary regimes: one in which category systems are transmitted across multiple learners, and one in which they are developed within a single learner for the same amount of time. We find that the evolutionary process is more constrained when systems are culturally transmitted among multiple learners. Single learners explore a wider range of category systems and converge on more complex systems, whereas transmission chains explore a more restricted set of systems and nearly always converge on a simple, but easily learnable, one-boundary category system.
Dr. Micah Goldwater
Prof. Dan Lovallo
Dr. Bruce Burns
People make all sorts of decisions based on quantitative forecasts. However, it is unclear how people use these kinds of metrics in the context of other, non-numerical forms of information. Here we focus on resource allocation scenarios in large companies, wherein managers often have to allocate resources across very dissimilar projects. Managers use financial measures that simplify this difficult comparison because such measures aim to be equally applicable to any kind of project; across domains, however, these measures vary in their reliability. We investigate the effects of project similarity and of forecast variance information. We found that participants adjusted their use of a financial forecast based on its reliability when allocating resources to a set of similar projects, but used reliability information less when allocating to a set of dissimilar projects. However, participants only considered reliability when it was verbally communicated, not when it was expressed numerically: when expressed numerically, people made no use of the information about the variance in the forecasts. These findings show that the use of quantitative forecasts changes based on non-numerical information, despite those metrics being developed precisely to apply across semantic contexts. In addition, people tend to ignore the variance information in their forecasts.
Andrew Perfors
Dr. Rachel Stephens
Reasoning beyond available data is a ubiquitous feature of human cognition. But while the availability of first-hand data typically diminishes as the concepts we reason about become more complex, our ability to draw inferences and reach conclusions seems not to. We may offset the sparsity of direct evidence by observing the statements and actions of others and inferring properties of evidence assumed to exist. But such social meta-inference comes with challenges of its own. For example, while we might infer the existence of evidence on the basis of a social consensus, its evidentiary strength is not immediately clear. Ideally it should be governed by the nature and extent of the ground-truth data from which the consensus was derived, but this is the very thing that remains latent. Here, we present the results of an experiment aimed at examining people's perception of the evidentiary strength of social consensus in the context of social media posts. By systematically varying the degree of consensus along with the diversity of people and premises involved, we are able to assess the contribution of each factor to evidentiary weight. Across a range of topics where reasoning from first-hand data is more or less difficult, we find that while people were influenced by the number of people on each side of an argument, the number of posts was the dominant factor in determining how people updated their beliefs. However, in contrast to well-established premise diversity effects, we find that people were largely insensitive to whether the posts represented distinct premises. We discuss the applied and theoretical implications of our findings.
Dr. Ian Stephen
It has been shown that observers’ perceptions of sociosexuality from strangers’ faces are predictive of individuals’ self-reported sociosexuality. However, it is not clear what cues observers use to achieve this. Over two studies we examined whether sociosexuality is reflected in faces, which cues contain this information, and whether observers’ perceptions of sociosexuality from faces are predictive of individuals’ self-reported sociosexuality. In Study One, Geometric Morphometric Modelling (GMM) showed that self-reported sociosexuality was predicted by facial morphology in male but not female faces. In Study Two, participants judged the sociosexuality of opposite-sex faces at zero acquaintance. Perceived sociosexuality predicted self-reported sociosexuality for men, but not women. Participants also perceived composites of the faces of high-sociosexuality individuals as higher in sociosexuality than composites of the faces of low-sociosexuality individuals, for men’s but not women’s faces. GMM analyses also found that facial morphology significantly predicted perceived sociosexuality in women’s and, to a greater extent, in men’s faces. Finally, facial shape mediated the relationship between perceived sociosexuality and self-reported sociosexuality in men’s but not women’s faces. Our results suggest that facial shape acts as a valid cue to sociosexuality in men’s but not women’s faces.
Mr. Chenyuan Zhang
Dr. Nir Lipovetzky
Understanding problem solving and planning has been a shared challenge for AI and cognitive science since the birth of both fields. We explore the extent to which modern planning algorithms from the field of AI can account for human performance on the Tower of London (TOL) task, a close relative of the Tower of Hanoi problem that has been extensively studied by psychologists. We characterize the task using the Planning Domain Definition Language (PDDL) and evaluate a family of well-known planners on the TOL task, including an online planner, optimal planners, and satisficing planners with different heuristics. We also introduce a novel methodology that compares planner performance with human behavior using the number of nodes generated by the planner during the search process. We find that none of the planners evaluated is able to capture all of the qualitative properties of human performance identified by previous behavioral work on the TOL task. Our results suggest that humans may rely on an approach that goes beyond standard AI planners by considering both local and global properties of the task.
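To make the node-counting methodology concrete, here is a minimal, illustrative Python sketch: it enumerates Tower of London states (three balls on pegs of capacity 3, 2, and 1, as in the standard task) and counts the nodes generated by a blind breadth-first search on the way to a goal configuration. This is an assumption-laden sketch, not the study's PDDL encoding or any of the planners evaluated; the state representation, the start and goal configurations, and the use of uninformed search are all illustrative choices.

```python
from collections import deque

# Tower of London (illustrative encoding, not the study's PDDL domain):
# three balls on three pegs with capacities 3, 2, 1.
# A state is a tuple of per-peg stacks, bottom ball first.
CAPACITIES = (3, 2, 1)

def successors(state):
    """Generate every state reachable by moving one top ball."""
    for src, src_stack in enumerate(state):
        if not src_stack:
            continue
        ball = src_stack[-1]
        for dst, dst_stack in enumerate(state):
            if dst == src or len(dst_stack) >= CAPACITIES[dst]:
                continue
            new = list(map(list, state))
            new[src].pop()
            new[dst].append(ball)
            yield tuple(tuple(peg) for peg in new)

def bfs_node_count(start, goal):
    """Breadth-first search; returns (solution length, nodes generated).

    The generated-node count stands in for the planner statistic used
    to compare search effort against human behaviour."""
    frontier = deque([(start, 0)])
    seen = {start}
    generated = 0
    while frontier:
        state, depth = frontier.popleft()
        if state == goal:
            return depth, generated
        for nxt in successors(state):
            generated += 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return None, generated

# Hypothetical start and goal configurations for illustration.
start = (("red", "green"), ("blue",), ())
goal = (("green",), ("red",), ("blue",))
moves, nodes = bfs_node_count(start, goal)
print(f"solved in {moves} moves, generating {nodes} nodes")
```

Heuristic planners differ from this sketch mainly in how they order the frontier, and hence in how many nodes they generate before reaching the goal, which is what makes generated-node counts a natural common currency for comparing planners with one another and with human solvers.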
Chris Donkin
Everyday speech is replete with expressions linking mental labour to economic concepts. We speak of being taxed by over-thinking, of paying attention to tasks, and of investing effort. In short, we often think of thinking as costly. And yet, the clear association of effort with reward leads us, at times, to engage in activities precisely because they are effortful, and perhaps even to value effort itself. This relationship between the cost and value of effort has recently been described as a “paradox” and a “riddle”. In this talk, we will discuss novel theoretical, experimental, and computational approaches to determining the cost of thinking, in an attempt to answer the question of when and why thinking is costly, rewarding, or both.