Attention is often characterized as a means of uncertainty reduction, but the extent to which it is guided by structure in the environment remains unclear. Here, we investigate how the efficient use of limited attention can lead to a focus on information that applies commonly across members of a category. In three pre-registered experiments using mouse-tracking, we show that people change their patterns of attention as qualitatively predicted by rational principles. They preferentially sample category-level information when it is more variable, when time constraints are more severe, and when the category contains more members. However, their strategies fall quantitatively short of optimality, exhibiting a bias toward an even 1/N split across all information sources. We observe some signs of convergence toward the optimal strategy with experience, as well as curriculum learning effects. Our results help shed light on the ways in which rational principles account for attention allocation and provide novel evidence about the drivers of categorical thinking.
Prof. Adam Sanborn
Prof. Joseph Larry Austerweil
Much categorization behavior can be explained by family resemblance: new items are classified by comparison with previously learned exemplars. However, categorization behavior also shows dimensional biases when the underlying space has so-called “separable” dimensions: how easily categories are learned depends on how the stimuli align with the separable dimensions of the space. For example, if a set of objects of various sizes and colors can be accurately categorized using a single separable dimension (e.g., size), then category learning will be fast, while if the category is determined by both dimensions, learning will be slow. To capture these dimensional biases, models of categorization supplement family resemblance with either rule-based systems or selective attention to separable dimensions. But these models do not explain how separable dimensions initially arise; they are merely treated as unexplained psychological primitives. We instead develop a pure family resemblance version of the Rational Model of Categorization, termed the Rational Exclusively Family RESemblance Hierarchy (REFRESH), which does not presuppose any separable dimensions. REFRESH infers how the stimuli are clustered, and uses a hierarchical prior to learn expectations about the location and variability of clusters across categories. We first demonstrate the dimensional alignment of natural category features, and then show how, through a lifetime of categorization experience, REFRESH learns prior expectations that clusters of stimuli will align with separable dimensions. REFRESH not only captures the key dimensional biases but also explains their stimulus dependence and specific learning effects, properties that are difficult to explain with rule-based or selective-attention models.
Many research questions involve determining whether two stimulus properties are represented “independently” or “invariantly” versus “configurally” or “holistically”. General recognition theory (GRT) provides formal definitions of such concepts and dissociates perceptual from decisional forms of independence. Two issues with GRT are (1) the arbitrariness of the dimensional space in which the model is defined, and (2) the fact that it provides insight into whether dimensions interact, but not into how they interact. Here, we link GRT to the linear-nonlinear observer model underlying classification image techniques. This model is defined in a non-arbitrary stimulus space, and it facilitates studying how the sampling of information from that space (summarized in classification images) contributes to dimensional interactions. We define template separability as a form of independence at the level of the perceptual templates assumed by this model, and link it to perceptual separability from traditional GRT. Their theoretical relations reveal that some violations of perceptual separability may stem from the stimuli used rather than from a property of the observer model. Naturalistic stimuli, such as faces, readily produce patterns of interactivity in a GRT model even when there is no perceptual interaction in the underlying observer model. Stimulus factors can also account for reports of unexpected violations of separability found in the literature (e.g., between line orientation and size). In addition, perceptual separability can be observed even when there is no underlying template separability in the observer model. This means that invariance/separability learning may be the product of adaptive modification of non-invariant representations.
In this presentation we analyze three probabilistic models of classification behavior in terms of their implicit assumptions about contrast information. We show that, for different contrast sets, the models are inconsistent with the empirical evidence. We conclude that their reliance on tacit assumptions for computing classical probabilities limits their explanatory capacity, inasmuch as they cannot account for situations in which contrast cue information is not necessary. Furthermore, we provide empirical and theoretical evidence that such situations are more prominent in concept learning and classification behavior than previously realized.