Categorization and Memory
Dr. Adam Osth
Modern work in cognitive science suggests that some real-world images are more memorable than others and a variety of deep-learning networks can predict the extent to which individual items are memorable. However, this work tends to ignore the enormous role of context in influencing memorability. In the present research we conduct old-new recognition-memory experiments involving high-dimensional objects from real-world category domains. Among the variables that are manipulated are the size of categories that compose the to-be-remembered study lists, the degree of similarity of objects within each of the categories, and the extent to which individual objects possess distinctive features that make them stand out from other members of their categories. We conduct extensive similarity-scaling work to embed the objects in high-dimensional feature spaces and also collect ratings of individual-object distinctiveness. Using the high-dimensional feature space and the distinctiveness ratings as inputs, we show that an exemplar-based global-familiarity model that makes allowance for different degrees of “self-match” among objects accounts in quantitative detail for numerous aspects of individual-item old-new recognition performance. These include findings that false-alarm rates increase dramatically with increases in category size and with increases in within-category similarity, whereas hit rates vary primarily with the extent to which objects possess distinctive features. The model does a reasonably good job of quantitatively predicting false alarm rates associated with individual items across different contexts. Although we believe we are on the right track for predicting individual differences in old-item memorability, providing detailed quantitative accounts of individual-item hit rates remains a challenge.
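As a concrete illustration of the kind of exemplar-based global-familiarity computation described above, the following Python sketch sums exponential similarity between a probe and the studied items in an MDS-derived feature space and lets the probe's "self-match" deviate from a perfect match. The function name, the Euclidean/exponential choices, and all parameter values are illustrative assumptions of this sketch, not the authors' fitted model.

```python
import numpy as np

def global_familiarity(probe, study_items, c=2.0, self_match=None, probe_index=None):
    """Summed-similarity (global familiarity) of a probe to the study list.

    probe        : feature vector of the test item (e.g., from the MDS embedding)
    study_items  : array of studied feature vectors, shape (n_items, n_dims)
    c            : similarity-gradient (sensitivity) parameter
    self_match   : similarity credited to the probe's own study trace; letting this
                   vary with rated distinctiveness is one way to implement the
                   "self-match" idea (an assumption of this sketch)
    probe_index  : row of study_items corresponding to the probe itself, if any
    """
    dists = np.linalg.norm(np.asarray(study_items) - np.asarray(probe), axis=1)
    sims = np.exp(-c * dists)                     # exponential similarity gradient
    if probe_index is not None and self_match is not None:
        sims[probe_index] = self_match            # imperfect (or boosted) self-match
    return sims.sum()

# An "old" response follows when familiarity exceeds a criterion (hypothetical values):
# say_old = global_familiarity(probe, study_items, c=2.0, self_match=0.8, probe_index=3) > 1.5
```

On this account, larger and more homogeneous categories raise the summed similarity of lure probes (driving false alarms up), while an item's hit rate depends mainly on how strongly it matches its own trace.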
Asli Kilic
Recognition memory models explain the processes of representation, encoding, and retrieval of items and make predictions about performance. Most of these models rest on two basic stages: calculating the familiarity of a probe and making a recognition decision by comparing that familiarity to a threshold for endorsement. The course of decision making during recognition, however, has been widely ignored in the recognition modeling literature. Research mostly focused on explaining accuracy data and neglected response time (RT) findings until the advent of dynamic recognition memory models (e.g., Cox & Shiffrin, 2012, 2017; Diller et al., 2001; Malmberg, 2008; Hockley & Murdock, 1987; Osth, Jansson, Dennis, & Heathcote, 2018). In recent years, dynamic recognition modeling has achieved promising results in accounting for the major findings on RT data. In the current study, we are developing a novel dynamic version of Retrieving Effectively from Memory (REM; Shiffrin & Steyvers, 1997), one of the major recognition models. The model, called Retrieving Dynamically and Effectively from Memory (D-REM), incorporates the representation, encoding, and likelihood-calculation mechanisms of REM while adding a dynamic decision-making process based on sequential sampling. D-REM assumes that items are represented as vectors of item features. As in REM, encoding is a stochastic, error-prone process. Retrieval proceeds by comparing the test item with the memory traces, and the recognition decision rests on likelihood calculations based on these comparisons. During retrieval, the features of the memory traces gradually enter a buffer system in which the likelihood calculations are made. Evidence about whether the probe is old or new thus accumulates over time toward the decision boundaries, and accumulation continues until it reaches either the "yes" or the "no" boundary. Memory is then updated according to the recognition decision. With this mechanism, D-REM offers a novel account of the course of decision making during recognition. By including a time-varying boundary mechanism and a starting-point parameter, the model aims to be the most extensive dynamic model in the REM framework. Examining alternative variants of the model with differing drift-rate and boundary mechanisms will provide further evidence on the time course of evidence accumulation and response caution during a decision. We will present simulations of the standard yes-no recognition task and of recognition with a response-deadline procedure using preliminary variants of D-REM, and the model will be revised and improved according to comparisons among the alternative variants.
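The sketch below illustrates, under simplifying assumptions, how REM-style likelihood ratios could drive a sequential-sampling decision of the kind D-REM proposes: stored trace features enter the comparison one at a time, the running log odds is updated, and the walk stops at a "yes" or "no" boundary. The feature-level likelihood ratios follow the standard REM forms (Shiffrin & Steyvers, 1997), but the feature-arrival order, the summation of log likelihood ratios, and the boundary values are assumptions of this sketch rather than the authors' exact D-REM equations.

```python
import numpy as np

rng = np.random.default_rng(0)

def rem_feature_lr(probe_val, trace_val, c=0.7, g=0.4):
    """Likelihood ratio contributed by one stored trace feature, using the
    standard REM forms: c = probability of correct copying, g = geometric
    feature-generation parameter."""
    if trace_val == 0:                           # feature not stored: uninformative
        return 1.0
    if trace_val == probe_val:                   # matching nonzero feature
        base = g * (1 - g) ** (trace_val - 1)
        return (c + (1 - c) * base) / base
    return 1 - c                                 # mismatching nonzero feature

def drem_trial(probe, traces, a=3.0, b=-3.0, start=0.0):
    """Minimal sequential-sampling sketch: trace features arrive one at a time
    (in random order, an assumption), the log odds that the probe is old is
    updated, and a response is made when it crosses the upper ("yes") or lower
    ("no") boundary."""
    evidence, steps = start, 0
    order = [(j, k) for j in range(traces.shape[0]) for k in range(traces.shape[1])]
    rng.shuffle(order)                           # features gradually enter the buffer
    for j, k in order:
        steps += 1
        evidence += np.log(rem_feature_lr(probe[k], traces[j, k]))
        if evidence >= a:
            return "yes", steps
        if evidence <= b:
            return "no", steps
    return ("yes" if evidence > 0 else "no"), steps   # fallback if no boundary is reached
```

A time-varying boundary, as mentioned above, could be added by letting `a` and `b` shrink as a function of `steps`; this sketch keeps them fixed for simplicity.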
Representations of exemplars (e.g., apple) within a semantic category (e.g., fruit) are graded, such that certain members are perceived as being more representative (or "typical") than others. Researchers have traditionally examined normative trends in category typicality by reporting the relative frequencies with which respondents produce relevant exemplars when prompted with a category label (Barsalou, 1985). In many cases, however, the methods of computing normative typicality estimates do not match the distributional characteristics of the aggregated responses; for example, researchers often report arithmetic means of response distributions that are highly skewed. Here, we propose the use of rank-ordered probit models (Liddell & Kruschke, 2021) for estimating the normative typicality of exemplars ranked within common semantic categories, using responses from a large-scale survey. These models estimate the probabilities of ordinal rankings using beta distributions with freely varying parameters, which we fit using approximate Bayesian computation. The probability densities from the fitted distributions are then used to quantify exemplar representativeness within categories. We show that (a) the model accurately recovers normative trends in the observed data and (b) likelihoods estimated from the resulting distributions are useful for computing normative typicality estimates of category exemplars.
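To make the estimation pipeline concrete, here is a minimal Python sketch of an ABC rejection scheme of the kind described above: each exemplar's latent typicality is modeled with its own beta distribution, simulated respondents rank exemplars by draws from those distributions, and parameter sets are retained when the simulated rank frequencies come close to the observed ones. The uniform priors, distance measure, tolerance, and the assumption that respondents rank by single latent draws are illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rankings(params, n_respondents=200):
    """Each exemplar's latent typicality is drawn from its own beta(a, b)
    distribution; simulated respondents rank exemplars by those draws
    (a simplifying assumption of this sketch)."""
    draws = np.column_stack([rng.beta(a, b, size=n_respondents) for a, b in params])
    return np.argsort(-draws, axis=1)            # column 0 holds the most typical exemplar

def rank_frequencies(rankings, n_exemplars):
    """Proportion of simulated respondents placing each exemplar at each rank."""
    freqs = np.zeros((n_exemplars, n_exemplars))
    for r in range(n_exemplars):
        items, counts = np.unique(rankings[:, r], return_counts=True)
        freqs[items, r] = counts / rankings.shape[0]
    return freqs

def abc_reject(observed_freqs, n_exemplars, n_draws=5000, tol=0.25):
    """ABC rejection: keep beta parameters whose simulated rank frequencies
    fall within `tol` (Euclidean distance) of the observed frequencies."""
    accepted = []
    for _ in range(n_draws):
        params = rng.uniform(0.5, 8.0, size=(n_exemplars, 2))   # (a, b) per exemplar
        sim = rank_frequencies(simulate_rankings(params), n_exemplars)
        if np.linalg.norm(sim - observed_freqs) < tol:
            accepted.append(params)
    return accepted      # accepted posterior draws; summaries of these yield typicality estimates
```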
Andrew Cohen
The Generalized Context Model (GCM) classifies items based on similarity to category exemplars. The model can include category-specific biases. Without these bias parameters, the GCM satisfies the independence of irrelevant alternatives (IIA) principle from the decision-making literature, in which the relative preference of two options does not change upon the introduction of a third option. In two experiments, participants learned to classify items into three categories. Across participants, two of the categories were fixed, but the third varied. The results show a violation of IIA in a categorization context. That is, the location of the third category shifted relative preference for the fixed categories. The GCM qualitatively accounts for the data only when category biases are allowed to vary across conditions. A rule-based categorization model with a stochastic criterion did not fit the data as well. In two subsequent experiments, we show that participants did not violate IIA when learned categories were fixed, but category choice sets were varied on a trial-by-trial basis.
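For reference, a minimal GCM choice rule with optional category-specific biases can be written as below. With equal biases, the ratio of probabilities for any two categories depends only on their summed similarities and is therefore unaffected by the composition of the rest of the choice set (IIA); allowing the biases to differ across choice-set conditions is what lets the model accommodate the violation reported above. The function name and parameter values here are illustrative.

```python
import numpy as np

def gcm_choice_probs(probe, exemplars_by_category, biases=None, c=1.0):
    """Generalized Context Model choice probabilities: summed exponential
    similarity of the probe to each category's exemplars, weighted by an
    optional category-specific bias and normalized by the Luce choice rule.

    probe                 : feature vector of the to-be-classified item
    exemplars_by_category : list of arrays, one per category, each (n_j, n_dims)
    biases                : optional category-specific bias parameters
    c                     : similarity-gradient (sensitivity) parameter
    """
    evidence = []
    for exemplars in exemplars_by_category:
        dists = np.linalg.norm(np.asarray(exemplars) - np.asarray(probe), axis=1)
        evidence.append(np.exp(-c * dists).sum())
    evidence = np.asarray(evidence)
    if biases is not None:
        evidence = np.asarray(biases) * evidence  # category biases can break IIA
    return evidence / evidence.sum()
```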
Dr. Greg Cox
While many real-life events are complex and temporally extended, most memory research employs discrete, static stimuli. We begin to bridge this gap by developing a set of novel auditory stimuli constructed by adjusting the distribution of power across upper frequency bands. Across three studies, participants rated similarity between pairs of these sounds and engaged in a recognition memory task. We applied non-metric multidimensional scaling to the similarity ratings to obtain a three-dimensional psychological representation of the stimuli. The first dimension appeared to correspond to timbral roughness and the second to timbral brightness, while the third did not admit a simple verbal label. There were also individual differences in the degree to which participants attended to each of these dimensions, potentially as a function of musical expertise, encoding strategy, and personality variables such as conscientiousness. The representation inferred from similarity ratings predicted recognition memory performance for single probe sounds following sequential presentation of two sounds, consistent with similarity-based exemplar models of memory. Recognition false alarms increased with subjective similarity between the probe and the first memory item but not the second, suggesting that the most recent sound was represented in a form that is less susceptible to incidental similarity. We also observed a list homogeneity effect: hits and false alarms decreased with similarity between studied sounds. We build on these results to discuss implications for the development of an integrated theory of perceptual similarity and recognition memory in the auditory domain, using a novel computational model that extends elements of the exemplar-based random walk (EBRW; Nosofsky & Palmeri, 1997b) model. Model fits to behavioral data from the similarity rating and recognition tasks provide preliminary evidence for this theory.
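The sketch below shows one simple way an exemplar-based random walk in the EBRW family could link the MDS-derived similarity space to old-new recognition decisions for these sounds: the summed similarity of the probe to the studied items sets the probability of stepping toward the "old" boundary. The specific step rule, the background constant k, and the boundary values are assumptions of this sketch, not the authors' full model.

```python
import numpy as np

rng = np.random.default_rng(2)

def ebrw_recognition(probe, study_items, c=1.0, k=1.0, upper=4, lower=-4):
    """Minimal exemplar-based random-walk sketch for old-new recognition.

    probe, study_items : points in the (here, 3-D) MDS similarity space
    c                  : similarity-gradient (sensitivity) parameter
    k                  : background/criterion constant (an assumption of this sketch)
    upper, lower       : "old" and "new" response boundaries
    """
    dists = np.linalg.norm(np.asarray(study_items) - np.asarray(probe), axis=1)
    summed_sim = np.exp(-c * dists).sum()
    p_old = summed_sim / (summed_sim + k)     # more similar probes drift toward "old"
    position, steps = 0, 0
    while lower < position < upper:
        position += 1 if rng.random() < p_old else -1
        steps += 1
    return ("old" if position >= upper else "new"), steps
```

Under this scheme, the number of steps to a boundary provides a natural account of response times alongside choice probabilities.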