Memory
Dr. Laura Anderson
For over 70 years, recognition memory has been modelled using signal detection theory. An unsolved problem with this approach is that the shapes of the distributions of memory strength for studied and unstudied items are unknown. Although they are often assumed to be Gaussian, with different location and scale parameters, such models often fail to fit observed data. This has had the effect of sustaining the viability of alternative approaches such as discrete state models, mixture models, and hybrid dual process models. However, it is now possible to estimate the shapes of the proposed memory strength distributions using the monotonic linear regression algorithm developed by Dunn and Anderson (under review). We describe this algorithm, show how it can recover the relevant distribution shapes under the signal detection model, and show that it fails to do so under alternative models. We apply it to data from three item recognition experiments. Each experiment used the same set of stimuli and varied the number of study presentations (1, 2, or 4) and the nature of the study item or the study task: visual vs. auditory presentation (Experiment 1), read vs. generate task (Experiment 2), and focused vs. divided attention task (Experiment 3). While the results confirm the predictions of the signal detection model, the recovered distributions deviate from the Gaussian. Furthermore, we show that the regression weight associated with each condition can be interpreted as a measure of memory strength for that condition, replacing traditional indices such as d-prime.
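The monotonic regression algorithm itself is not reproduced here, but the Gaussian baseline it relaxes can be sketched in a few lines. The Python snippet below, using placeholder parameter values rather than estimates from the experiments described above, shows how an unequal-variance Gaussian signal detection model maps assumed strength distributions onto hit and false-alarm rates and onto the traditional d-prime index that the regression weights are argued to replace.

```python
import numpy as np
from scipy.stats import norm

# Illustrative parameters for an unequal-variance Gaussian signal detection model;
# the values are placeholders, not estimates from the experiments described above.
mu_new, sigma_new = 0.0, 1.00   # unstudied-item strength distribution
mu_old, sigma_old = 1.0, 1.25   # studied-item strength distribution
criterion = 0.5                 # single yes/no decision criterion

# Predicted hit and false-alarm rates under the Gaussian assumption
hit_rate = 1 - norm.cdf(criterion, loc=mu_old, scale=sigma_old)
fa_rate = 1 - norm.cdf(criterion, loc=mu_new, scale=sigma_new)

# Traditional strength index that the condition-specific regression weights
# are proposed to replace
d_prime = (mu_old - mu_new) / np.sqrt((sigma_old**2 + sigma_new**2) / 2)

print(f"hit rate = {hit_rate:.3f}, false-alarm rate = {fa_rate:.3f}, d' = {d_prime:.3f}")
```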
This is an in-person presentation on July 21, 2023 (15:20 ~ 15:40 UTC).
David Kellen
Dr. Henrik Singmann
Prof. Christoph Klauer
When modeling recognition-memory judgments, it is typically assumed that requesting participants to judge whether a given test item is ‘old’ is mnemonically equivalent to asking whether that very same item is ‘new’—an assumption denoted as target-probe invariance. Contrary to this notion, results we recently obtained by means of a detection-plus-identification task (Meyer-Grant & Klauer, 2023, Memory & Cognition) seem to suggest that the mnemonic information available to a decision maker in fact changes depending on the status of the target being probed (i.e., that target-probe invariance is actually violated). For example, one of the key observations was an impairment of identification performance when new items instead of old items were defined as targets to be detected and identified. As a side effect of this finding, an important earlier test of receiver operating characteristic (ROC) asymmetry may be called into question, inasmuch as a violation of target-probe invariance provides an alternative interpretation of effects observed with this test. Interestingly, however, assuming a contamination of identification responses with occasional guessing in trials where no target is detected allows one to account for the observed difference in identification performance while retaining the target-probe invariance assumption. To enable a more conclusive resolution of this issue, we conducted further analyses of our previously published data and a new experiment that included no target-absent trials. Overall, the results indicate that identification responses in our original study may indeed have been contaminated by occasional guessing, thus rehabilitating the target-probe invariance assumption as well as the previous test of ROC asymmetry. This highlights the importance of carefully considering the experiment model in addition to the theoretical model when conducting critical tests that are motivated by mathematical models of cognitive processes.
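The guessing account can be illustrated with a simple mixture calculation. The sketch below is not the authors' measurement model; the parameter names (p_detect, p_identify, n_options) are hypothetical, and it only shows how occasional guessing on trials without a detected target can depress observed identification accuracy even when the underlying identification process is unchanged.

```python
def observed_identification(p_detect, p_identify, n_options):
    """Observed identification accuracy when trials without a detected target
    are answered by guessing among n_options alternatives (hypothetical sketch)."""
    p_guess = 1.0 / n_options
    return p_detect * p_identify + (1.0 - p_detect) * p_guess

# Lower detection rates (e.g., when new items serve as targets) pull observed
# identification accuracy toward chance even if true identification is intact.
print(observed_identification(p_detect=0.8, p_identify=0.9, n_options=2))  # 0.82
print(observed_identification(p_detect=0.5, p_identify=0.9, n_options=2))  # 0.70
```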
This is an in-person presentation on July 21, 2023 (15:40 ~ 16:00 UTC).
Kenneth A. Norman
Thomas L. Griffiths
Qiong Zhang
We often use cues from our environment when we get stuck searching our memories, but prior research in memory search has not observed a facilitative effect when cues are provided after recall has ended. What accounts for this discrepancy? We propose that the content of the cues critically determines their effectiveness, and we sought to select the right cues by building a computational model of how memory search is affected by cue presentation (a process we refer to as cued memory search). We hypothesize that cued memory search consists of (1) a basic memory search process, identical to memory search without external cues as captured by the existing Context Maintenance and Retrieval model (CMR), and (2) an additional process in which a cue's context influences one's internal mental context. Formulated this way, our model (with parameters pre-determined from one group of participants) was able to predict in real time, for a new group of participants, which cues would improve memory search performance. Participants (N = 195 young adults) recalled significantly more items on trials where our model's best (vs. worst) cue was presented. Our formal model of cued memory search provides an account of why some cues are better at aiding recall: effective cues are those most similar to the remaining items, as they facilitate recall by tapping into and reactivating an unsearched area of memory. We discuss our contributions in relation to prominent theories about the effect of external cues.
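The context-updating step at the heart of this account follows the standard CMR/TCM context-evolution equation, in which the current context is blended with an incoming (here, cue-driven) context vector and renormalized. The sketch below uses random toy vectors and an arbitrary drift rate beta; it is meant only to illustrate how presenting a cue shifts the internal context toward the cue's context, not to reproduce the fitted model from the study.

```python
import numpy as np

def drift_context(c_old, c_in, beta):
    """CMR/TCM-style context evolution: blend the current context with an
    incoming context vector; rho keeps the blend at unit length (unit inputs)."""
    c_in = c_in / np.linalg.norm(c_in)
    dot = float(c_old @ c_in)
    rho = np.sqrt(1 + beta**2 * (dot**2 - 1)) - beta * dot
    c_new = rho * c_old + beta * c_in
    return c_new / np.linalg.norm(c_new)

# Toy example: a cue whose context overlaps with unsearched items pulls the
# current context toward that region of memory.
rng = np.random.default_rng(1)
c_old = rng.normal(size=16); c_old /= np.linalg.norm(c_old)
c_cue = rng.normal(size=16); c_cue /= np.linalg.norm(c_cue)

c_new = drift_context(c_old, c_cue, beta=0.6)
print("similarity to cue before:", round(float(c_old @ c_cue), 3))
print("similarity to cue after: ", round(float(c_new @ c_cue), 3))
```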
This is an in-person presentation on July 21, 2023 (16:00 ~ 16:20 UTC).
Asli Kilic
Recognition memory models explain the processes of representation, encoding, and retrieval of items, and make performance predictions. Most of these models rest on two basic stages: calculating the familiarity of a probe and making a recognition decision by comparing that familiarity to a threshold for endorsement. However, the time course of decision making during recognition has been widely ignored in the recognition modeling literature. Research mostly focused on explaining accuracy data and ignored response time (RT) findings until the advent of dynamic recognition memory models (e.g., Cox & Shiffrin, 2012, 2017; Diller et al., 2001; Malmberg, 2008; Hockley & Murdock, 1987; Osth, Jansson, Dennis, & Heathcote, 2018). In recent years, dynamic recognition modeling has achieved promising results in accounting for the major findings on RT data. In the current study, we are developing a novel dynamic version of Retrieving Effectively from Memory (REM; Shiffrin & Steyvers, 1997), one of the major recognition models. The model, called Retrieving Dynamically and Effectively from Memory (D-REM), incorporates the representation, encoding, and likelihood-calculation mechanisms of REM while adding a dynamic decision-making process based on sequential sampling. D-REM assumes that items are represented as vectors of item features. Following REM, encoding is a stochastic, error-prone process. Retrieval proceeds by comparing the test item with the memory traces, and the recognition decision is based on a likelihood calculation derived from these comparisons. During retrieval, the features of the memory traces gradually enter a buffer system in which the likelihood calculations are made. Thus, evidence as to whether the probe is old or new accumulates over time toward the decision boundaries. Accumulation continues until the evidence reaches either the “yes” or the “no” decision boundary. Memory is then updated according to the recognition decision. With this mechanism, D-REM offers a novel account of the time course of decision making during recognition. By including a time-varying boundary mechanism and a starting-point parameter, the model aims to be the most extensive dynamic model in the REM framework. Examining alternative variants of the model with differing drift-rate and boundary mechanisms will provide further evidence on the time course of evidence accumulation and response caution during a decision. We will present simulations of the standard yes-no recognition task and of recognition with a response-deadline procedure using preliminary variants of D-REM; the model will then be revised and improved based on comparisons between these variants.
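Because D-REM is still under development, the following Python sketch should be read only as an illustration of the general idea: REM's likelihood-ratio comparison between a probe and noisy episodic traces, wrapped in a simple sequential-sampling rule in which probe features are revealed one at a time and the log odds are checked against “yes”/“no” boundaries. All parameter values (w, g, c, u_star, the boundaries) are placeholders, and the accumulation scheme is a simplification, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative REM-style parameters (Shiffrin & Steyvers, 1997); placeholder values.
w, g, c, u_star = 20, 0.4, 0.7, 0.8  # features, geometric base rate, copy prob., storage prob.

def make_item():
    """Item as a vector of geometrically distributed feature values."""
    return rng.geometric(g, size=w)

def encode(item):
    """Noisy episodic trace: features stored with prob. u_star, copied correctly with prob. c."""
    trace = np.zeros(w, dtype=int)
    stored = rng.random(w) < u_star
    correct = rng.random(w) < c
    trace[stored & correct] = item[stored & correct]
    trace[stored & ~correct] = rng.geometric(g, size=int(np.sum(stored & ~correct)))
    return trace

def likelihood_ratio(probe, trace, mask):
    """REM-style likelihood ratio for one trace, using only the probe features revealed so far."""
    lam = 1.0
    for q, t in zip(probe[mask], trace[mask]):
        if t == 0:
            continue                      # unstored feature carries no evidence
        if t == q:                        # matching stored feature with value v
            v = t
            lam *= (c + (1 - c) * g * (1 - g) ** (v - 1)) / (g * (1 - g) ** (v - 1))
        else:                             # mismatching stored feature
            lam *= (1 - c)
    return lam

# Study list and its episodic traces
study = [make_item() for _ in range(8)]
traces = [encode(item) for item in study]

def recognize(probe, upper=1.5, lower=-1.5):
    """Sequential-sampling sketch: reveal probe features one at a time and stop
    when the log odds cross the "yes" (upper) or "no" (lower) boundary."""
    order = rng.permutation(w)
    mask = np.zeros(w, dtype=bool)
    for step, f in enumerate(order, start=1):
        mask[f] = True
        odds = np.mean([likelihood_ratio(probe, tr, mask) for tr in traces])
        log_odds = np.log(odds)
        if log_odds > upper:
            return "old", step
        if log_odds < lower:
            return "new", step
    return ("old" if log_odds > 0 else "new"), w

print(recognize(study[0]))     # studied probe
print(recognize(make_item()))  # unstudied probe
```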
This is an in-person presentation on July 21, 2023 (16:20 ~ 16:40 UTC).
Prof. Pernille Hemmer
Michel Regenwetter
Daniel Cavagnaro
Hypotheses in free recall experiments often predict greater average recall for one type of stimulus than for another. A frequent assumption – often implicit in statistical tests of these hypotheses – is that item recall is normally distributed. However, this assumption can be problematic in the domain of memory. Additionally, common statistical methods for testing theories can be blunt instruments, and researchers may be interested in more nuanced hypotheses that are cumbersome to test with traditional methods. For example, ideal theories might even make granular predictions about the memorability of each studied item, including that certain individual items are equally memorable. Here, we propose order-constrained models of recall data as a fruitful method of analysis that allows researchers to formulate, and test, nuanced and fine-grained hypotheses about recall. We illustrate the benefits of order-constrained modeling by re-analyzing data from a pre-registered experiment on the memorability of supernatural, bizarre, and natural concepts, formulating and testing a series of plausible and nuanced hypotheses. Order-constrained inference reveals differences in evidential support between different possible mathematical formulations of a single verbal theory.
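As a toy illustration of what an order-constrained hypothesis looks like in practice (not the inference machinery used in the paper), the snippet below places independent Beta posteriors on per-condition recall probabilities from hypothetical counts and estimates the posterior probability that one hypothesized ordering holds; the data, prior, and constraint are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical recall counts (recalled, presented) per concept type; not the study's data.
data = {"supernatural": (62, 100), "bizarre": (55, 100), "natural": (48, 100)}

# Beta(1, 1) prior on each recall probability; draw posterior samples per condition.
samples = {
    name: rng.beta(1 + k, 1 + n - k, size=100_000)
    for name, (k, n) in data.items()
}

# Posterior probability that the hypothesized order constraint holds:
# p(supernatural) >= p(bizarre) >= p(natural)
constraint = (samples["supernatural"] >= samples["bizarre"]) & (
    samples["bizarre"] >= samples["natural"]
)
print("P(order constraint | data) approx.", constraint.mean())
```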
This is an in-person presentation on July 21, 2023 (16:40 ~ 17:00 UTC).