Integrating word-form representations with global similarity computation in recognition memory
It has been well established in recognition memory paradigms that participants exhibit higher probabilities of falsely endorsing lures that are perceptually similar to studied words. Recognition memory models explain this phenomenon as a consequence of global similarity computation: choice probability is proportional to the aggregated similarity between the probe word and each of the study list words. However, to date such models have not integrated perceptual representations of the words themselves. In this work, I explore the consequences of a variety of word-form representations from the psycholinguistic literature on reading. These include representations in which similarity is a function of the number of in-position letter matches (slot codes and the both-edges representation), representations with noisy position codes (the overlap model; Gomez, Ratcliff, & Perea, 2008), and representations based on relative letter position (bigram models). Global similarity among the representations was linked to choice and response times using the linear ballistic accumulator model (Brown & Heathcote, 2008). Results demonstrated (a) a general superiority of bigram models, (b) changes in perceptual representations under shallow processing, and (c) interference from perceptual similarity comparable to that from semantic similarity, where semantic similarity was calculated using Word2Vec representations (Mikolov et al., 2013).
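To make the global similarity computation concrete, here is a minimal sketch of how a bigram-based version might be computed. This is an illustration under assumed parameters (open bigrams with a maximum gap of 2, Dice-coefficient normalization, summed similarity across the list), not the abstract's actual model specification:

```python
from itertools import combinations

def open_bigrams(word, max_gap=2):
    """Ordered letter pairs separated by at most max_gap positions (open bigrams)."""
    return {(word[i], word[j])
            for i, j in combinations(range(len(word)), 2)
            if j - i <= max_gap}

def bigram_similarity(a, b):
    """Overlap between two words' bigram sets (Dice coefficient, assumed here)."""
    ba, bb = open_bigrams(a), open_bigrams(b)
    if not ba or not bb:
        return 0.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

def global_similarity(probe, study_list):
    """Aggregated similarity between the probe and every studied word."""
    return sum(bigram_similarity(probe, w) for w in study_list)

study = ["plane", "crane", "table"]
# An orthographically similar lure yields higher global similarity
# than an unrelated word, predicting more false alarms.
print(global_similarity("plant", study) > global_similarity("mouse", study))  # True
```

In a full model, this global similarity value would serve as the drift-rate input to the linear ballistic accumulator rather than being thresholded directly.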
Thank you for sharing this interesting work. I have a couple of questions/comments: 1) In retrospect, it seems obvious that orthographic similarity should be part of a model of memory -- any thoughts on why this hasn't been done much until now? 2) Is there a reason you don't use Levenshtein distance as a measure of similarity between probe...