Systems and Architectures
Paul M Garrett
Daniel R. Little
Dr. Ami Eidels
James T. Townsend
Systems Factorial Technology (SFT) is a popular framework that has been used to investigate processing capacity across many psychological domains for more than 25 years. To date, it has been assumed that no processing resources are used for sources in which no signal is presented (i.e., for a location that can contain a signal but does not on a given trial). Hence, response times are taken to be driven purely by the "signal-containing" location or locations. This assumption is critical to the underlying mathematics of SFT's capacity coefficient measure. In this presentation, we show that stimulus locations influence response times even when they contain no signal, and that this influence has repercussions for the interpretation of processing capacity under the SFT framework, particularly in conjunctive (AND) tasks, where positive responses require the detection of signals in multiple locations. We propose a modification to the AND task that requires participants to fully identify both target locations on all trials. This modification allows a new capacity coefficient to be derived. We apply the new coefficient to novel experimental data and resolve a previously reported empirical paradox, in which observed capacity was limited in an OR detection task but super in an AND detection task. Hence, previously reported differences in processing capacity between OR and AND task designs are likely to have been spurious.
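For reference, the standard workload capacity coefficients that this abstract builds on are the OR coefficient of Townsend and Nozawa (1995) and the AND coefficient of Townsend and Wenger (2004); the sketch below simply restates them in terms of the cumulative hazard H and cumulative reverse hazard K of the response time distributions (the modified AND coefficient proposed above is not reproduced here).

```latex
% Standard SFT capacity coefficients; subscripts A, B, AB denote the
% single-signal and double-signal conditions, S_X(t) is the survivor
% function and F_X(t) the distribution function of response times.
H_{X}(t) = -\log S_{X}(t), \qquad K_{X}(t) = \log F_{X}(t)
C_{\mathrm{OR}}(t) = \frac{H_{AB}(t)}{H_{A}(t) + H_{B}(t)}, \qquad
C_{\mathrm{AND}}(t) = \frac{K_{A}(t) + K_{B}(t)}{K_{AB}(t)}
```

Both coefficients equal 1 under the unlimited-capacity, independent, parallel baseline; values above 1 indicate super capacity and values below 1 indicate limited capacity.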
Dr. Ami Eidels
Bianca Belevski
Prof. Simon Dennis
Cues can be used to improve performance on memory recall tasks, and additional cues provide further benefit, presumably by narrowing the search space. Problems that require the integration of two or more cues are referred to as memory intersection problems, or multiply constrained memory problems. Multiple cues in such problems can be considered in parallel, with two (or more) cues processed at the same time, or in serial, with one cue considered after the other. The type of strategy, serial or parallel, is essential information for the development of theories of memory, yet evidence to date has been inconclusive. Using a novel application of the powerful Systems Factorial Technology (Townsend & Nozawa, 1995), we show that participants use two cues in parallel in free recall tasks, a finding that contradicts two recent publications in this area. We then show that in a slightly modified variant of our method, constructed as a recognition task, most participants also used a parallel strategy, but a reliable subset of participants used a serial strategy. Our findings provide important constraints for future theoretical development, point to strategy differences across recall- and recognition-based intersection memory tasks, and highlight the importance of tightly controlled methodological and analytic frameworks to overcome issues of serial/parallel model mimicry.
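For readers unfamiliar with SFT's serial/parallel diagnostics, the standard tool is the survivor interaction contrast computed from a factorial manipulation of cue salience; the exact manipulation used in the reported experiments is an assumption here, so the sketch below only restates the general definition (Townsend & Nozawa, 1995).

```latex
% Survivor interaction contrast; subscripts give the salience
% (H = high, L = low) of cue 1 and cue 2.
SIC(t) = \left[ S_{LL}(t) - S_{LH}(t) \right] - \left[ S_{HL}(t) - S_{HH}(t) \right]
```

A SIC that is flat at zero is the signature of serial self-terminating processing, an entirely positive SIC of parallel self-terminating processing, an entirely negative SIC of parallel exhaustive processing, and a negative-then-positive SIC of serial exhaustive processing.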
Matt Ross
Sylvain Chartier
One challenge for artificial neural networks is stabilizing on a desired response within a previously learned series of responses. This process is akin to going from a line attractor to a point attractor. Since a single pattern can lead to multiple outcomes, the network faces a one-to-many problem. We propose using context, information given by the environment, to differentiate between stimuli associated with themselves (a point attractor) and with the next stimulus in the series (a line attractor). To test this with multi-step pattern time series, a Bidirectional Associative Memory (BAM) is used with alphanumeric stimuli as inputs. These stimuli are arranged in three different series of increasing difficulty, where letters represent the stimuli and numbers represent the context: one long time series, two time series of different lengths, and three independent time series. Each time series has its own identifying numeric context. To determine which letter the BAM needs to converge on, the desired response in the specified context is compared with the output at each iteration during recall. When the desired response is reached, the context is changed, causing the network to switch attractors and thereby allowing the BAM to correctly stabilize on the desired output. This provides an effective solution to the one-to-many problem and allows the BAM to stabilize on the desired response, regardless of the length of the series or the level of correlation between stimuli. This could represent how the most effective behaviour is selected from a series of behaviours to solve a given problem.
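A minimal sketch of the recall procedure described above is given below, assuming a simple sign-activation BAM; the actual network uses Chartier's nonlinear transmission function and learning rule, and all variable names here are illustrative only.

```python
import numpy as np

def bam_recall(W, V, stimulus, series_context, identity_context, desired, max_iter=100):
    """Iterative BAM recall with a context switch (illustrative sketch).

    W : weights from the input layer (stimulus + context units) to the output layer
    V : weights from the output layer back to the input layer
    The context units are clamped to the series context until the desired
    response appears in the output; the context is then switched to the
    identity context so the network falls into the point attractor for that
    letter instead of moving further along the line attractor.
    """
    context = series_context
    x = np.concatenate([stimulus, context])
    y = np.sign(W @ x)                                 # input layer -> output layer
    for _ in range(max_iter):
        if np.array_equal(y, desired):
            if context is identity_context:            # already switched: stabilized
                return y
            context = identity_context                 # switch attractors
        feedback = np.sign(V @ y)                      # output layer -> input layer
        x = np.concatenate([feedback[: len(stimulus)], context])  # clamp context units
        y = np.sign(W @ x)
    return y
```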
Mark Schurgin
John Wixted
Over the past decade, many studies have used mixture models to interpret continuous report memory data, drawing a distinction between the number of items represented and the precision of those representations (e.g., Zhang & Luck, 2008). Such models, and subsequent expansions of these models to account for additional phenomena like variable precision, have led to hundreds of influential claims about the nature of consciousness, working memory, and long-term memory. Here we show that a simple generalization of signal detection theory (termed TCC, the target confusability competition model; https://bradylab.ucsd.edu/tcc/) accurately accounts for memory error distributions in much more parsimonious terms, and can make novel predictions that are entirely inconsistent with mixture-based theories. For example, TCC shows that measuring how accurately people can discriminate between extremely dissimilar items (study red; then report whether the studied item was red or green) is completely sufficient to predict, with no free parameters, the entire distribution of errors that arises in a continuous report task (report what color you saw on a color wheel). Because this is inconsistent with claims that the continuous report distribution arises from multiple distinct parameters, like guessing, precision, and variable precision, TCC suggests such distinctions are illusory. Overall, with only a single free parameter, memory strength (d'), TCC accurately accounts for data from n-AFC, change detection, and continuous report tasks across a variety of working memory set sizes, encoding times, and delays, as well as accounting for long-term memory continuous report tasks. Thus, TCC suggests a major revision of previous research on visual memory.
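To make the TCC account concrete, here is a minimal simulation sketch; the exponential similarity function and its width parameter are assumptions for illustration (the published model uses an empirically measured psychophysical similarity curve), but the core signal-detection logic, in which every colour generates a noisy familiarity signal scaled by d' and the most familiar colour is reported, follows the description above.

```python
import numpy as np

def simulate_tcc(d_prime, n_trials=10000, n_colors=360, tau=20.0, seed=None):
    """Simulate continuous-report errors from a TCC-style model (sketch)."""
    rng = np.random.default_rng(seed)
    # distance of every colour-wheel value from the studied colour (index 0)
    distance = np.minimum(np.arange(n_colors), n_colors - np.arange(n_colors))
    similarity = np.exp(-distance / tau)          # assumed similarity function
    errors = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        # each colour produces a familiarity signal: mean = d' * similarity, sd = 1
        signals = rng.normal(loc=d_prime * similarity, scale=1.0)
        errors[i] = np.argmax(signals)            # report the most familiar colour
    # convert responses to signed errors in degrees, range [-180, 180)
    return (errors + n_colors // 2) % n_colors - n_colors // 2
```

With a single d', such a simulation produces the long-tailed error distributions that mixture models attribute to separate guessing and precision parameters.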
Prof. Joe Houpt
In this work we derive and illustrate a Bayesian time series model of the capacity coefficient to investigate processing efficiency across time. The workload capacity coefficient is a well-established measure from Systems Factorial Technology that allows researchers to quantify a participant's multisource information processing efficiency. In most applications of the capacity coefficient, the analyses assume stationary performance across time. However, in many contexts participants' performance varies across time (e.g., vigilance decrements, training). This variation could be due either to changes in the processing of each source or to changes in the efficiency of combining the sources. A time-varying capacity measure would be valuable in determining the nature of the change over time, but dropping the stationarity assumption results in a severe loss of power. To estimate a time-varying capacity coefficient, we developed a measure relying on Bayesian estimation. We used the Weibull distribution to approximately characterize the processing time of each source, with an inverse gamma prior on the scale parameter and a known shape. This provided a tractable way to update the prior estimate for real-time estimation of capacity. The prior was updated by weighting each observation's contribution to the likelihood by how recently it occurred. Samples from the posterior Weibull estimates were then combined using the appropriate capacity coefficient equation to obtain posterior distributions for the capacity coefficient. We demonstrate the approach with both simulated and human data. We believe the time-varying capacity coefficient will be a valuable tool for measuring cognitive performance in applications such as adaptive interface design.
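As a sketch of how such a recency-weighted conjugate update and the resulting posterior capacity coefficient could be computed: the reparameterisation, the exponential weighting scheme, and all parameter names below are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def weighted_weibull_update(alpha0, beta0, rts, shape, half_life=50.0):
    """Recency-weighted inverse-gamma update for theta = scale**shape,
    given Weibull response times with a known shape parameter."""
    rts = np.asarray(rts, dtype=float)
    age = np.arange(len(rts))[::-1]                 # 0 = most recent trial
    w = 0.5 ** (age / half_life)                    # assumed exponential recency weights
    alpha = alpha0 + w.sum()                        # conjugate update of the
    beta = beta0 + np.sum(w * rts ** shape)         # inverse-gamma parameters
    return alpha, beta

def capacity_or_posterior(post_a, post_b, post_ab, shape, t, n=4000, seed=None):
    """Posterior samples of the OR capacity coefficient at time t, where each
    post_* is an (alpha, beta) pair produced by the update above."""
    rng = np.random.default_rng(seed)
    def cum_hazard(alpha, beta):
        theta = 1.0 / rng.gamma(alpha, 1.0 / beta, size=n)   # inverse-gamma draws
        return t ** shape / theta                   # Weibull cumulative hazard t^k / theta
    return cum_hazard(*post_ab) / (cum_hazard(*post_a) + cum_hazard(*post_b))
```

The OR form of the coefficient is used here only for concreteness; the same posterior draws could feed whichever capacity coefficient equation is appropriate for the task.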
Andrew Neal
Simon Farrell
Prof. Andrew Heathcote
We present a unified model of the spatial and temporal dynamics of motivation during goal pursuit. We use the model to integrate and compare six theoretical perspectives that make different predictions about how motivation changes as a person comes closer to achieving a goal, as a deadline looms, and as a function of whether the goal is being approached or avoided. We fit the model to data from three experiments that examine how these factors combine to produce changes in motivation over time. We show that motivation changes in a complex manner that cannot be accounted for by any one previous theoretical perspective, but that is well-characterized by our unified model. Our findings highlight the importance of theoretical integration when attempting to understand the factors driving motivation and decision making in the context of goal pursuit.
Bernd Westphal
Andreas Podelski
Psychological theories can be validated by comparing the predictions of an ACT-R model, which implements the psychological theory, to experimental data. For this approach, the model needs to be valid, i.e., the implementation of the model must not contain defects that skew the model's predictions and may thus lead to acceptance of an incorrect psychological theory or rejection of a correct one. In recent work we presented formal analysis methods allowing for the exhaustive exploration of ACT-R models for defects. These methods rely on manual formalisations of ACT-R models and of the architecture, which determines possible model executions (e.g., by rule selection or buffer actions). Both formalisation steps present threats to the validity of the analysis: defects in an ACT-R model may remain undetected if either formalisation contains errors. Our contribution addresses both formalisation activities. We present a Timed Automata-based framework for the operational formal description of cognitive architectures. The framework defines interfaces and invariants such that each implementation of the interface (by particular formalisations of modules) supports the execution of ACT-R model formalisations. We provide one exemplary implementation as a formalisation of ACT-R's textual architecture description. Given the well-structured formalisation of the architecture, the formalisation of a model can be automated. Overall, we obtain an automatic analysis of ACT-R models for defects on a formal model of the ACT-R architecture, as well as a strong perspective for the definition and analysis of other module implementations or of whole other architectures.
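As a loose illustration only (all names and numbers below are hypothetical and not taken from the framework described above), the operational flavour of a timed-automaton description of an ACT-R module can be sketched as locations, a clock, and guarded transitions whose actions model buffer changes, e.g. a procedural module that must wait the default 50 ms production-firing latency before a rule's buffer actions take effect.

```python
from dataclasses import dataclass

@dataclass
class Transition:
    source: str          # location the transition leaves
    target: str          # location it enters
    guard: float         # minimum clock value (seconds) before it may fire
    action: callable     # buffer modification applied when it fires

@dataclass
class TimedModule:
    location: str
    transitions: list
    clock: float = 0.0

    def step(self, dt):
        """Advance time and fire the first enabled transition, if any."""
        self.clock += dt
        for tr in self.transitions:
            if tr.source == self.location and self.clock >= tr.guard:
                tr.action()
                self.location, self.clock = tr.target, 0.0
                return tr
        return None

# Hypothetical example: a rule fires after the 50 ms latency and updates the goal buffer.
goal = {"state": "start"}
fire_rule = Transition("conflict-resolution", "idle", guard=0.05,
                       action=lambda: goal.update(state="retrieve"))
procedural = TimedModule("conflict-resolution", [fire_rule])
procedural.step(0.05)   # clock reaches the guard: rule fires, goal buffer updated
```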