Psychometrics
Giulia Pintea
An event in memory has many qualitatively different attributes. Besides the informational content, such as which word was presented on a list at a given time, there is also information about the temporal and ordinal properties of the event relative to other events. A multinomial processing tree (MPT) model is provided for measuring four different states of memory for the order of any arbitrary event. The statistical properties of the model are described in this paper, and results from three experiments are reported. The experimental evidence shows that item knowledge is acquired quickly, whereas the corresponding acquisition of ordinal information takes many more training trials.
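The abstract does not spell out the model's trees or parameters, so the following is only a minimal sketch of how an MPT likelihood can be written and fitted by maximum likelihood in Python; the two-parameter tree and the response counts are hypothetical and are not the paper's four-state model.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-parameter MPT for order memory (not the paper's four-state
# model): with probability o the item's list position is retrieved exactly
# ("correct"); otherwise partial order information (probability p) yields an
# adjacent-position response, and no order information yields a distant one.
def category_probs(o, p):
    return np.array([o, (1 - o) * p, (1 - o) * (1 - p)])

def neg_log_lik(params, counts):
    probs = category_probs(*params)
    return -np.sum(counts * np.log(probs))

counts = np.array([60, 25, 15])              # hypothetical response frequencies
result = minimize(neg_log_lik, x0=[0.5, 0.5], args=(counts,),
                  bounds=[(1e-6, 1 - 1e-6)] * 2)
print(result.x)                              # maximum-likelihood estimates of o, p
```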
Dr. Aline Bompas
Human performance is characterised by endogenous variability, which often shows dependency over time. However, studies on these temporal structures typically go no further than showing that the structures exist in a particular data series. As such, their underlying mechanisms and their informativeness for cognitive psychology are largely unknown. Two recent studies reported between-subject correlations between temporal structures and task performance on the same data series, but with contrasting results. In the current work, we investigated the intra-individual repeatability and inter-individual correlates of temporal structures in sensorimotor variability across the most commonly used measures – aiming to examine to what extent these structures are informative for studying individual differences. To capture endogenous sensorimotor variability, participants completed the Metronome Task – in which they press in synchrony with a regular tone. Occasionally, participants were presented with a thought probe and asked to rate their current subjective attentional state. Results indicate that autocorrelation at lag 1 and Power Spectral Density (PSD) slopes show good repeatability, Detrended Fluctuation Analysis (DFA) slopes show only moderate repeatability, and ARFIMA(1,d,1) parameters show poor repeatability. Autocorrelation and PSD, and to a lesser extent DFA, correlated with task performance on the same data series – such that well-performing participants showed less dependency. However, temporal structures did not correlate with mean attentional state ratings, nor with self-assessed ADHD tendencies, mind wandering, or impulsivity – contradicting the assumption that these structures arise from fluctuations in internal meta-cognitive states. Overall, while temporal structure may be a reliable trait, its usefulness for studying individual differences remains to be established.
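For readers unfamiliar with these measures, a minimal sketch of two of them (lag-1 autocorrelation and the PSD slope) computed on a simulated series is given below; the white-noise series and the estimation details are placeholders, not the study's preprocessing pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
series = rng.normal(0.0, 0.05, size=512)      # hypothetical tap asynchronies (s)

# Lag-1 autocorrelation: correlation of the series with itself shifted by one.
ac1 = np.corrcoef(series[:-1], series[1:])[0, 1]

# PSD slope: regress log power on log frequency; the slope is near 0 for white
# noise and around -1 for 1/f-like long-range dependency.
freqs = np.fft.rfftfreq(len(series))[1:]              # drop the zero frequency
power = np.abs(np.fft.rfft(series - series.mean()))[1:] ** 2
slope = np.polyfit(np.log(freqs), np.log(power), 1)[0]

print(f"lag-1 autocorrelation: {ac1:.3f}, PSD slope: {slope:.3f}")
```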
Miguel A. Vadillo
Zoltán Dienes
David R. Shanks
As a method to investigate the scope of unconscious mental processes, researchers frequently obtain concurrent measures of implicit task performance and explicit stimulus awareness across participants. Even though both measures might be significantly greater than zero, the correlation between them might not be, encouraging the inference that an unconscious process drives task performance. We highlight the pitfalls of this null-correlation approach with reference to a recent study by Salvador, Berkovitch, Vinckier, Cohen, Naccache, Dehaene, and Gaillard (2018), who reported a non-significant correlation between the extent to which memory was suppressed by a Think/No-Think cue and an index of cue awareness. First, in the Null Hypothesis Significance Testing (NHST) framework, it is inappropriate to interpret failure to reject the null hypothesis (i.e., correlation = 0) as evidence for the null. Instead, a Bayesian approach is needed to compare the support of the data for the null hypothesis versus the alternative (i.e., correlation > 0). Second, the often low reliabilities of the performance and awareness measures can attenuate the correlation, making a positive correlation appear to be zero. Hence, the correlation must be inferred in a way that disattenuates the weakening effect of measurement (trial) error. We apply two Bayesian models that account for measurement error to the Salvador et al. data. The results provide at best anecdotal support for the claimed unconscious nature of participants’ memory-suppression performance. Researchers are urged to analyze correlational data involving measures of performance and awareness with Bayesian methods that account for measurement error rather than with NHST methods.
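To build intuition for the attenuation argument, the snippet below applies Spearman's classical disattenuation formula, r_true = r_obs / sqrt(rel_x * rel_y), to hypothetical numbers; the paper itself uses Bayesian models that handle trial-level error, so this is only an illustration of why low reliabilities can make a genuine correlation look like zero.

```python
import numpy as np

# Spearman's classical disattenuation: an observed correlation is bounded by
# sqrt(rel_x * rel_y), so a genuine correlation can look near zero when the
# two measures are unreliable. (Shown only for intuition; the paper's Bayesian
# models account for trial-level measurement error directly.)
def disattenuate(r_obs, rel_x, rel_y):
    return r_obs / np.sqrt(rel_x * rel_y)

r_obs = 0.15                         # hypothetical observed correlation
rel_perf, rel_aware = 0.40, 0.35     # hypothetical reliabilities of the measures
print(disattenuate(r_obs, rel_perf, rel_aware))   # ~0.40 after correction
```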
Hasan Uzun
Dr. Christopher Doble
Jeffrey Matayoshi
The ALEKS (Assessment and LEarning in Knowledge Spaces) educational software system is an instantiation of knowledge space theory (KST) that has been used by millions of students in mathematics, chemistry, statistics, and accounting. The software employs a probabilistic assessment based on KST for placement into an appropriate course or curriculum, a learning mode in which students are guided through course material according to a knowledge structure, and regularly spaced re-assessments that are also based on KST. In each of these aspects, the student's interactions with the system are guided by the theory and by insights learned from student data. We present several relationships between theory and data for the ALEKS system. We begin by surveying the ALEKS system and examining some practical aspects of implementing KST on a large scale. We then study the effectiveness of the ALEKS assessment using both standard statistical measures and ones adapted to the KST context. Finally, we examine the learning process in ALEKS via statistics for the learning mode and its associated knowledge structures.
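As a toy illustration of the KST notion underlying ALEKS, the snippet below checks whether a small hypothetical family of knowledge states forms a knowledge space, i.e., contains the empty set and the full domain and is closed under union; the four-item domain and its states are invented for illustration and are far smaller than real ALEKS structures.

```python
from itertools import combinations

# A knowledge space is a family of knowledge states (subsets of a domain of
# items) that contains the empty set and the full domain and is closed under
# union. The domain and states below are a made-up example.
domain = frozenset("abcd")
states = {frozenset(), frozenset("a"), frozenset("b"),
          frozenset("ab"), frozenset("abc"), domain}

def is_knowledge_space(states, domain):
    if frozenset() not in states or domain not in states:
        return False
    return all(s | t in states for s, t in combinations(states, 2))

print(is_knowledge_space(states, domain))   # True for this toy family
```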
Thorsten Pachur
Benjamin Scheibehenne
Computational modeling of cognition allows measurement of latent psychological variables by means of free model parameters. The estimation and interpretation of these variables is impaired, however, if parameters strongly correlate with each other. We suggest that strong parameter intercorrelations are especially likely to emerge in models that combine a subjective value function with a probabilistic choice rule—a common structure in the literature. We demonstrate high intercorrelation between parameters in the value function and the probabilistic choice rule across several prominent computational models, including models of risky choice (cumulative prospect theory), categorization (the generalized context model), and memory (the SIMPLE model of free recall). Based on simulation studies, we show that the presence of parameter intercorrelations hampers estimation accuracy, in particular the ability to detect group differences on the parameters and to detect associations of the parameters with external variables. We show that these problems can be alleviated by changing the models’ error component, for example by assuming parameter stochasticity or a constant error term. Our analyses highlight a common but often neglected problem of computational modeling of cognition and point to ways in which the design and application of such models can be improved.
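A minimal sketch of the structural issue (not cumulative prospect theory itself) is given below: a power value function combined with a logistic ("softmax") choice rule, in which the curvature parameter and the choice-sensitivity parameter jointly scale value differences, so that different parameter pairs can mimic each other's predictions; the outcome values and parameter settings are hypothetical.

```python
import numpy as np

# Power value function plus logistic choice rule: the generic
# value-function-plus-choice-rule structure the abstract refers to.
def choice_prob(x_a, x_b, alpha, theta):
    v_a, v_b = x_a ** alpha, x_b ** alpha               # subjective values
    return 1.0 / (1.0 + np.exp(-theta * (v_a - v_b)))   # P(choose option A)

# Two different (alpha, theta) pairs give nearly the same prediction for this
# hypothetical pair of outcomes, illustrating how the parameters can trade off
# and hence intercorrelate when estimated from data.
print(choice_prob(10.0, 8.0, alpha=0.8, theta=1.0))   # ~0.74
print(choice_prob(10.0, 8.0, alpha=0.5, theta=3.1))   # ~0.74
```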