Session 8: Friday 12 February, 10am-11am
IRT models require scoring functions, i.e., the assignment of scores to response categories or scale points. These scoring functions can play a role in measuring response styles or biases, as well as in conventional IRT quantities such as expected score functions or item information functions. There are potentially useful connections between scoring functions and functions that measure the distance between an empirical probability distribution and a hypothetical reference distribution. Every IRT scoring function can be interpreted as corresponding to a distance between the empirical pdf, f, and a referent pdf, g: a set of destination bins is assigned to match the scoring function maxima, and g is then constructed by minimizing the distance between f and g under a chosen distance metric. Distance functions measuring the difference between g and f can be normed and also "inverted" to measure the similarity between f and a response style operationalized by g. Together, these interconnected functions provide a novel way to measure response style or response bias.
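As a minimal, hypothetical illustration (not the authors' formulation), the distance between an empirical pdf f and a reference pdf g operationalizing an extreme response style can be normed and inverted into a similarity index; the sketch below uses the total variation distance:

```python
# Hypothetical illustration: similarity between an empirical response
# distribution and a reference distribution operationalising an extreme
# response style, via a normed and "inverted" distance. Not the authors' method.
import numpy as np

def tv_distance(f, g):
    """Total variation distance between two discrete pdfs on the same support (lies in [0, 1])."""
    return 0.5 * np.abs(np.asarray(f) - np.asarray(g)).sum()

def style_similarity(f, g):
    """Inverted, normed distance: 1 means f matches the reference style g exactly."""
    return 1.0 - tv_distance(f, g)

f = np.array([0.30, 0.10, 0.05, 0.15, 0.40])   # observed proportions on a 5-point scale
g = np.array([0.50, 0.00, 0.00, 0.00, 0.50])   # reference pdf for extreme responding
print(style_similarity(f, g))                  # closer to 1 => more extreme-responding
```

Because the total variation distance is already bounded in [0, 1], subtracting it from 1 gives a similarity score on the same scale; other distance metrics would need explicit norming first.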
Prof. Eric Beh
At the heart of analysing contingency tables is the testing of association between two or more categorical (qualitative) variables. The most common statistical technique for analysing this association is Pearson’s chi-squared test of independence, which identifies whether a statistically significant association exists between the variables. However, if such an association exists, the test does not reveal its nature. This can be done by objectively finding scores that maximise the association and that also determine which rows (or columns) of the table contribute in similar (or different) ways to it. One method for identifying these scores is reciprocal averaging, a powerful statistical tool with a wide variety of applications in many disciplines, notably in the ecological, psychological, health and social sciences. This talk will provide a mathematical description of the classical approach to reciprocal averaging and discuss new insights and developments of the method.
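As a rough illustration of the classical algorithm (using an arbitrary toy table, not data from the talk), reciprocal averaging alternately replaces row scores with weighted averages of the column scores and vice versa, rescaling at each step; the rescaling factor converges to the first non-trivial eigenvalue (the squared maximal row-column correlation):

```python
# Sketch of classical reciprocal averaging on a toy two-way contingency table.
# Illustrative only; the table and starting scores are arbitrary.
import numpy as np

def reciprocal_averaging(N, n_iter=1000, tol=1e-12):
    N = np.asarray(N, dtype=float)
    r, c, n = N.sum(axis=1), N.sum(axis=0), N.sum()
    y = np.arange(N.shape[1], dtype=float)       # arbitrary starting column scores
    lam = 0.0
    for _ in range(n_iter):
        x = N @ y / r                            # row scores: weighted averages of column scores
        y_new = N.T @ x / c                      # column scores: weighted averages of row scores
        y_new -= (c @ y_new) / n                 # centre (removes the trivial constant solution)
        lam_new = np.sqrt((c @ y_new ** 2) / n)  # shrinkage factor per full cycle
        y_new /= lam_new                         # standardise to unit weighted variance
        if abs(lam_new - lam) < tol:
            y, lam = y_new, lam_new
            break
        y, lam = y_new, lam_new
    x = N @ y / r
    return x, y, lam   # lam approximates the squared maximal row-column correlation

N = [[20,  5,  2],
     [10, 15,  5],
     [ 3, 10, 20]]
row_scores, col_scores, lam = reciprocal_averaging(N)
print(np.round(row_scores, 3), np.round(col_scores, 3), round(float(lam), 3))
```

At convergence, the row and column scores correspond (up to scaling) to the first non-trivial dimension of a correspondence analysis of the table.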
Jeremy B. Caplan
Models of association memory make predictions about within-pair order (AB vs. BA), implying that order recognition of a retrieved pair should be either at chance or perfect. Behaviour contradicts both predictions: when the pair can be recalled, order judgment is above chance, but still fairly low. We test four extensions of convolution-based models, which otherwise predict chance-level order judgment, in which pair order is encoded as: (1) positional item features; (2) position-specific permutations of item features; (3) position-item associations; or (4) position vectors added to the items. All models achieved close fits to averaged order recognition data without compromising associative symmetry. However, unlike the other models, model 3 could not account for individual differences in order recognition without adopting extreme parameter values. In sum, simultaneously satisfying benchmark characteristics of association and order memory provides challenging constraints for existing models of association.
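As a toy sketch of the convolution-based framework these extensions build on (illustrative only; the dimensionality, the permutation-based order tags, and the match-score "order judgment" below are assumptions, not the models as fitted), two item vectors can be bound by circular convolution after applying position-specific permutations, so that the AB and BA bindings become distinguishable:

```python
# Toy sketch of a convolution-based (holographic) associative memory in which
# pair order is carried by position-specific permutations of item features
# (variant 2 in the abstract). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 1024

def cconv(a, b):   # circular convolution (binding)
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):   # circular correlation (approximate unbinding)
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def rand_vec():
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

A, B = rand_vec(), rand_vec()
perm_left, perm_right = rng.permutation(n), rng.permutation(n)

# Study "A then B": permute each item according to its position, then bind
memory = cconv(A[perm_left], B[perm_right])

# Cued recall of the partner given A in the left position
decoded = ccorr(A[perm_left], memory)
print("match to B (right):", round(float(decoded @ B[perm_right]), 2))   # high
# Order probe: the reversed pairing matches the stored trace far less well
print("match to reversed order:", round(float(ccorr(B[perm_left], memory) @ A[perm_right]), 2))  # near zero
```

Without the permutations, the AB and BA traces are identical (circular convolution is commutative), which is why the unmodified model predicts chance-level order judgments.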
Jon-Paul Cavallaro
Mr. Gavin Cooper
Caroline Kuhne
Guy Hawkins
Dr. Scott Brown
Bayesian hierarchical modelling techniques are widely used in mathematical psychology; however, many existing estimation methods are restricted to extensions of previous approaches. Following a paper by Gunawan et al. (2020, JMP), we present a new R package for a novel sampling methodology: Particle Metropolis within Gibbs (PMwG). This method of particle Markov chain Monte Carlo provides a more efficient and reliable approach to hierarchical model estimation. The R package provides simple functionality, allowing models to be built from the ground up by the user, and is easily parallelisable. Further, the method allows the full parameter covariance matrix to be estimated, which is highly useful in joint-modelling applications. Here, we introduce the PMwG methodology, provide a short tutorial for the ready-to-use R package, and highlight several extensions of the method beyond the original paper.
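To convey the flavour of the sampler (this is not the package's interface, and a far simpler model than the cognitive models it targets), the sketch below alternates conjugate Gibbs updates of group-level parameters with a particle-style conditional importance-sampling update of each subject-level effect in a toy hierarchical normal model:

```python
# Toy sketch of the Particle Metropolis within Gibbs idea for a hierarchical
# normal model: y_ji ~ N(alpha_j, 1), alpha_j ~ N(mu, tau^2). Illustrative
# only; priors, proposal, and particle count are assumptions.
import numpy as np

rng = np.random.default_rng(1)

J, n_trials = 20, 50                       # subjects and trials per subject
true_alpha = rng.normal(1.0, 0.5, J)
y = rng.normal(true_alpha[:, None], 1.0, (J, n_trials))

mu, tau2 = 0.0, 1.0
alpha = np.zeros(J)
K = 100                                    # particles per subject
mu_draws = []

for it in range(1000):
    # Gibbs step: group-level parameters given the subject effects
    mu = rng.normal(alpha.mean(), np.sqrt(tau2 / J))                  # flat prior on mu
    tau2 = 1.0 / rng.gamma(1.0 + J / 2,                               # IG(1, 1) prior on tau^2
                           1.0 / (1.0 + 0.5 * np.sum((alpha - mu) ** 2)))
    # Particle step: conditional importance sampling for each subject effect,
    # proposing from the group-level distribution and retaining the current value
    for j in range(J):
        particles = rng.normal(mu, np.sqrt(tau2), K)
        particles[0] = alpha[j]                                       # keep current draw among the particles
        logw = -0.5 * ((y[j] - particles[:, None]) ** 2).sum(axis=1)  # likelihood weights (proposal = prior)
        w = np.exp(logw - logw.max())
        alpha[j] = rng.choice(particles, p=w / w.sum())
    if it >= 500:
        mu_draws.append(mu)

print("posterior mean of mu:", round(float(np.mean(mu_draws)), 2), "(generating value 1.0)")
```

Retaining the current value among the particles is what makes the resampling a valid conditional update rather than plain importance sampling; the full method described by Gunawan et al. (2020) applies the same idea with richer proposal distributions and a full covariance matrix over the random effects.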
Dr. Saskia Bollmann
Prof. Markus Barth
Prof. Ross Cunnington
Dr. Alexander Puckett
Ashley York
Functional MRI (fMRI) at ultra-high field (7T) is increasingly being used to resolve fine detail on and across the cortex. Whereas most layer/laminar fMRI studies probe the degree of activation at varying depths by averaging all responses at a given depth, far less work has assessed the consistency of the spatial distribution of those responses across depth. To address this, we performed a depth-dependent analysis of bottom-up- and top-down-driven somatotopic digit maps in human S1. Maps were generated via phase-encoded vibrotactile stimulation (sensory condition; bottom-up maps) or by sweeping attention across the fingertips using the Attentional Drift Design (attention condition; top-down maps). High-resolution anatomical (MP2RAGE; 0.5 mm) and functional BOLD (3D-EPI; 0.8 mm) imaging data were acquired using a Siemens Magnetom 7T scanner. We segmented the anatomical data to generate a family of surfaces at specific cortical depths, interpolated the functional data onto them, and applied within-layer smoothing before running delay analyses for both experimental conditions. Our findings in S1 are in line with previous work in visual cortex examining how gradient-echo EPI responses vary across depth, with the strongest and most spatially spread responses found most superficially. Notably, smoothing the data tangential to the surface was effective in reducing variance in the maps, particularly at the deepest cortical depths, while maintaining each digit representation.
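For readers unfamiliar with phase-encoded (travelling-wave) mapping, the delay analysis is commonly implemented by taking each vertex's Fourier component at the stimulation frequency; the phase assigns the vertex to a point in the stimulation cycle (here, a digit). A minimal sketch under that assumption, using synthetic data rather than the study's pipeline:

```python
# Minimal sketch of a phase-encoded delay analysis on synthetic time series.
# Illustrative only; surface generation, interpolation, and smoothing from the
# actual pipeline are not reproduced.
import numpy as np

rng = np.random.default_rng(2)
n_vols, tr, n_cycles = 200, 2.0, 8
t = np.arange(n_vols) * tr
f_stim = n_cycles / (n_vols * tr)               # stimulation frequency (Hz)

# Fake responses for 5 "vertices", each delayed by a different phase
true_delays = np.linspace(0, 2 * np.pi, 5, endpoint=False)
ts = np.cos(2 * np.pi * f_stim * t[None, :] - true_delays[:, None])
ts = ts + 0.3 * rng.standard_normal(ts.shape)

spec = np.fft.rfft(ts, axis=1)
k = n_cycles                                    # frequency bin of the stimulation frequency
delay = (-np.angle(spec[:, k])) % (2 * np.pi)   # estimated delay (phase) per vertex
coherence = np.abs(spec[:, k]) / np.sqrt((np.abs(spec[:, 1:]) ** 2).sum(axis=1))

print(np.round(delay, 2))        # recovers true_delays up to noise and 2*pi wrap-around
print(np.round(coherence, 2))    # signal strength at the stimulation frequency
```

In the actual analysis this kind of per-vertex delay estimate is computed on each depth surface, and the resulting digit maps are then compared across depths.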
Raine Vickers-Jones
Dr. David Sewell
Dr. Timothy Ballard
Judgments regarding replicability are vital to scientific progress. The metaphor of “standing on the shoulders of giants” encapsulates the notion that progress is made when new discoveries build on previous findings. Yet attempts to build on findings that are not replicable could mean a great deal of time, effort, and money wasted. In light of the recent “crisis of confidence” in psychological science, the ability to accurately judge the replicability of findings may be more important than ever. In this Registered Report, we examine the factors that influence psychological scientists’ confidence in the replicability of findings. We recruited corresponding authors of articles published in psychology journals between 2014 and 2018 to complete a brief survey in which they were asked to consider 76 specific study attributes that might bear upon the replicability of a finding (e.g., preregistration, sample size, statistical methods). Participants were asked to rate the extent to which information regarding each attribute increased or decreased their confidence in the finding being replicated. We examined the extent to which each research attribute influenced average confidence in replicability, whether there were distinct underlying factors that influenced these judgments, and whether there were individual differences in the issues that participants considered. The conclusions reveal how certain research practices affect other researchers’ perceptions of robustness. We hope our findings will help encourage the use of practices that promote replicability, and by extension, the cumulative progress of psychological science.