Extending the basic local independence model for the assessment of (un)learning items in knowledge space theory
Probabilistic Knowledge Space Theory (PKST; Doignon & Falmagne, 1999) provides a set-theoretic framework for assessing a subject's mastery of items within a knowledge domain while accounting for response errors (i.e., careless errors and lucky guesses). For use in longitudinal contexts, a skill-based extension of PKST has been suggested that incorporates two points of measurement (Anselmi et al., 2017; Stefanutti et al., 2011), where skills may be gained or lost from one point of measurement to the next, and the associated parameters for gaining and losing skills may vary between multiple groups. For some of these models, MATLAB code for maximum likelihood parameter estimation via the expectation-maximization algorithm (ML-EM) is available. A known drawback of ML-EM, its tendency to inflate the response error probabilities, is dealt with there by introducing (arbitrary) upper bounds for these parameters. In the present work, we develop models that extend the Basic Local Independence Model (BLIM) of PKST with parameters for gaining (or losing) item mastery between two points of measurement. We establish ML-EM parameter estimation and, in order to avoid parameter inflation, both a minimum-discrepancy (MD) method that minimizes response errors and a hybrid MDML method (Heller & Wickelmaier, 2013). All estimation methods are implemented in R. Results on parameter recovery and identifiability are presented.
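To make the model being extended concrete, the following sketch states the standard form of the BLIM as defined by Doignon and Falmagne (1999); the notation (item set Q, knowledge structure \mathcal{K}, state probabilities \pi_K, careless error rates \beta_q, lucky guess rates \eta_q) is the standard one and is not specific to the present extension:

\[
P(R \mid K) = \prod_{q \in K \setminus R} \beta_q \prod_{q \in K \cap R} (1 - \beta_q) \prod_{q \in R \setminus K} \eta_q \prod_{q \in Q \setminus (K \cup R)} (1 - \eta_q),
\qquad
P(R) = \sum_{K \in \mathcal{K}} P(R \mid K)\, \pi_K .
\]

Here R \subseteq Q denotes an observed response pattern and K \in \mathcal{K} a knowledge state. The proposed models add parameters for transitions in item mastery between the two points of measurement on top of this formulation; their exact parameterization is not spelled out in the abstract and is therefore not sketched here.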