We present a method for mitigating the cold start problem in a computer-based adaptive fact learning system. Currently, learning sessions have a “cold start”: the learning system initially does not know the difficulty of the study material, resulting in a suboptimal start to learning.

The fact learning system is based on a computational model of human memory and adaptively schedules the rehearsal of facts within a learning session. Facts are repeated whenever their activation drops below a threshold, ensuring that repetitions occur as far apart as possible, while still happening soon enough to encourage successful recall. Throughout the session, response times and accuracy are used to update fact-specific rate-of-forgetting estimates that determine each fact’s decay, and thereby its repetition schedule. When a learner first studies a set of items, the memory model uses default rate-of-forgetting estimates, leading to a suboptimal rehearsal schedule at the start of the session: easy facts are initially repeated too much, while difficult facts are repeated too infrequently.

Here, we take a collaborative filtering approach to reducing the cold start problem. A Bayesian model, trained on rate-of-forgetting estimates obtained from previous learners, predicts the difficulty of each fact for a new learner. These predictions are then used as the memory model’s starting estimates in a new learning session.

In a preregistered experiment (n = 197), we confirm that this method improves the scheduling of repetitions within a learning session, as shown by participants’ higher response accuracy during the session and better retention of the studied facts afterwards.
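To make the scheduling idea concrete, the following minimal sketch illustrates threshold-based rehearsal with an ACT-R-style activation function (log-summed power-law decay over past presentations). The threshold value, parameter values, and selection rule are illustrative assumptions, not the system’s actual implementation.

import math

def activation(presentation_times, now, alpha):
    # Activation of a fact given its past presentation times (in seconds) and a
    # fact-specific rate of forgetting alpha; assumes at least one presentation
    # before `now`. (Assumed form, for illustration only.)
    return math.log(sum((now - t) ** -alpha for t in presentation_times if t < now))

def next_fact_to_rehearse(facts, now, threshold=-0.8):
    # One simple selection rule (an assumption): repeat the fact whose
    # activation has dropped furthest below the threshold, if any.
    acts = {name: activation(times, now, alpha) for name, (times, alpha) in facts.items()}
    below = {name: a for name, a in acts.items() if a < threshold}
    return min(below, key=below.get) if below else None

# Two facts studied at t = 0 s and t = 20 s, with different rates of forgetting:
facts = {"easy": ([0.0, 20.0], 0.25), "hard": ([0.0, 20.0], 0.40)}
print(next_fact_to_rehearse(facts, now=120.0))  # -> "hard": it decays faster

In this toy example the fact with the higher rate of forgetting falls below the threshold first and is therefore scheduled for repetition sooner, which is the behaviour the adaptive scheduler aims for once fact-specific estimates are available.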
Mathematical models are frequently used to formalize and test theories of psychological processes. When there are multiple competing models, the scientific question becomes one of model selection: how do we select the model that most likely represents the underlying data-generating process? One common method is to select the model that strikes the best compromise between goodness of fit and complexity, for example by penalizing model fit with the number of parameters (e.g., AIC, BIC). The idea behind such an approach is that a model that fits the data well but is not too complex is likely to generalize well to new data. A more direct way of evaluating a model’s ability to generalize to new data is cross-validation: each model is repeatedly fit to a subset of the data, and the resulting fit is used to predict the subset of the data that was not used for fitting.

We compared both methods of model selection in the domain of visual working memory. The theoretical debates in this domain are reflected in the components of its formal models: guessing processes, item limits, the stability of memory across trials, and so on. We selected a number of common model variants and compared them using both AIC (which is commonly used in the field) and three types of cross-validation. Our results suggest that both methods largely lead to the same theoretical inferences about the nature of memory. However, numerical issues commonly arise when fitting the more complex model variants, which complicates model selection and inference.
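As a rough illustration of the two selection criteria (not the models, data, or cross-validation schemes from the study), the sketch below contrasts AIC with k-fold cross-validation on a toy Gaussian model; the fold count, function names, and toy model are assumptions made purely for the example.

import numpy as np

def aic(log_lik_full, n_params):
    # AIC = 2k - 2 ln L: lower values indicate a better fit-complexity trade-off.
    return 2 * n_params - 2 * log_lik_full

def kfold_predictive_loglik(data, fit, log_lik, k=10, seed=0):
    # Fit on k-1 folds, score the held-out fold, and sum the out-of-sample
    # log-likelihoods; higher values indicate better generalization.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(data)), k)
    total = 0.0
    for held_out in folds:
        train_idx = np.setdiff1d(np.arange(len(data)), held_out)
        params = fit(data[train_idx])
        total += log_lik(params, data[held_out])
    return total

# Toy "model": a Gaussian fit by its sample mean and standard deviation.
def fit_gaussian(x):
    return x.mean(), x.std() + 1e-9

def gaussian_loglik(params, x):
    mu, sd = params
    return float(np.sum(-0.5 * np.log(2 * np.pi * sd**2) - (x - mu) ** 2 / (2 * sd**2)))

data = np.random.default_rng(1).normal(0.5, 1.0, size=200)
print(aic(gaussian_loglik(fit_gaussian(data), data), n_params=2))          # penalized fit
print(kfold_predictive_loglik(data, fit_gaussian, gaussian_loglik, k=10))  # out-of-sample fit

Comparing candidate models would amount to computing both quantities for each model and checking whether the penalized-fit ranking (lower AIC) and the predictive ranking (higher out-of-sample log-likelihood) agree, which is the kind of comparison reported in the abstract.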