Parameter agreement and sources of disagreement across the Bayesian and frequentist MPT multiverse
Cognitive modelling results should be robust across reasonable data-analysis decisions. For parameter estimation, two essential decisions concern the aggregation of data (e.g., complete pooling or partial pooling) and the statistical framework (frequentist or Bayesian). The combination of these decision options spans a multiverse of estimation methods. We analysed (a) the magnitude and (b) possible sources of divergence between different parameter estimation methods for nine popular multinomial processing tree (MPT) models (e.g., source monitoring, implicit attitudes, hindsight bias). We synthesised data from 13,956 participants (from 142 published studies) and examined divergence in core model parameters between nine estimation methods that adopt different levels of pooling within different statistical frameworks. Divergence was partly explained by uncertainty in parameter estimation (larger standard error = larger divergence), the value of the parameter estimate (parameter estimate near the boundary = larger divergence), and structural dependencies between parameters (larger maximal parameter trade-off = larger divergence). Notably, divergence was not explained by participant heterogeneity, a result that is unexpected given the previous emphasis on heterogeneity when choosing particular estimation methods over others. Instead, our synthesis suggests that other, idiosyncratic aspects of the MPT models also play a role. To increase the transparency of MPT modelling results, we propose adopting a multiverse approach.
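To make the pooling-by-framework grid concrete, the sketch below fits a single cell of such a multiverse: a complete-pooling, frequentist (maximum-likelihood) fit of a textbook two-high-threshold MPT model of recognition memory. The counts, the model, and all names are hypothetical illustrations, not data or models from the study; the paper's nine models and nine estimation methods are not reproduced here.

```python
from itertools import product

from scipy.optimize import minimize
from scipy.stats import binom

# Illustrative multiverse grid: pooling level x statistical framework.
# (The paper's actual nine methods are a richer set of combinations.)
poolings = ["no pooling", "complete pooling", "partial pooling"]
frameworks = ["frequentist", "Bayesian"]
multiverse = list(product(poolings, frameworks))

# Hypothetical aggregated counts for one recognition-memory condition
# (complete pooling: responses summed over all participants).
hits, misses = 480, 120   # responses to old items
fas, crs = 90, 510        # false alarms and correct rejections to new items

def neg_log_lik(theta):
    """Negative log-likelihood of a two-high-threshold MPT model."""
    D, g = theta                       # detection and guessing parameters
    p_hit = D + (1.0 - D) * g          # old item: detected, or guessed "old"
    p_fa = (1.0 - D) * g               # new item: undetected, guessed "old"
    return -(binom.logpmf(hits, hits + misses, p_hit)
             + binom.logpmf(fas, fas + crs, p_fa))

# One multiverse cell: complete-pooling maximum-likelihood estimation,
# with parameters bounded away from 0 and 1.
res = minimize(neg_log_lik, x0=[0.5, 0.5], bounds=[(1e-6, 1 - 1e-6)] * 2)
D_hat, g_hat = res.x
print(f"D = {D_hat:.3f}, g = {g_hat:.3f}")
```

In a full multiverse analysis, an analogous fit would be run for every cell of the grid (e.g., hierarchical Bayesian estimation for the partial-pooling, Bayesian cells), and divergence would then be summarised across the resulting parameter estimates.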