Prior predictive entropy as a measure of model complexity
In science, when we face the problem of choosing between two different accounts of a phenomenon, we are told to choose the simpler one. However, it is not always clear what makes a model "simple." Model selection criteria (e.g., the BIC) typically define model complexity as a function of the number of parameters in a model, or of some other property of its parameter space. Here we present an alternative based on the prior predictive distribution. We argue that the complexity of a model can be measured by the entropy of its predictions before any data are observed. This can lead to surprising findings that are not well explained by thinking of model complexity in terms of parameter spaces. In particular, using a simple choice rule as an example, we show that the predictions of a nested model can have higher entropy than those of its more general counterpart. Finally, we show that the complexity of a model's predictions is a function of the experimental design.
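As a rough illustration of the idea, the sketch below (Python, assuming only numpy) estimates prior predictive entropy by Monte Carlo for a toy binary choice task. The logistic choice rule, the priors, and the 20-trial design are hypothetical choices made here for illustration, not the models from the talk: a "general" model passes a preference parameter b through a logistic choice rule, and a "nested" model fixes b = 0 (random guessing).

```python
# Monte Carlo estimate of prior predictive entropy for a toy binary
# choice task. The choice rule, priors, and 20-trial design are
# illustrative assumptions, not the models from the talk.
import numpy as np

rng = np.random.default_rng(0)

def prior_predictive_entropy(sample_p, n_trials, n_sims=200_000):
    """Shannon entropy (in bits) of the predictive distribution over
    the number of A-choices out of n_trials repetitions, where
    sample_p draws choice probabilities from the model's prior."""
    p = sample_p(n_sims)                    # prior draws of P(choose A)
    counts = rng.binomial(n_trials, p)      # one simulated data set per draw
    freq = np.bincount(counts, minlength=n_trials + 1) / n_sims
    freq = freq[freq > 0]                   # drop empty bins to avoid log(0)
    return -(freq * np.log2(freq)).sum()

n_trials = 20

# General model: logistic choice rule P(choose A) = 1 / (1 + exp(-b)),
# with a wide prior b ~ Normal(0, 20) that pushes predictions toward
# near-deterministic choice (counts near 0 or 20).
h_general = prior_predictive_entropy(
    lambda n: 1.0 / (1.0 + np.exp(-rng.normal(0.0, 20.0, n))), n_trials)

# Nested model: fix b = 0, i.e. guessing with P(choose A) = 0.5.
h_nested = prior_predictive_entropy(lambda n: np.full(n, 0.5), n_trials)

print(f"general: {h_general:.2f} bits, nested: {h_nested:.2f} bits")
```

With these hypothetical priors, the general model concentrates its predicted counts near 0 and 20 while the nested guessing model spreads them binomially over the middle, so the nested model comes out as the more complex one by this measure, echoing the abstract's point that nesting does not guarantee lower predictive entropy.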
I really enjoyed your talk; it’s made me think a lot about the relationship between model complexity and experimental design. One thing I’m still trying to wrap my head around is the effect of the stimulus space on PPD. Do you have any results or thoughts on whether a different specification of the stimulus space—different choice outcomes or probab...
Hi Manuel, Very interesting project! I was wondering if you have considered the case of an infinite number of repetitions. That means you'll need to measure entropy over the continuous choice probability rather than the discrete choice frequency. In experiments, we only collect a given number of repetitions, but many models make probabilistic pred...
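A minimal sketch of what this comment describes, continuing the hypothetical logistic model above: in the infinite-repetition limit the predictive distribution is over the continuous choice probability p itself, so the discrete Shannon entropy is replaced by a differential entropy, computable here by a change of variables (and, unlike discrete entropy, it can be negative).

```python
# Differential entropy (in bits) of the prior predictive distribution over
# the continuous choice probability p = 1 / (1 + exp(-b)), b ~ Normal(0, 20).
# Same hypothetical model as above. For a smooth monotone transform,
# h(p) = h(b) + E[log2 |dp/db|], with dp/db = p(1 - p).
import numpy as np

rng = np.random.default_rng(0)
sigma = 20.0
b = rng.normal(0.0, sigma, 200_000)

h_b = 0.5 * np.log2(2 * np.pi * np.e * sigma**2)   # entropy of the Gaussian prior
# log2 p(1-p) computed stably as -(log(1 + e^b) + log(1 + e^-b)) / log(2)
log2_jac = -(np.logaddexp(0.0, b) + np.logaddexp(0.0, -b)) / np.log(2)
h_p = h_b + log2_jac.mean()
print(f"differential entropy of p: {h_p:.2f} bits")
```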