Psychology endeavors to develop theories of human capacities and behaviors on the basis of a variety of methodologies and dependent measures. We argue that one of the most divisive factors in our field is whether researchers choose to employ computational modeling of theories (over and above data) during the scientific inference process. Modeling is undervalued, yet it holds promise for advancing psychological science. The inherent demands of computational modeling guide us toward better science by forcing us to conceptually analyze, specify, and formalize intuitions that otherwise remain unexamined, a practice we dub “open theory”. Constraining our inference process through modeling enables us to build explanatory and predictive theories. Herein, we present scientific inference in psychology as a path function, in which each step shapes the next. Computational modeling can constrain these steps, thus advancing scientific inference over and above stewardship of experimental practice (e.g., preregistration). If psychology continues to eschew computational modeling, we predict more replicability “crises” and persistent failure at coherent theory-building, because without formal modeling we lack open and transparent theorizing. We also explain how to formalize, specify, and implement a computational model, emphasizing that the advantages of modeling can be achieved by anyone, to the benefit of all.
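As a minimal sketch of the move from verbal theory to formal model described above, the verbal claim “performance speeds up with practice” can be specified as, for instance, a power function of trial number. The function name, functional form, and parameter values below are illustrative assumptions, not material from the abstract:

```python
def power_law_rt(trial, a=1.0, b=0.5, c=0.2):
    """One way to formalize the verbal claim 'responses speed up with
    practice': RT(n) = a * n^(-b) + c.
    Parameters a, b, c are illustrative, not estimated from data."""
    return a * trial ** (-b) + c

# Predicted response times at increasing amounts of practice.
rts = [power_law_rt(n) for n in (1, 4, 16, 64)]

# The formalized theory makes a checkable commitment:
# RTs decrease monotonically with practice.
decreasing = all(r1 > r2 for r1, r2 in zip(rts, rts[1:]))
```

Writing the claim down this way exposes commitments (monotonic speed-up, a floor parameter c) that the verbal statement leaves unexamined.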
Though formal implementations of theories can be useful, they are not necessarily so. So, while a good theory is one that lends itself to being implemented formally, a "good" model does not guarantee a good theory. The scope of what can be learned from a mathematical model depends on the extent to which that model is implied by its motivating theory, and, therefore, on what implications the failure of a model has for that theory. We discuss common uses of mathematics in psychology, such as model fitting and model selection, with these limitations in mind.
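The model fitting and model selection mentioned above can be sketched in a minimal form: two hypothetical models (a constant and a linear trend) are fit to simulated data by least squares and compared by AIC. The data, models, and parameter values are illustrative assumptions, not anything from the abstract:

```python
import math
import random

random.seed(1)

# Simulated data: a linear trend plus Gaussian noise (assumed for illustration).
x = [i / 10 for i in range(50)]
y = [2.0 + 0.8 * xi + random.gauss(0, 0.3) for xi in x]
n = len(x)

def rss_constant(y):
    """Model 1: y = c. The least-squares c is the mean; return the RSS."""
    c = sum(y) / len(y)
    return sum((yi - c) ** 2 for yi in y)

def rss_linear(x, y):
    """Model 2: y = a + b*x, fit by ordinary least squares; return the RSS."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

def aic(rss, n, k):
    """Gaussian-likelihood AIC up to an additive constant: n*log(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

aic_const = aic(rss_constant(y), n, k=1)
aic_lin = aic(rss_linear(x, y), n, k=2)
best = "linear" if aic_lin < aic_const else "constant"
```

Note the limitation the abstract warns about: the comparison selects the better of these two models for these data; it does not, by itself, vindicate whatever theory motivated either model.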
One response to the ongoing replication crisis in the social sciences has been the observation that solutions such as pre-registration become less necessary (or perhaps even unnecessary) when experiments and analyses are constrained by good theory. In this talk I will discuss what characteristics are necessary for a theory to be "good enough" to play this role effectively. I will argue that in all or almost all areas of psychology -- including mathematical psychology -- existing theories and computational models do not have these characteristics. I will conclude by discussing what can be learned from "not good enough" theories and will offer some thoughts on how to create better ones given the epistemological and practical problems inherent in doing science in the real world.
A common adage is ‘good theories make testable predictions’, which fits a view of science as progressing solely via the empirical cycle (i.e., the iterative process of revising theories by deriving and testing predictions). This view has led, however, to a relative neglect of good theory building prior to testing. Even in subfields strong in theory, such as mathematical psychology and computational cognitive science, theories are put to empirical tests without considering whether they provide possible explanations for the target phenomena. We may come away thinking our theories are empirically supported while the processes they postulate are in fact impossible. I use one species of impossibility -- intractability -- as an illustration. I will distill some general lessons and conclude that the empirical cycle is best complemented by an equally important theoretical cycle (i.e., the iterative process of refining theories to postulate only possible processes).
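Intractability can be made concrete with a toy example: if a theory postulates a process that exhaustively searches all subsets of some set of items, the number of candidates doubles with every added item, so the process is possible only for tiny inputs, not at the scale of real cognition. The function and data below are purely illustrative and assume nothing from the abstract:

```python
from itertools import combinations

def best_subset(values, target):
    """Brute-force search over all 2^n subsets for the one whose sum is
    closest to `target`. Tractable only for very small n."""
    best, best_err = (), float("inf")
    n = len(values)
    for r in range(n + 1):
        for subset in combinations(values, r):
            err = abs(sum(subset) - target)
            if err < best_err:
                best, best_err = subset, err
    return best

# The number of candidate subsets doubles with each added element,
# so runtime grows exponentially: 2, 4, 8, 16, 32, ...
counts = [2 ** n for n in range(1, 6)]
```

A theory postulating such a process could still fit data from small laboratory tasks, which is exactly how empirical support can mask an impossible mechanism.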
Theories, by which we mean formal theories rendered in mathematical or computational formalisms, are commonly understood as describing the data on human cognition, or as implementing a computational (in the sense of Marr) specification of cognition. Beyond these functions, we also desire theories that explain the phenomena of cognition. Three scenarios in which a new theory seems to explain cognition are: (1) the development of a new mechanism that captures empirical anomalies beyond the reach of existing theories; (2) the proposal of a new theory that provides a unified account of domains previously thought to be independent; and (3) the importation of a new formalism into cognitive science that supports theories that seem to effortlessly account for large swaths of cognition. In these scenarios, a new theory explains in part by offering a new perspective on cognition, one that directly captures phenomena without much need for tedious, low-level calculation. We illustrate these claims by considering the case of constraint satisfaction theories of cognition in some depth.
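The constraint satisfaction formalism named above has a minimal generic skeleton: variables take values from domains subject to pairwise constraints, and cognition is modeled as settling on a globally consistent assignment. The backtracking sketch below is a textbook-style illustration of that formalism, not a reconstruction of any particular theory discussed in the abstract; all names and the toy problem are assumptions:

```python
def consistent(var, value, assignment, constraints):
    """Check `var = value` against every constraint involving an
    already-assigned variable."""
    for a, b, pred in constraints:
        if a == var and b in assignment and not pred(value, assignment[b]):
            return False
        if b == var and a in assignment and not pred(assignment[a], value):
            return False
    return True

def backtrack(variables, domains, constraints, assignment=None):
    """Depth-first search for one globally consistent assignment."""
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]
    return None

# Toy problem (hypothetical): three mutually constrained "parts" of an
# interpretation, each of which must differ from its neighbors.
variables = ["A", "B", "C"]
domains = {v: ["red", "green", "blue"] for v in variables}
neq = lambda x, y: x != y
constraints = [("A", "B", neq), ("B", "C", neq), ("A", "C", neq)]
solution = backtrack(variables, domains, constraints)
```

The explanatory appeal the abstract describes is visible even here: once phenomena are cast as constraints, a consistent interpretation falls out of the formalism without low-level, phenomenon-by-phenomenon calculation.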