Progress in science depends on well-designed experiments, yet when it comes to testing computational models, good design can be elusive because model similarities and differences are difficult to assess. Further, observations can be expensive and time-consuming to acquire (e.g., fMRI scans, children, clinical populations). Researchers have therefore shown growing interest in adaptive experiments that rapidly accumulate information about the phenomenon under study with the fewest possible measurements. In addressing this challenge, statisticians have developed optimal experimental design (OED) methods that combine the power of statistical computing techniques with the predictive precision of formal models, yielding experiments that are highly efficient and maximally informative with respect to a given experimental objective. This presentation provides an overview of OED with an emphasis on recent developments and applications in the behavioral sciences.
The idea that good experiments maximize information gain has a long history (e.g., Lindley, 1956). A key obstacle to using this insight to automate experiment design has been the difficulty of estimating information gain. In the last few years, new methods have become available that cast information estimation as an optimization problem. Using these ideas, we describe variational bounds on expected information gain. Deep neural networks and gradient-based optimization then allow efficient optimization of these bounds even for complex data models. We apply these ideas to experiment design, adaptive surveys, and computer adaptive testing.
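To make the estimation problem concrete, here is a minimal sketch (my own illustration, not the variational method itself) of the quantity being bounded: a nested Monte Carlo estimate of expected information gain for a toy model, assuming a standard-normal prior over a single parameter and a logistic Bernoulli response. Variational bounds replace the expensive inner marginal-likelihood estimate below with a learned approximation; the nested estimator is the baseline they improve on.

```python
import numpy as np

rng = np.random.default_rng(0)

def response_prob(theta, design):
    # Toy logistic response model: p(y=1 | theta, d)
    return 1.0 / (1.0 + np.exp(-(theta - design)))

def nested_mc_eig(design, n_outer=2000, n_inner=2000):
    """Nested Monte Carlo estimate of expected information gain,
    EIG(d) = E_{theta,y}[log p(y|theta,d) - log p(y|d)]."""
    theta = rng.normal(0.0, 1.0, n_outer)              # prior draws
    p = response_prob(theta, design)
    y = rng.random(n_outer) < p                        # simulated responses
    lik = np.where(y, p, 1.0 - p)                      # p(y | theta, d)
    theta_in = rng.normal(0.0, 1.0, (n_inner, 1))      # fresh prior draws
    p_in = response_prob(theta_in, design)
    marg = np.where(y, p_in, 1.0 - p_in).mean(axis=0)  # estimate of p(y | d)
    return float(np.mean(np.log(lik) - np.log(marg)))

# For this symmetric model, a design matched to the prior mean (d = 0)
# is more informative than designs in the tails.
eig = {d: nested_mc_eig(d) for d in (-3.0, 0.0, 3.0)}
```

The inner loop is what makes naive estimation expensive: each outer sample requires a full inner marginalization, which is the cost that variational bounds and neural approximations are designed to remove.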
Cognitive neuroscientists are often interested in broad research questions, yet use overly narrow experimental designs by considering only a small subset of possible experimental conditions. This limits the generalizability and reproducibility of many research findings. In this talk, I present an alternative approach that resolves these problems by combining real-time functional magnetic resonance imaging (fMRI) with a branch of machine learning, Bayesian optimization. Neuroadaptive Bayesian optimization is a non-parametric active sampling approach using Gaussian process regression. The approach makes it possible to search intelligently through large experiment spaces with the aim of optimizing a human subject’s brain response. It thus provides a powerful strategy to efficiently explore many more experimental conditions than is currently possible with standard neuroimaging methodology. In this talk, I will present results from three different studies where we applied the method to: (1) better understand the functional role of frontoparietal networks in healthy individuals, (2) map cognitive dysfunction in aphasic stroke patients, and (3) tailor non-invasive brain stimulation parameters to a particular research question. I will conclude by discussing how Bayesian optimization can be combined with study preregistration to accommodate exploration, mitigating researcher bias and improving reproducibility more broadly.
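A schematic of the closed-loop idea (not the speaker's implementation): a Gaussian-process surrogate is fit to the responses observed so far, and an acquisition rule proposes the next condition to test. Everything here is an illustrative assumption, including the 1-D condition space, the synthetic "brain response", the RBF kernel, and the upper-confidence-bound acquisition rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D space of experimental conditions, and a hidden response
# function (standing in for the subject's brain response) to be maximized.
grid = np.linspace(0.0, 1.0, 101)
def response(x):
    return np.exp(-((x - 0.7) ** 2) / 0.02)

def rbf(a, b, length=0.1):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

x_obs, y_obs = [0.2], [response(0.2) + 0.05 * rng.normal()]
for _ in range(14):
    X, Y = np.array(x_obs), np.array(y_obs)
    K = rbf(X, X) + 1e-3 * np.eye(len(X))               # noisy-observation GP
    Ks = rbf(grid, X)
    mu = Ks @ np.linalg.solve(K, Y)                     # posterior mean on grid
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    ucb = mu + 2.0 * np.sqrt(np.clip(var, 0.0, None))   # acquisition rule
    x_next = float(grid[np.argmax(ucb)])                # next condition to run
    x_obs.append(x_next)
    y_obs.append(response(x_next) + 0.05 * rng.normal())

best = x_obs[int(np.argmax(y_obs))]
```

In the real-time fMRI setting, `response` would be replaced by an actual acquisition-and-analysis step for the chosen condition, which is what makes the scanner part of the optimization loop.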
Understanding associative learning - the ability to acquire knowledge about contingencies between stimuli, responses, and outcomes - is crucial in explaining how animals adapt to their environments. Moreover, the theory of associative learning also provides a rationale for clinical treatments, such as exposure therapy for phobias. The study of associative learning has been significantly advanced by formal psychological modeling. However, this rich modeling history, and the resulting abundance of models, leads to challenges in designing informative experiments. With a growing space of increasingly flexible candidate models, it is difficult to manually design experiments that efficiently discriminate between them. Here we propose to address this challenge through formal optimization of experimental designs. We first consider the structure of classical conditioning experimental designs and propose low-dimensional formalizations amenable to optimization. Next, we combine simulation-based evaluation of design utility with Bayesian optimization to efficiently search the experiment space for utility-maximizing designs. Lastly, we describe several simulated scenarios which show that optimized designs can substantially outperform canonical manual designs, whether the goal is model comparison or parameter estimation. Based on these results, we sketch out possible future avenues for optimal experimental design in associative learning, and cognitive science more broadly.
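A toy sketch of simulation-based design utility (my own illustration, not the authors' code): two candidate Rescorla-Wagner learners differing only in learning rate are compared under different reinforcement schedules, and a schedule's utility for model discrimination is estimated by simulating noisy data and averaging the log-likelihood ratio. A Bayesian-optimization loop would then search over such schedules; here we simply score three hand-picked ones.

```python
import numpy as np

rng = np.random.default_rng(2)

def rescorla_wagner(schedule, alpha):
    """Associative strength of a single cue over trials under the
    Rescorla-Wagner rule: v <- v + alpha * (reinforcement - v)."""
    v, out = 0.0, []
    for reinforced in schedule:
        v += alpha * (float(reinforced) - v)
        out.append(v)
    return np.array(out)

def discrimination_utility(schedule, sigma=0.1, n_sim=500):
    """Expected log-likelihood ratio of the generating model (alpha=0.1)
    over a rival model (alpha=0.4), under Gaussian response noise.
    Higher values mean the schedule separates the two models better."""
    pred_a = rescorla_wagner(schedule, alpha=0.1)
    pred_b = rescorla_wagner(schedule, alpha=0.4)
    data = pred_a + sigma * rng.normal(size=(n_sim, len(schedule)))
    ll_a = -0.5 * np.sum((data - pred_a) ** 2, axis=1) / sigma ** 2
    ll_b = -0.5 * np.sum((data - pred_b) ** 2, axis=1) / sigma ** 2
    return float(np.mean(ll_a - ll_b))

schedules = {
    "never reinforced": [False] * 20,
    "always reinforced": [True] * 20,
    "partial (alternating)": [True, False] * 10,
}
utility = {name: discrimination_utility(s) for name, s in schedules.items()}
```

A schedule with no reinforcement has zero utility here, since both models predict identical (flat) learning curves; any schedule that drives the two learning rates to diverge scores higher, which is exactly the signal an optimizer exploits.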
While animal training is an essential part of modern psychology experiments, the training protocol is usually hand-designed, relying heavily on trainer intuition and guesswork. I will present a general framework that recasts animal training as a quantitative problem, building on ideas from reinforcement learning and optimal experimental design. Our work addresses two interesting problems at once: First, we develop an efficient method to characterize an animal's behavioral dynamics during learning, and infer the learning rules underlying its behavioral changes. Second, we formulate a theory for optimal training, which involves selecting sequences of stimuli that will drive the animal’s internal policy toward a desired state in the parameter space according to the inferred learning rules.
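A toy version of the second idea, greedy rather than sequence-level and entirely hypothetical in its details: assume the animal's policy is logistic in a single weight w, that the inferred learning rule is a delta rule with a known learning rate, and that the trainer picks, on every trial, the stimulus whose expected update moves w closest to a target value.

```python
import numpy as np

# Hypothetical policy: P(go | stimulus x) = sigmoid(w * x); a "go" response
# is rewarded when x > 0. Assumed (inferred) learning rule: delta rule on w.
alpha, w, w_target = 0.2, -1.0, 2.0
stimuli = np.array([-1.0, -0.5, 0.5, 1.0])

def p_go(w, x):
    return 1.0 / (1.0 + np.exp(-w * x))

n_trials = 0
for _ in range(200):
    if abs(w - w_target) < 0.1:
        break
    # expected one-step update of w for each candidate stimulus
    expected_w = w + alpha * ((stimuli > 0) - p_go(w, stimuli)) * stimuli
    # greedy optimal-training choice: drive w toward the target policy
    w = float(expected_w[np.argmin(np.abs(expected_w - w_target))])
    n_trials += 1
```

The framework in the talk is richer on both ends: the learning rule is inferred from behavior rather than assumed, and whole stimulus sequences are optimized rather than single greedy steps.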
Machine learning has the potential to facilitate the development of computational methods that improve the measurement of cognitive and mental functioning, and adaptive design optimization (ADO) is a promising machine-learning method that might lead to rapid, precise, and reliable markers of individual differences. In this talk, we will present a series of studies that utilized ADO in the area of decision-making and for the development of ADO-based digital phenotypes for addictive behaviors. Lastly, we will introduce an open-source Python package, ADOpy, which we developed to make ADO accessible even to researchers with limited backgrounds in Bayesian statistics or cognitive modeling.
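The core ADO computation can be sketched without ADOpy itself (the snippet below is a generic grid-based illustration, not ADOpy's API): on each trial, compute the mutual information between the response and the model parameter for every candidate design, run the most informative design, and update the posterior with the observed response. The one-parameter logistic choice model and all values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

designs = np.linspace(-3.0, 3.0, 61)   # candidate stimulus values
thetas = np.linspace(-3.0, 3.0, 121)   # parameter grid (e.g., a threshold)
post = np.full(len(thetas), 1.0 / len(thetas))  # uniform prior
true_theta = 1.2                        # ground truth for the simulation

def p_resp(theta, d):
    return 1.0 / (1.0 + np.exp(-(d - theta)))

for _ in range(50):
    p = p_resp(thetas[:, None], designs[None, :])   # (theta, design) grid
    marg = (post[:, None] * p).sum(axis=0)          # p(resp = 1 | d)
    h_marg = -(marg * np.log(marg) + (1 - marg) * np.log(1 - marg))
    h_cond = (post[:, None] *
              -(p * np.log(p) + (1 - p) * np.log(1 - p))).sum(axis=0)
    d = designs[np.argmax(h_marg - h_cond)]   # maximize mutual information
    y = rng.random() < p_resp(true_theta, d)        # simulated trial
    post *= p_resp(thetas, d) if y else 1.0 - p_resp(thetas, d)
    post /= post.sum()

estimate = float((post * thetas).sum())   # posterior-mean estimate
```

Because each trial is placed where the expected information gain is highest, the posterior concentrates on the true parameter far faster than with fixed or staircase designs, which is what makes ADO attractive for individual-difference markers.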
The contrast sensitivity function (CSF), which describes visual sensitivity (1/contrast threshold) to narrow-band stimuli of different spatial frequencies, provides a comprehensive measure of the visual system over a wide range of spatial frequencies in both normal and abnormal vision. The CSF is closely related to daily visual functions and has proved important in characterizing functional deficits in many visual disorders. More importantly, assessment of the CSF may reveal “hidden vision loss”: even when acuity appears normal, patients may have evident CSF deficits. I will discuss three lines of research: (1) Modeling: Using the external noise paradigm and the perceptual template model (PTM), we characterized the CSF in terms of the gain profile, nonlinearity, additive noise, and multiplicative noise of the perceptual system. (2) Efficient Assessment: Despite the importance of assessing the CSF, the testing time needed for precise assessment has prevented its clinical application. We developed the qCSF method, a novel Bayesian adaptive psychophysical method that provides an accurate assessment of the full CSF in a few minutes. In addition, we have conducted a number of studies to improve and validate the method, and to assess its precision, accuracy, specificity, and sensitivity. (3) Clinical Applications: The qCSF method has been used in clinical settings and clinical trials to reveal hidden vision loss in a number of patient populations. I will provide a few example applications.
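For reference, the CSF itself is typically summarized with a small number of parameters; the qCSF literature uses a truncated log-parabola. The sketch below implements that functional form, with illustrative parameter values, following the common convention of a peak gain, peak frequency, bandwidth in octaves, and low-frequency truncation:

```python
import numpy as np

def log_csf(freq, gain_max=200.0, f_peak=3.0, bandwidth=3.0, delta=0.5):
    """Truncated log-parabola CSF: log10 sensitivity falls off
    parabolically in log10 spatial frequency around the peak, and is
    truncated (plateaus) on the low-frequency side.

    gain_max  - peak sensitivity (1 / contrast threshold)
    f_peak    - peak spatial frequency (cycles/deg)
    bandwidth - full width at half maximum, in octaves
    delta     - low-frequency truncation, in log10 units
    """
    freq = np.asarray(freq, dtype=float)
    kappa = np.log10(2.0)
    beta = bandwidth * np.log10(2.0)    # octaves -> log10-frequency units
    log_s = np.log10(gain_max) - kappa * (
        (np.log10(freq) - np.log10(f_peak)) / (beta / 2.0)) ** 2
    trunc = np.log10(gain_max) - delta
    return np.where((freq < f_peak) & (log_s < trunc), trunc, log_s)

freqs = np.array([0.25, 0.5, 1, 2, 4, 8, 16, 32])
sensitivity = 10 ** log_csf(freqs)
```

Because only four parameters describe the whole curve, a Bayesian adaptive procedure can place each trial at the stimulus (frequency, contrast) that is most informative about those parameters jointly, which is what collapses the testing time from hours to minutes.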
As experimentation in the behavioral and social sciences moves from brick-and-mortar laboratories to the web, new opportunities arise in the design of experiments. By taking advantage of the new medium, experimenters can write complex computationally mediated adaptive procedures for gathering data: algorithms. Here, we explore the consequences of adopting an algorithmic approach to experiment design. We review several active experiment designs, describing their interpretation as algorithms. We then discuss software platforms for the efficient execution of these algorithms with people. Finally, we consider how machine learning can optimize crowdsourced experiments and form the foundation of next-generation experiment design.