Dynamic tracking and adaptive optimal training of decision-making behavior
While animal training is an essential part of modern psychology experiments, the training protocol is usually hand-designed, relying heavily on trainer intuition and guesswork. I will present a general framework that turns animal training into a quantitative problem, building on ideas from reinforcement learning and optimal experimental design. Our work addresses two problems at once. First, we develop an efficient method to characterize an animal's behavioral dynamics during learning and to infer the learning rules underlying its behavioral changes. Second, we formulate a theory of optimal training, in which we select sequences of stimuli that drive the animal's internal policy toward a desired state in parameter space, according to the inferred learning rules.
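To make the stimulus-selection step concrete, here is a minimal sketch of alignment-based training-stimulus selection, assuming a logistic (Bernoulli) choice policy with weight vector w and a REINFORCE-style learning rule; the function names (e.g. `alignmax_utility`, `select_training_stimulus`), the learning rate, and the toy reward function are illustrative assumptions, not the exact formulation presented in the talk.

```python
import numpy as np

def choice_prob(w, x):
    """Probability of choosing option 1 given stimulus x, under a logistic policy with weights w."""
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def expected_weight_update(w, x, reward_fn, lr=0.1):
    """Expected one-trial weight change under a REINFORCE-style rule,
    averaged over the animal's possible choices y in {0, 1}."""
    p = choice_prob(w, x)
    dw = np.zeros_like(w, dtype=float)
    for y, p_y in ((1, p), (0, 1.0 - p)):
        grad_logp = (y - p) * np.asarray(x, dtype=float)  # gradient of log P(y | x, w)
        dw += p_y * lr * reward_fn(x, y) * grad_logp
    return dw

def alignmax_utility(w, x, w_target, reward_fn):
    """Utility of stimulus x: alignment of the expected update with the direction toward the target weights."""
    return np.dot(expected_weight_update(w, x, reward_fn), w_target - w)

def select_training_stimulus(w, w_target, candidates, reward_fn):
    """Greedy choice of the next training stimulus: maximize the alignment utility over candidates."""
    utilities = [alignmax_utility(w, x, w_target, reward_fn) for x in candidates]
    return candidates[int(np.argmax(utilities))]

# Toy usage: a 1D signed stimulus plus a bias term; reward "correct" choices (x > 0 -> choose 1).
if __name__ == "__main__":
    reward = lambda x, y: float((x[0] > 0) == bool(y))
    candidates = [np.array([s, 1.0]) for s in np.linspace(-1.0, 1.0, 21)]
    w_hat = np.array([0.3, 0.4])    # current estimate of the animal's weights (sensitivity, bias)
    w_star = np.array([4.0, 0.0])   # desired weights: steep psychometric slope, no bias
    print("next training stimulus:", select_training_stimulus(w_hat, w_star, candidates, reward))
```

In the full framework, the weight estimate `w_hat` would itself be tracked trial by trial from the animal's choices (the first component of the work), so the selected stimuli adapt as the inferred policy and learning rule are updated.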
Great talk! I'm curious about your history weight. You mention that it governs a "win-stay lose-shift" tendency. Does it act like a window over the prior trials, or a weighting of only the most recent trial?
Great talk, very clear. I liked your AlignMax utility for choosing optimal training stimuli. The idea is quite novel and elegant. In our lab, we've also been interested in applying OED to optimal behavioral training with children as well as adults in numerical cognition experiments. We might want to try AlignMax or a variant of it as an objective func...