Performance of the volatile Kalman filter in the reversal learning paradigm
The delta learning rule offers a simple but powerful explanation of feedback-based learning. However, normative theories predict that the learning rate should depend on uncertainty, which would allow for more efficient learning, especially in volatile environments such as the reversal learning paradigm. This paradigm consists of an acquisition phase, in which the participant learns some statistical properties of the environment (e.g., the reward probability of each choice option), followed by a reversal phase, in which those properties are switched. In two datasets, we previously demonstrated that the delta rule fails to capture the speed at which participants adapt to the reversal. A mechanism that allows the learning rate to vary as a function of the volatility of the environment could potentially provide a better account of learning behavior in this paradigm. Here, we studied whether the volatile Kalman filter (Piray and Daw, 2020) better accounts for empirical data in the reversal learning paradigm, and we include tests of parameter recovery.
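To make the contrast concrete, the sketch below implements the fixed-learning-rate delta rule alongside our reading of the binary volatile Kalman filter updates from Piray and Daw (2020), in which the effective learning rate grows with the estimated uncertainty (posterior variance plus volatility). Parameter values and variable names are illustrative assumptions, not fits to the datasets discussed here; consult the original paper for the exact derivation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def delta_rule(outcomes, alpha=0.2, p0=0.5):
    """Delta rule with a fixed learning rate alpha (illustrative values)."""
    p = p0
    preds = []
    for o in outcomes:
        preds.append(p)          # prediction before observing the outcome
        p += alpha * (o - p)     # constant-rate error-driven update
    return preds

def binary_vkf(outcomes, lam=0.1, v0=0.1, omega=0.1):
    """Binary volatile Kalman filter (a sketch of Piray & Daw, 2020).

    lam   -- volatility learning rate (illustrative value)
    v0    -- initial volatility estimate
    omega -- observation-noise parameter
    Returns per-trial reward-probability predictions sigmoid(m).
    """
    m, w, v = 0.0, omega, v0     # mean, variance, volatility
    preds = []
    for o in outcomes:
        preds.append(sigmoid(m))
        wv = w + v               # predictive uncertainty
        k = wv / (wv + omega)    # Kalman gain
        alpha = math.sqrt(wv)    # effective learning rate grows with uncertainty
        m_new = m + alpha * (o - sigmoid(m))
        w_new = (1.0 - k) * wv
        cov = (1.0 - k) * w      # posterior covariance between successive states
        # Volatility tracks the expected squared change of the hidden state.
        v += lam * ((m_new - m) ** 2 + w + w_new - 2.0 * cov - v)
        m, w = m_new, w_new
    return preds
```

For example, on a toy reversal sequence (`[1] * 50 + [0] * 50`) both models learn the acquisition contingency and then track the reversal; the point of the comparison in the paper is how quickly each model's predictions adapt after the switch.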