
Reinforcement Learning for Production-based Cognitive Models

Authors
Adrian Brasoveanu
UC Santa Cruz, Linguistics
Jakub Dotlacil
Utrecht University, The Netherlands
Abstract

We introduce a framework in which we can start exploring, in a computationally explicit way, how complex, mechanistically specified, production-based cognitive models of linguistic skills, e.g., ACT-R based parsers, can be acquired. The learnability of linguistic cognitive models is a largely understudied issue, primarily because computationally explicit cognitive models are only starting to be more widely used in psycholinguistics. Cognitive models for linguistic skills pose this learnability problem much more starkly than models of other ‘high-level’ cognitive processes, since theoretically grounded linguistic representations and processes call for richly structured representations and complex rules, which in turn require a significant amount of hand-coding.
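To make the general idea concrete, here is a minimal, illustrative sketch of reinforcement learning applied to production selection. It is not the authors' model: the environment, production names ("retrieve", "attach"), and the tabular Q-learning setup are all toy assumptions, standing in for the much richer state and rule representations an ACT-R parser would use. The point is only to show the shape of the learning problem: an agent must discover which production to fire in which state, from reward alone.

```python
import random
from collections import defaultdict

# Toy task (illustrative, not from the paper): the agent must fire the
# productions in the right order ("retrieve" then "attach"); any other
# order ends the episode with no reward.
PRODUCTIONS = ["retrieve", "attach"]

def step(state, production):
    """Return (next_state, reward, done) after firing a production."""
    if state == "start" and production == "retrieve":
        return "retrieved", 0.0, False
    if state == "retrieved" and production == "attach":
        return "done", 1.0, True
    return "failed", 0.0, True

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning over (state, production) utilities."""
    rng = random.Random(seed)
    Q = defaultdict(float)  # (state, production) -> estimated utility
    for _ in range(episodes):
        state, done = "start", False
        while not done:
            # epsilon-greedy production selection
            if rng.random() < epsilon:
                prod = rng.choice(PRODUCTIONS)
            else:
                prod = max(PRODUCTIONS, key=lambda p: Q[(state, p)])
            nxt, reward, done = step(state, prod)
            target = reward if done else reward + gamma * max(
                Q[(nxt, p)] for p in PRODUCTIONS)
            Q[(state, prod)] += alpha * (target - Q[(state, prod)])
            state = nxt
    return Q

Q = train()
# After training, the greedy policy fires the productions in the right order.
policy = {s: max(PRODUCTIONS, key=lambda p: Q[(s, p)])
          for s in ["start", "retrieved"]}
```

The learned utilities play a role analogous to ACT-R's production utilities: reward propagates backward from the successful episode end, so the correct production in each state ends up with the highest estimated utility.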

Discussion

Cool stuff! How do you think this may generalize to other tasks? And how is this mechanism of RL different from the Act-transfer process (based on production compilation) that Niels Taatgen has proposed for learning to string productions together in the correct order?

Dr. Marieke Van Vugt
Cite this as:

Brasoveanu, A., & Dotlacil, J. (2020, July). Reinforcement Learning for Production-based Cognitive Models. Paper presented at Virtual MathPsych/ICCM 2020. Via mathpsych.org/presentation/142.