Reinforcement Learning for Production-based Cognitive Models
We introduce a framework for exploring, in a computationally explicit way, how complex, mechanistically specified, production-based cognitive models of linguistic skills, e.g., ACT-R-based parsers, can be acquired. The learnability of linguistic cognitive models is a largely understudied issue, primarily because computationally explicit cognitive models are only starting to be more widely used in psycholinguistics. Cognitive models of linguistic skills pose the learnability problem much more starkly than models of other ‘high-level’ cognitive processes, since models that use theoretically grounded linguistic representations and processes call for richly structured representations and complex rules, both of which require a significant amount of hand-coding.
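To make the idea of acquiring production selection by reinforcement learning concrete, here is a minimal sketch, not the authors' actual framework: tabular Q-learning over which production rule to fire, given the category of the next input word, in a toy parsing task. The production names, the state space, and the reward scheme are all illustrative assumptions.

```python
import random

ALPHA, EPSILON = 0.1, 0.1  # learning rate, exploration rate
PRODUCTIONS = ["project-NP", "project-VP", "attach"]

# Toy supervision signal: the production that should fire for each
# input category (an assumption for illustration only).
CORRECT = {"Det": "project-NP", "V": "project-VP", "N": "attach"}

def train(episodes=2000, seed=0):
    """Learn Q-values for (state, production) pairs by trial and error."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in CORRECT for a in PRODUCTIONS}
    for _ in range(episodes):
        state = rng.choice(list(CORRECT))
        # Epsilon-greedy choice among candidate productions.
        if rng.random() < EPSILON:
            action = rng.choice(PRODUCTIONS)
        else:
            action = max(PRODUCTIONS, key=lambda a: q[(state, a)])
        reward = 1.0 if action == CORRECT[state] else -1.0
        # One-step (bandit-style) update; this toy task has no
        # successor state, so no discounted future term appears.
        q[(state, action)] += ALPHA * (reward - q[(state, action)])
    return q

def policy(q, state):
    """Greedy production choice under the learned Q-values."""
    return max(PRODUCTIONS, key=lambda a: q[(state, a)])
```

After training, the greedy policy recovers the correct production for each input category, which is the sense in which the rule-ordering knowledge is "acquired" rather than hand-coded.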
Cool stuff! How do you think this might generalize to other tasks? And how is this RL mechanism different from the act-transfer process (based on production compilation) that Niels Taatgen has proposed for learning to string productions together in the correct order?