Cognitive Flexibility in Cognitive Architecture: Simulating using Contextual Learning in PRIMs
The universal flexibility of biological systems needs to be reflected in cognitive architectures. In PRIMs, we attempt to achieve such flexibility through a bottom-up approach. Using contextual learning, the initially random firing of a set of instantiated primitive operators is gradually organized into context-sensitive operator firing sequences (i.e., primordial “skills”). Based on this implementation, preliminary results show that the model’s averaged single-pattern processing latencies are consistent with infants’ differential focusing times in three theoretically controversial artificial language studies, namely Saffran, Aslin, and Newport (1996), Marcus, Vijayan, Rao, and Vishton (1999), and Gomez (2002). In our ongoing work, we are analyzing (a) whether the model can arrive at primordial “skills” adaptive to the trained tasks, and (b) whether the learned chunks mirror the trained patterns.
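As a rough illustration of this bottom-up idea (a minimal sketch, not the actual PRIMs implementation), the toy Python code below shows how associations between a context and the operator that fired in it might be strengthened when firing leads to a useful outcome, so that initially random operator firing becomes context-sensitive over trials. All names, the operator set, and the learning rule are illustrative assumptions.

```python
import random
from collections import defaultdict

# Toy sketch of contextual learning (NOT the PRIMs implementation):
# operators fire at random at first; context-operator associations are
# strengthened when the outcome is useful, so firing gradually becomes
# organized into context-sensitive sequences.

OPERATORS = ["attend-syllable", "compare-slots", "store-chunk"]  # hypothetical
LEARNING_RATE = 0.1

# association[context][operator] -> strength
association = defaultdict(lambda: defaultdict(float))

def choose_operator(context):
    """Pick an operator: mostly the strongest association, sometimes random."""
    strengths = association[context]
    if not strengths or random.random() < 0.2:   # exploration / random firing
        return random.choice(OPERATORS)
    return max(strengths, key=strengths.get)

def learn(context, operator, reward):
    """Move the context-operator association toward the obtained reward."""
    current = association[context][operator]
    association[context][operator] = current + LEARNING_RATE * (reward - current)

# Illustrative training loop: reward "store-chunk" whenever the context
# signals a repeated syllable, mimicking a pattern-sensitive outcome signal.
for trial in range(500):
    context = random.choice(["repeated-syllable", "novel-syllable"])
    op = choose_operator(context)
    reward = 1.0 if (context == "repeated-syllable" and op == "store-chunk") else 0.0
    learn(context, op, reward)

print(dict(association["repeated-syllable"]))  # "store-chunk" should dominate
```

In this sketch the exploration rate keeps some random firing throughout, while repeated reward shapes a stable context-to-operator mapping, which is the kind of primordial “skill” the abstract refers to.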