We introduce a framework for exploring, in a computationally explicit way, how complex, mechanistically specified, production-based cognitive models of linguistic skills, e.g., ACT-R-based parsers, can be acquired. The learnability of linguistic cognitive models is a largely understudied issue, primarily because computationally explicit cognitive models are only starting to be more widely used in psycholinguistics. Cognitive models of linguistic skills pose the learnability problem much more starkly than models of other ‘high-level’ cognitive processes, since models that use theoretically grounded linguistic representations and processes call for richly structured representations and complex rules, which in turn require a significant amount of hand-coding.
Human reasoning deviates from classical logic. Psychological findings demonstrate that human reasoning is nonmonotonic: new information can lead to the retraction of previous inferences. Nonmonotonicity is relevant whenever conclusions are drawn in the absence of contrary information (defeasible reasoning), when a most likely explanation is sought (abductive reasoning), when initial beliefs must be revised (belief revision), and when modeling human ‘commonsense reasoning’, a topic highly relevant to AI research. While analyses of population data have identified nonmonotonic features, it remains an open question whether systems that capture nonmonotonic reasoning also better capture individual human reasoning. In this article, we take three prominent nonmonotonic approaches, the Weak Completion Semantics, Reiter's Default Logic, and OCF (a ranking on possible worlds), implement variants of them, and evaluate their predictive capability in the Suppression Task within the CCOBRA framework. We demonstrate that these systems achieve high performance, predicting on average 82% of the inferences drawn by an individual reasoner. Furthermore, we show that OCF and an improved version of Reiter's Default Logic make identical predictions, and that abduction is relevant at the level of the individual reasoner. We discuss the implications of logical systems for human reasoning.
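To make the OCF idea concrete, here is a minimal sketch of a ranking on possible worlds for a Suppression-Task-style conditional ("if she has an essay, she studies late in the library"). The atoms, the toy ranking function, and the acceptance test are illustrative assumptions, not the variants implemented in the article:

```python
from itertools import product

# Toy OCF (ordinal conditional function): each world gets a degree of
# disbelief; a conditional A => C is accepted iff the most plausible
# A-worlds are C-worlds. Atoms and ranks here are illustrative only.

ATOMS = ("essay", "open", "library")  # essay to write, library open, studies late

def worlds():
    """Enumerate all truth assignments over ATOMS."""
    for values in product([True, False], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, values))

def rank(world):
    """Each violated expectation adds one degree of disbelief."""
    r = 0
    if world["essay"] and not world["library"]:  # essay, but not studying late
        r += 1
    if world["library"] and not world["open"]:   # in the library though closed
        r += 1
    return r

def accepts(antecedent, consequent):
    """Min rank of A&C worlds must lie strictly below min rank of A&~C worlds."""
    rank_ac  = min(rank(w) for w in worlds() if antecedent(w) and consequent(w))
    rank_anc = min(rank(w) for w in worlds() if antecedent(w) and not consequent(w))
    return rank_ac < rank_anc

# Modus ponens: given an essay, does she study late in the library?
print(accepts(lambda w: w["essay"], lambda w: w["library"]))  # True
```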
Decisions under uncertainty are often made by weighing the expected costs and benefits of the available options. These tradeoffs make some decisions easy and others difficult, particularly when the costs and rewards are themselves uncertain. In this research, we evaluate how a cognitive model based on Instance-Based Learning Theory (IBLT) and two well-known reinforcement learning (RL) algorithms learn to make better choices in a goal-seeking gridworld task under uncertainty and under increasing degrees of decision complexity. We also use a random agent as a baseline comparison. Our results suggest that the IBL and RL models are comparable in accuracy in simple settings, but the RL models are more efficient than the IBL model. However, as decision complexity increases, the IBL model is not only more accurate but also more efficient than the RL models. Our results suggest that the IBL model is able to pursue highly rewarding targets even as costs increase, while the RL models seem to get "distracted" by lower costs, reaching lower-reward targets.
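As a rough illustration of the IBLT choice mechanism this comparison rests on, the sketch below computes blended values over stored instances, weighting past outcomes by their retrieval probability. The decay and noise parameters and the optimistic default are common ACT-R-style assumptions, not the paper's fitted gridworld model:

```python
import math, random

# Minimal IBLT sketch: blended value of an action = sum of past outcomes
# weighted by retrieval probability (Boltzmann over noisy activations).

D, SIGMA = 0.5, 0.25          # assumed decay and noise parameters
memory = []                   # instances: (action, outcome, timestamps)

def activation(timestamps, now):
    """Base-level activation: power-law decayed traces plus logistic noise."""
    base = math.log(sum((now - t) ** -D for t in timestamps))
    u = random.random()
    return base + SIGMA * math.log((1 - u) / u)

def blended_value(action, now):
    """Outcomes weighted by retrieval probability over matching instances."""
    inst = [(o, activation(ts, now)) for a, o, ts in memory if a == action]
    if not inst:
        return 30.0                       # assumed optimistic default utility
    tau = SIGMA * math.sqrt(2)
    weights = [math.exp(act / tau) for _, act in inst]
    total = sum(weights)
    return sum(w / total * o for (o, _), w in zip(inst, weights))

memory.append(("up", 10.0, [1.0]))
print(blended_value("up", now=2.0))       # ~10.0 (single stored instance)
```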
This paper proposes that the shape and parameter fits of existing probability weighting functions can be explained by sensitivity to uncertainty (as measured by information entropy) and the utility carried by reductions in uncertainty. Building on applications of information-theoretic principles to models of perceptual and inferential processes, we suggest that probabilities are evaluated relative to a plausible expectation (the uniform distribution) and that the perceived distance between a probability and uniformity is influenced by the shape (relative entropy) of the distribution in which the probability is embedded. These intuitions are formalized in a novel probability weighting function, VWD(p), which is simpler and has fewer parameters than existing probability weighting functions. The proposed function captures characteristic features of existing probability weighting functions, introduces novel predictions, and provides a parsimonious account of findings in probability- and frequency-estimation tasks.
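The functional form of VWD(p) is not reproduced here, so the sketch below only illustrates the underlying intuition: the same probability sits at a different informational distance from the uniform "plausible expectation" depending on the shape of the distribution it is embedded in. Using KL divergence as the distance measure is our assumption for illustration, not the paper's formalization:

```python
import math

def relative_entropy(dist):
    """KL divergence (in bits) from dist to the uniform distribution."""
    n = len(dist)
    return sum(p * math.log2(p * n) for p in dist if p > 0)

# The same p = 0.7 embedded in differently shaped distributions lies at
# different distances from uniformity, so on this account it would be
# weighted differently in each context.
print(relative_entropy([0.7, 0.3]))            # ~0.12 bits
print(relative_entropy([0.7, 0.1, 0.1, 0.1]))  # ~0.64 bits
```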
Learning occurs through the interaction of working memory (WM), declarative long-term memory (LTM), and reinforcement learning (RL). There are vast individual differences in how these mechanisms are deployed, and it is often difficult to assess their relative contributions during learning from behavioral measures alone. Collins (2018) proposed a combined working memory-reinforcement learning model that addresses this issue but seems to lack a robust declarative memory component. In this project, we built four idiographic learning models based on the ACT-R cognitive architecture: two single-mechanism models (RL and LTM) and two integrated RL-LTM models. Using the Collins (2018) stimulus-response association task, we aimed to examine individual differences and to fit parameters that could explain the preferential use of learning mechanisms. We found that different models provided the best fit for different individual learners, with considerable variability in learning and memory parameters even within the best-fitting models.
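For orientation, here is a minimal sketch of the two single-mechanism building blocks such models typically combine: a delta-rule RL update of a stimulus-response utility, and ACT-R's base-level activation for declarative retrieval. The parameter values are illustrative assumptions, not the fitted idiographic models:

```python
import math

ALPHA, DECAY = 0.2, 0.5     # assumed learning rate and memory decay

def rl_update(q, reward):
    """Delta-rule update: move the utility toward the observed reward."""
    return q + ALPHA * (reward - q)

def base_level(presentations, now):
    """ACT-R base-level activation: log of summed power-law decayed traces."""
    return math.log(sum((now - t) ** -DECAY for t in presentations))

q = 0.0
for r in (1, 1, 0, 1):                        # feedback on one S-R pair
    q = rl_update(q, r)
print(q)                                      # learned utility (RL account)
print(base_level([1.0, 3.0, 4.0], now=5.0))   # memory strength (LTM account)
```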