This paper will explore ways of computationally accounting for the metacognitive threshold - the minimum amount of stimulus needed for a mental state to be perceived - and discuss potential cognitive mechanisms by which this threshold can be influenced by metacognitive training. We apply a metacognitive skill framework to help explain how the metacognitive threshold can be lowered to allow for greater perceptual access to one’s own cognitive states.
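The threshold idea in this abstract can be made concrete with a minimal sketch. This is not the authors' model; the `perceived` rule, the geometric `train` decay, and all numeric values are assumptions chosen purely for illustration of a threshold that training can lower.

```python
# Illustrative sketch (not the authors' model): a mental state is "perceived"
# when its signal strength exceeds the metacognitive threshold; metacognitive
# training is modeled, hypothetically, as lowering that threshold.

def perceived(signal_strength: float, threshold: float) -> bool:
    """A cognitive state reaches awareness if its strength exceeds the threshold."""
    return signal_strength > threshold

def train(threshold: float, sessions: int, decay: float = 0.9) -> float:
    """Hypothetical training rule: each session shrinks the threshold geometrically."""
    return threshold * decay ** sessions

baseline = 0.5      # assumed initial metacognitive threshold
weak_state = 0.35   # a subtle cognitive state, below the baseline threshold

print(perceived(weak_state, baseline))            # weak state goes unnoticed at baseline
print(perceived(weak_state, train(baseline, 5)))  # after training lowers the threshold, it is perceived
```

Under this toy rule, training grants perceptual access to states that were previously sub-threshold, which is the qualitative effect the abstract attributes to metacognitive skill.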
This is an in-person presentation on July 20, 2023 (11:00 ~ 11:20 UTC).
The human mind relies on similarity to organize the world around it. A geometric approach to similarity, which assumes that the similarity of two objects decreases with the sum of their feature value differences, has been particularly influential. Yet geometric similarities have been criticized for considering only differing features while ignoring common features, which is inconsistent with human similarity judgments, which increase with additional common features (the common features effect). This paper shows that a relative attention mechanism, as implemented in current cognitive models based on geometric similarities, naturally predicts the common features effect by weighting each feature value difference with the share of attention allocated to that feature. Additional common features draw attention away from the features already present, so the differences between objects on those features receive less weight, resulting in higher similarity. The ability of geometric similarity theory with relative attention to predict the common features effect is illustrated with data from Gati and Tversky (1984) and with data from a new pairwise similarity judgment experiment.
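The mechanism described above can be sketched in a few lines. This is a simplified illustration, assuming a city-block metric and equal attention shares across features (the actual models may use learned or estimated attention weights); the feature values are arbitrary.

```python
# Illustrative sketch (assumed city-block metric, equal attention shares):
# distance is the attention-weighted sum of feature value differences,
# where the attention shares sum to 1 across all features present.

def weighted_distance(x, y):
    """City-block distance with relative attention: each feature gets 1/n of the attention."""
    n = len(x)
    return sum(abs(a - b) / n for a, b in zip(x, y))

# Two objects differing on a single feature:
x, y = [1.0], [0.0]
d_before = weighted_distance(x, y)   # full attention falls on the differing feature

# Add two common features (identical values): attention now spreads over
# three features, so the one difference receives only 1/3 of the attention.
x2, y2 = [1.0, 5.0, 5.0], [0.0, 5.0, 5.0]
d_after = weighted_distance(x2, y2)  # smaller distance, hence higher similarity
```

Here `d_before` is 1.0 and `d_after` is 1/3: adding common features shrinks the weighted distance even though no differing feature changed, which is exactly the common features effect the relative attention mechanism predicts.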
This is an in-person presentation on July 20, 2023 (11:20 ~ 11:40 UTC).
We present a novel cognitive model of reading based on a continuous flow of information approach, where partial information from different levels of representation is continuously being made available to next levels. In an example application, we implement the model in a hierarchical Bayesian framework and fit it to self-paced reading times data: a reading task where one word is presented at a time and the presentation time is controlled by the experimental subject. The results show that the model provides a reasonable fit to word-level reading times, and can account for two previously observed findings: (i) reading times are much shorter than the minimum time required for all cognitive processes that should take place, and (ii) the processing difficulty of a word affects the reading times of subsequent words (i.e., spillover or lag effects). Computational models have explained these findings through parafoveal preview, that is, the partial processing of upcoming words during reading before they are directly fixated by the eyes. Our model provides an explanation for these findings that is relevant for natural reading, but also, crucially, for self-paced reading, where parafoveal preview is not possible.
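The two findings can be reproduced qualitatively with a toy carry-over process. This is not the authors' hierarchical Bayesian model; the fixed per-word processing capacity, the base time, and all millisecond values are assumptions made only to illustrate how unfinished processing can spill over into later words.

```python
# Illustrative sketch (not the authors' model): under a continuous flow of
# information, a word's processing need not finish before the reader advances;
# the unfinished demand spills over into the next word's reading time.

def reading_times(demands, capacity=200.0, base=150.0):
    """Each word receives at most `capacity` ms of processing while displayed;
    unfinished demand (ms) carries over and inflates subsequent reading times."""
    times, leftover = [], 0.0
    for demand in demands:
        total = demand + leftover   # current word's demand plus spillover
        done = min(total, capacity) # processing completed on this word
        leftover = total - done     # carried over to the next word
        times.append(base + done)
    return times

# A hard word (400 ms demand) is read in less time than it needs (finding i),
# and it inflates reading times on the easy words that follow (finding ii):
print(reading_times([100.0, 400.0, 100.0, 100.0]))
```

In this run the hard word's reading time (350 ms) is far below its 400 ms demand plus base time, and the following easy words are slowed relative to the first, mirroring the spillover pattern the abstract describes.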
This is an in-person presentation on July 20, 2023 (11:40 ~ 12:00 UTC).