Kai Preuss
Nele Russwinkel
Bespoke cognitive models of mental spatial transformation, like those used in mental rotation tasks, can produce a very close fit to human data. However, these models usually lack grounding in a common spatial theory, which makes it difficult to assess their validity and impedes research insights beyond task-specific limitations. We introduce a spatial module for the cognitive architecture ACT-R that serves as a framework offering unified mechanisms for mental spatial transformation, intended to alleviate these problems. The module combines symbolic semantic and spatial information processing for three-dimensional objects, while suggesting constraints on this processing to ensure high theoretical validity and cognitive plausibility. A mental rotation model was created using this module, avoiding custom-made mechanisms in favor of a generalizable approach. The model reproduces the results of a mental rotation experiment well, including the effects of rotation disparity and of improvement over time on reaction times. Based on this, the spatial module may serve as a stepping stone towards unified, application-oriented research into mental spatial transformation.