Using Cognitive Diagnostic Modeling to Investigate Learning Taxonomy Assumptions
Bloom’s Taxonomy (BT) (Bloom, 1956) and Bloom’s Revised Taxonomy (BRT) (Anderson et al., 2001) are widely used to guide the design and evaluation of learning assessments, but few studies have investigated the underlying assumptions of such taxonomies. Data from two undergraduate social psychology multiple-choice exams were analyzed using cognitive diagnostic modeling (CDM). One exam comprised 33 questions taken by 86 students; the other comprised 58 questions taken by 47 students. We used key words in exam questions to sort each question into one of the skill categories that constitute the “understanding” rung of BRT’s cognitive-processes hierarchy: “Explaining” (“E”), “Classifying/Comparing” (“CC”), or “Inferring/Interpreting” (“II”). Next, we specified two Deterministic Inputs, Noisy “And” gate (DINA) models, which predict the probability of correctly answering an exam question. The “Exclusive Resources” (ER) model assumed each item required only the latent skill corresponding to its category. The second model, a “Shared Resources” (SR) model representative of BRT, added the specification that all items require a common latent skill. Both the Bayesian Information Criterion (BIC) and its sampling error were estimated using nonparametric bootstrapping, and the Bayes Factor (BF) was calculated from the average BICs. The BF analysis indicated that the ER model was more likely than the SR model for both exams. These findings contradict a foundational assumption of BT and BRT that higher-order inference involving explaining, classifying/comparing, and inferring/interpreting requires the existence of a shared latent skill (e.g., remembering). The relevance of this methodology for evaluating learning-taxonomy assumptions using CDMs is discussed.
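The two quantitative ingredients of the abstract can be sketched in code: the DINA item response function (an examinee answers correctly with probability 1 − slip when they have mastered every skill the item’s Q-matrix row requires, and with the guessing probability otherwise), and the BIC-based Bayes Factor approximation BF ≈ exp((BIC_SR − BIC_ER)/2). This is a minimal illustrative sketch, not the authors’ estimation code; the function names, example Q-matrix rows, and slip/guess values are hypothetical, and the paper’s actual model fitting would use a dedicated CDM package.

```python
import math

def dina_p_correct(alpha, q_row, slip, guess):
    """DINA item response function: P(correct | skill profile).

    alpha : examinee skill-mastery vector (0/1 per latent skill)
    q_row : Q-matrix row for the item (1 = skill required)
    slip, guess : item-level slip and guessing parameters
    """
    # eta = 1 iff the examinee has mastered every required skill
    eta = all(a >= q for a, q in zip(alpha, q_row))
    return (1 - slip) if eta else guess

def bayes_factor_from_bic(bic_er, bic_sr):
    """Approximate BF favoring the ER model from the two models' BICs."""
    return math.exp((bic_sr - bic_er) / 2.0)

# Hypothetical example: skills ordered (E, CC, II) for the ER model.
# An "Explaining" item in the ER Q-matrix requires only the E skill;
# in the SR model its row would gain a fourth, shared-skill column.
er_row_explaining = [1, 0, 0]
masters_e = [1, 0, 0]   # examinee who has mastered only E
masters_cc = [0, 1, 0]  # examinee who has mastered only CC

p_hit = dina_p_correct(masters_e, er_row_explaining, slip=0.1, guess=0.2)
p_guess = dina_p_correct(masters_cc, er_row_explaining, slip=0.1, guess=0.2)
# p_hit = 0.9 (1 - slip); p_guess = 0.2 (guess)
```

A BF greater than 1 from `bayes_factor_from_bic` favors the ER model, matching the direction of the comparison reported in the abstract.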