The effects of personalization on category learning: An empirical investigation
Personalization algorithms are widely used on the internet to generate recommendations tailored to individual users. However, these algorithms have also been blamed for limiting content diversity, which may foster confirmation bias and polarization (e.g., Pariser, 2011). The mechanisms by which such algorithms affect cognitive processes and internal representations remain poorly understood. In this study, we investigate how personalization techniques can hinder optimal category learning using an online behavioral experiment and a model-based analysis. In the experiment, participants first studied categories of aliens under different levels of algorithmic personalization, in addition to randomized (i.e., control) and self-directed learning conditions. After the learning phase, participants’ knowledge was tested using an independent stimulus set. The results show that participants in the algorithmic personalization conditions developed selective sampling profiles and more distorted category representations. Participants in the personalization conditions also tended to show inflated confidence, especially when they made incorrect categorization decisions. In particular, the frequency with which each category was presented during the learning phase was a key variable in explaining overconfidence. To pursue a mechanistic explanation of the personalization effect, we also fit the Adaptive Attention Representation Model (AARM; Galdo, Weichart, Sloutsky, & Turner, 2022) to the collected data. The model-based analysis suggests that it is important to understand how humans evaluate the similarity between exemplars when stimulus information is only partially encoded. If learners assume that unobserved information is similar to what they have already encoded, this tendency is likely to produce higher confidence.
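To make the final point concrete, the sketch below shows a GCM-style exemplar similarity computation under partial encoding. This is a minimal illustration under stated assumptions, not AARM's actual implementation: the function names, the "assume-match" rule for unencoded dimensions, and the background constant in the choice rule are all introduced here for exposition. When unencoded dimensions are assumed to match (contributing zero distance), every stored exemplar looks more similar overall, and with a fixed background term the winning category receives a higher choice probability, i.e., higher confidence.

```python
import numpy as np

def similarity(probe, exemplar, attention, encoded, c=2.0, unencoded="assume_match"):
    """GCM-style similarity: exp(-c * attention-weighted city-block distance).

    probe, exemplar : 1-D arrays of feature values in [0, 1]
    attention       : nonnegative attention weights (summing to 1)
    encoded         : boolean mask, True where the probe dimension was encoded
    unencoded       : "assume_match"  -> unencoded dimensions contribute 0 distance
                      "assume_differ" -> unencoded dimensions contribute maximal (1) distance
    """
    dist = np.abs(np.asarray(probe) - np.asarray(exemplar))
    fill = 0.0 if unencoded == "assume_match" else 1.0
    dist = np.where(encoded, dist, fill)
    return float(np.exp(-c * np.sum(np.asarray(attention) * dist)))

def category_confidence(probe, exemplars, labels, attention, encoded,
                        background=0.1, **kwargs):
    """Choice probabilities from summed exemplar similarities, with a background
    constant so that absolute similarity (not just its ratio) affects confidence."""
    labels = np.asarray(labels)
    sims = np.array([similarity(probe, ex, attention, encoded, **kwargs)
                     for ex in exemplars])
    cats = np.unique(labels)
    evidence = np.array([sims[labels == k].sum() for k in cats])
    probs = evidence / (evidence.sum() + background)
    return dict(zip(cats, probs))

# Hypothetical example: two stored exemplars, one per category; half the probe is unencoded.
exemplars = [np.array([0.9, 0.8, 0.2, 0.1]), np.array([0.1, 0.2, 0.8, 0.9])]
labels = ["A", "B"]
probe = np.array([0.8, 0.7, 0.3, 0.2])
attention = np.array([0.25, 0.25, 0.25, 0.25])
encoded = np.array([True, True, False, False])   # last two dimensions were not encoded

print(category_confidence(probe, exemplars, labels, attention, encoded,
                          unencoded="assume_match"))   # higher confidence in "A"
print(category_confidence(probe, exemplars, labels, attention, encoded,
                          unencoded="assume_differ"))  # lower confidence in "A"
```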