Using sampling algorithms to explain human random generation
Many computational approaches to cognition argue that people's decisions are based on examples drawn from memory. But what mechanism do people use to come up with those examples? In this work, we study how the mind generates these examples by asking participants to produce long sequences of items at random. Although previous random generation research has focused exclusively on uniform distributions, we find that people can generate items from more complex distributions (such as people's heights), while exhibiting the same systematic deviations from true randomness. We propose that to produce new items, people employ an internal sampling algorithm like those used in computer science – algorithms which have previously been used to explain other features of human behavior, such as how people reason with probabilities. We find that these algorithms approximate people's random sequences better than previous computational models. We then evaluate which qualitative components of these sampling algorithms best emulate human behavior: people's sequences are most similar to those of samplers that propose new states based on the gradient of the space (such as HMC) and that run several replicas at different temperatures (such as MC3). By identifying the algorithms used in random generation, our results may be used to create more accurate sequential sampling models of decision making that better reflect how evidence is accumulated.
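To make the temperature-replica idea concrete, the following is a minimal sketch of Metropolis-coupled MCMC (MC3), one of the sampler families the abstract refers to. It is illustrative only, not the authors' implementation: the target distribution (a Normal(170, 7) stand-in for human heights), the temperature ladder, the proposal step size, and the function name `mc3_sample` are all assumptions chosen for the example.

```python
import math
import random

def mc3_sample(log_p, n_steps, x0=0.0, temps=(1.0, 2.0, 4.0), step=1.0, seed=0):
    """MC3 sketch: one Metropolis chain per temperature, plus occasional
    state swaps between adjacent chains. Returns the cold-chain samples."""
    rng = random.Random(seed)
    states = [x0 for _ in temps]
    cold_samples = []
    for _ in range(n_steps):
        # Within-chain Metropolis update on the tempered target p(x)^(1/T)
        for k, T in enumerate(temps):
            x_new = states[k] + rng.gauss(0.0, step)
            log_acc = (log_p(x_new) - log_p(states[k])) / T
            if math.log(max(rng.random(), 1e-300)) < log_acc:
                states[k] = x_new
        # Swap proposal between a random pair of adjacent temperatures
        j = rng.randrange(len(temps) - 1)
        log_swap = (log_p(states[j + 1]) - log_p(states[j])) * (
            1.0 / temps[j] - 1.0 / temps[j + 1]
        )
        if math.log(max(rng.random(), 1e-300)) < log_swap:
            states[j], states[j + 1] = states[j + 1], states[j]
        cold_samples.append(states[0])
    return cold_samples

# Hypothetical non-uniform target: "heights" ~ Normal(mean=170 cm, sd=7 cm)
log_p = lambda x: -0.5 * ((x - 170.0) / 7.0) ** 2
samples = mc3_sample(log_p, n_steps=20000, x0=170.0, step=3.0, seed=1)
mean = sum(samples) / len(samples)
```

The hot replicas explore the space broadly and pass promising states down to the cold chain via swaps, which is the qualitative mechanism the paper compares against human sequences.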