Systems alignment in concept learning: evidence from children's early concepts and beyond
The standard view of learning, be it supervised, unsupervised, or semi-supervised, is event-based (e.g., a caregiver pointing to a dog and saying "dog"). However, recent work suggests that people also engage in a process called systems alignment in learning contexts. Similarity structures have been shown to align across domains: for example, objects that are spoken about in similar linguistic contexts also appear in similar visual contexts. This is a potentially rich source of information that human learners could exploit. Indeed, recent work demonstrates that humans make use of alignable signals when they are available, both to improve learning efficiency and to perform zero-shot generalisation. Here, we present evidence that alignment processes could play a role in early concept acquisition. We find that children's early concepts form near-optimal sets for inferring new concepts through systems alignment. By analysing the structural features of early concept sets, we find that this is facilitated by their uniquely dense connectivity, which we suggest is conducive to alignment because short-range semantic relationships are particularly stable. Feeding these insights from early concept acquisition back into a machine learning pipeline, we build generative models that leverage these key structural features to construct optimal knowledge states. The resultant concept sets demonstrate an improved capacity for learning new concepts. Further inspired by these findings, we discuss the use of alignment-based priors for cross-modal learning in other machine learning systems, for example in the task of image classification.
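To make the alignment idea concrete, the following is a minimal sketch, not the authors' implementation: the toy concept list, the two orthogonal "views" standing in for language and vision, and the Procrustes mapping are all illustrative assumptions. It shows the two ingredients the abstract appeals to: similarity structure that correlates across domains, and a cross-domain mapping learned from a few known anchor concepts that labels a withheld concept zero-shot.

```python
# Illustrative sketch of systems alignment (not the paper's code): two domains
# share pairwise similarity structure, so a mapping learned from a few known
# "anchor" concepts supports zero-shot labelling of a novel concept.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: eight concepts with a shared latent geometry; "text" and "vision"
# are different orthogonal views of that geometry plus a little noise.
concepts = ["dog", "cat", "bird", "car", "bus", "apple", "banana", "cup"]
latent = rng.normal(size=(len(concepts), 4))
Q_text, _ = np.linalg.qr(rng.normal(size=(4, 4)))
Q_vision, _ = np.linalg.qr(rng.normal(size=(4, 4)))
text = latent @ Q_text + 0.05 * rng.normal(size=(len(concepts), 4))
vision = latent @ Q_vision + 0.05 * rng.normal(size=(len(concepts), 4))

# The alignable signal: items that are close in one domain are close in the other.
def pairwise_distances(X):
    return np.linalg.norm(X[:, None] - X[None, :], axis=-1)

iu = np.triu_indices(len(concepts), k=1)
rho = np.corrcoef(pairwise_distances(text)[iu], pairwise_distances(vision)[iu])[0, 1]
print(f"cross-domain similarity-structure correlation: {rho:.2f}")

def procrustes_map(A, B):
    """Orthogonal matrix R minimising ||A @ R - B|| (classic Procrustes solution)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Learn the cross-domain mapping from a handful of already-known concepts...
anchors = [0, 1, 2, 4, 5, 7]
R = procrustes_map(text[anchors], vision[anchors])

# ...then infer a withheld concept zero-shot: project its text representation
# into the visual space and read off the nearest visual neighbour.
novel = concepts.index("banana")
projected = text[novel] @ R
guess = int(np.argmin(np.linalg.norm(vision - projected, axis=1)))
print(f"withheld concept: {concepts[novel]} | inferred via alignment: {concepts[guess]}")
```

In practice the two spaces would presumably come from distributional word statistics and image features rather than a shared latent sample, and alignment would be learned and evaluated over whole similarity systems rather than a single held-out item; the sketch only conveys the logic of exploiting aligned structure for zero-shot inference.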