May 25th: Henry Conklin

Inductive Biases & Compositional Generalization

Henry Conklin, University of Edinburgh

Tuesday, May 25 2021, 11:00-12:00 BST
Zoom Details: [Please Request]

Neural networks increasingly perform a wide array of Natural Language Processing (NLP) tasks with high fidelity. Despite this, there is little evidence of their ability to generalize compositionally, or robustly outside of their training data. This disconnect between task performance and systematic generalization may be best explained as a result of underspecification: in supervised learning, training data alone may not adequately specify compositional strategies, or strategies that generalize robustly. We present experiments on a meta-augmented form of supervised learning as a way to incorporate prior bias into training and mitigate some issues of underspecification. We show how introducing certain kinds of bias motivated by human cognitive constraints can aid generalization performance on two different compositional generalization tasks. More broadly, these experiments show how biases introduced during training can be used to condition the kinds of generalization strategies that emerge. In the future, we hope that introducing more biases analogous to those found in humans will result in neural models that arrive at more human-like solutions.
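To give a rough flavour of the kind of training setup the abstract alludes to, the sketch below shows a generic first-order meta-learning loop (in the style of FOMAML/Reptile) on a toy regression problem. This is an illustrative sketch only, not the speaker's actual method: the model (a single scalar parameter), the task family (lines `y = a*x`), and all hyperparameter names are assumptions made for the example. The point is the structure: an inner loop adapts to each task, and the outer loop updates the shared initialization so that adaptation works well across tasks, which is one way a prior bias can be baked into training.

```python
def grad(theta, xs, a):
    # Gradient of mean squared error for the task y = a*x,
    # with model prediction y_hat = theta * x.
    return sum(2 * x * (theta * x - a * x) for x in xs) / len(xs)

def meta_train(tasks, xs, theta=0.0, inner_lr=0.05, outer_lr=0.05, steps=500):
    # Illustrative first-order meta-learning loop (hypothetical setup).
    for _ in range(steps):
        meta_grad = 0.0
        for a in tasks:
            # Inner loop: one gradient step of adaptation to this task.
            adapted = theta - inner_lr * grad(theta, xs, a)
            # First-order meta-gradient: gradient evaluated at the
            # adapted parameters (the second-order term is dropped).
            meta_grad += grad(adapted, xs, a)
        # Outer loop: move the shared initialization so that
        # post-adaptation loss improves on average across tasks.
        theta -= outer_lr * meta_grad / len(tasks)
    return theta

xs = [0.5, 1.0, 1.5, 2.0]
theta = meta_train(tasks=[1.0, 2.0, 3.0], xs=xs)
```

For this linear toy problem the meta-learned initialization converges toward the centre of the task family (here, the slope 2.0), i.e. a starting point from which each task is reachable in few adaptation steps.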