Generating novel experimental hypotheses from language models: A case study on cross-dative generalization
Kanishka Misra, Najoung Kim
2024-08-12

Summary
This paper explores how language models can help generate new experimental hypotheses about how people learn language, focusing on a phenomenon called cross-dative generalization.
What's the problem?
Understanding how children learn to use language, especially complex structures like dative constructions (e.g., 'she gave me the ball' vs. 'she gave the ball to me'), is challenging. Traditional methods of studying this rely on observing children directly, which is time-consuming and limits how many hypotheses can be tested. Researchers need a way to generate promising new experimental hypotheses about this learning process more efficiently.
What's the solution?
The authors propose using neural network language models (LMs) trained on child-directed speech as simulated learners for generating new experimental hypotheses. They focus on cross-dative generalization, where a verb encountered in one dative construction (e.g., 'she pilked me the ball') is productively extended to the other ('she pilked the ball to me'). By systematically varying the exposure context in which a novel verb is introduced, they analyze how readily the LMs use the verb in the unmodeled construction. The LMs first replicate known patterns of children's generalization, and further simulations yield a new hypothesis about which features of the exposure context facilitate learning.
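The core measurement could be sketched as follows. This is a toy illustration, not the authors' code: the hand-set log-probabilities stand in for a real LM trained on child-directed speech, and the sentences and scoring function are hypothetical.

```python
import math

# Hypothetical stand-in for an LM's sentence log-probabilities. In the paper
# these would come from a language model trained on child-directed speech.
TOY_LOGPROBS = {
    "she pilked me the ball": math.log(0.04),    # exposure (double-object) dative
    "she pilked the ball to me": math.log(0.02), # unmodeled (prepositional) dative
    "she pilked the ball me": math.log(0.001),   # ungrammatical control
}

def sentence_logprob(sentence: str) -> float:
    """Return the toy log-probability assigned to a full sentence."""
    return TOY_LOGPROBS[sentence]

def generalization_score(unmodeled: str, control: str) -> float:
    """Preference for the unmodeled dative over an ungrammatical control.

    A positive score means the simulated learner extends the novel verb to
    the construction it never saw -- cross-dative generalization.
    """
    return sentence_logprob(unmodeled) - sentence_logprob(control)

score = generalization_score("she pilked the ball to me", "she pilked the ball me")
print(f"generalization score: {score:.2f}")  # positive => CDG-like behavior
```

Varying the exposure sentence (and hence the toy log-probabilities) across conditions is what lets the paradigm ask which exposure features raise this score.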
Why it matters?
This research is important because it shows that language models can serve as useful tools for generating insights into language acquisition. By using LMs to simulate learning, researchers can explore new questions and hypotheses far more efficiently than with child studies alone, potentially deepening our understanding of how people learn languages and, eventually, informing strategies for supporting that learning.
Abstract
Neural network language models (LMs) have been shown to successfully capture complex linguistic knowledge. However, their utility for understanding language acquisition is still debated. We contribute to this debate by presenting a case study where we use LMs as simulated learners to derive novel experimental hypotheses to be tested with humans. We apply this paradigm to study cross-dative generalization (CDG): productive generalization of novel verbs across dative constructions (she pilked me the ball/she pilked the ball to me) -- acquisition of which is known to involve a large space of contextual features -- using LMs trained on child-directed speech. We specifically ask: "what properties of the training exposure facilitate a novel verb's generalization to the (unmodeled) alternate construction?" To answer this, we systematically vary the exposure context in which a novel dative verb occurs in terms of the properties of the theme and recipient, and then analyze the LMs' usage of the novel verb in the unmodeled dative construction. We find LMs to replicate known patterns of children's CDG, as a precondition to exploring novel hypotheses. Subsequent simulations reveal a nuanced role of the features of the novel verbs' exposure context on the LMs' CDG. We find CDG to be facilitated when the first postverbal argument of the exposure context is pronominal, definite, short, and conforms to the prototypical animacy expectations of the exposure dative. These patterns are characteristic of harmonic alignment in datives, where the argument with features ranking higher on the discourse prominence scale tends to precede the other. This gives rise to a novel hypothesis that CDG is facilitated insofar as the features of the exposure context -- in particular, its first postverbal argument -- are harmonically aligned. We conclude by proposing future experiments that can test this hypothesis in children.
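The harmonic-alignment hypothesis in the abstract can be made concrete with a small sketch. The feature coding below is a hypothetical illustration of the idea, not the paper's implementation: each postverbal argument is scored on discourse-prominence features (pronominality, definiteness, shortness, animacy conformity), and the exposure context counts as harmonically aligned when its first postverbal argument ranks at least as high as the second.

```python
# Hypothetical binary feature coding for a dative argument. Higher totals
# mean higher discourse prominence; the exact scale is illustrative only.
def prominence(arg: dict) -> int:
    return (arg["pronominal"] + arg["definite"]
            + arg["short"] + arg["animacy_match"])

def harmonically_aligned(first_arg: dict, second_arg: dict) -> bool:
    """True when the first postverbal argument is at least as prominent as
    the second -- the configuration the paper finds to facilitate CDG."""
    return prominence(first_arg) >= prominence(second_arg)

# 'she pilked me the ball': first argument 'me' is a short, definite,
# animate pronoun; second argument 'the ball' is a longer definite NP.
me = {"pronominal": 1, "definite": 1, "short": 1, "animacy_match": 1}
the_ball = {"pronominal": 0, "definite": 1, "short": 0, "animacy_match": 1}

print(harmonically_aligned(me, the_ball))    # aligned exposure context
print(harmonically_aligned(the_ball, me))    # misaligned ordering
```

Under this coding, exposure contexts whose first argument is pronominal, definite, short, and animacy-conforming come out aligned, matching the facilitation pattern reported in the abstract.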