Pretraining Language Models for Diachronic Linguistic Change Discovery
Elisabeth Fittschen, Sabrina Li, Tom Lippincott, Leshem Choshen, Craig Messner
2025-04-10

Summary
This paper describes building AI language models that detect how language changes over time, such as spotting when new words appear or grammar shifts, by training each model only on texts from a specific historical period.
What's the problem?
Existing AI models trained on modern texts don't properly capture historical language change, and methods like fine-tuning a pre-trained model still leak information across time periods, making it hard to study how language evolves.
What's the solution?
The researchers trained smaller, faster-to-train AI models from scratch on texts from five historical time slices, so that each model sees only its assigned era, which makes it possible to track real language shifts without mixing information across periods.
Why it matters?
This helps historians and linguists study language evolution more accurately, for example tracking when a word like 'car' gained a new meaning, and could be used to analyze literature or historical documents automatically.
Abstract
Large language models (LLMs) have shown potential as tools for scientific discovery. This has engendered growing interest in their use in humanistic disciplines, such as historical linguistics and literary studies. These fields often construct arguments on the basis of delineations like genre, or more inflexibly, time period. Although efforts have been made to restrict inference to specific domains via fine-tuning or model editing, we posit that the only true guarantee is domain-restricted pretraining -- typically, a data- and compute-expensive proposition. We show that efficient pretraining techniques can produce useful models over corpora too large for easy manual inspection but too small for "typical" LLM approaches. We employ a novel date-attribution pipeline in order to obtain a temporally-segmented dataset of five 10-million-word slices. We train two corresponding five-model batteries over these corpus segments: one via efficient pretraining, and one via parameter-efficient finetuning of Llama3-8B. We find that the pretrained models are faster to train than the finetuned baselines and that they better respect the historical divisions of our corpus. Emphasizing speed and precision over ahistorical comprehensiveness enables a number of novel approaches to hypothesis discovery and testing in our target fields. Taking up diachronic linguistics as a testbed, we show that our method enables the detection of a diverse set of phenomena, including en masse lexical change, non-lexical (grammatical and morphological) change, and word sense introduction/obsolescence. We provide a ready-to-use pipeline that allows extension of our approach to other target fields with only minimal adaptation.
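To make the change-detection idea concrete, here is a minimal, hypothetical sketch of how a battery of period-restricted causal language models could be queried: the same sentence is scored by every model, and a sharp drop in surprisal under the later models flags a candidate sense introduction (e.g., the automobile sense of 'car'). The period labels, model paths, and the assumption that the slice models load as Hugging Face causal LMs are illustrative placeholders, not the paper's released pipeline.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical local paths and period labels for five period-restricted models.
PERIOD_MODELS = {
    "slice-1": "models/slice-1",
    "slice-2": "models/slice-2",
    "slice-3": "models/slice-3",
    "slice-4": "models/slice-4",
    "slice-5": "models/slice-5",
}

@torch.no_grad()
def mean_surprisal(model, tokenizer, text: str) -> float:
    """Average per-token negative log-likelihood (in nats) of `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    # With labels == input_ids, a causal LM returns the mean cross-entropy
    # over predicted tokens, i.e. the average surprisal we want.
    return model(ids, labels=ids).loss.item()

def score_across_periods(text: str) -> dict[str, float]:
    """Score one sentence under every period-restricted model."""
    scores = {}
    for period, path in PERIOD_MODELS.items():
        tokenizer = AutoTokenizer.from_pretrained(path)
        model = AutoModelForCausalLM.from_pretrained(path).eval()
        scores[period] = mean_surprisal(model, tokenizer, text)
    return scores

if __name__ == "__main__":
    # A usage that is anachronistic for earlier slices should score markedly
    # lower (less surprising) under the later models, flagging a candidate
    # sense introduction.
    sentence = "She parked the car outside the station."
    for period, nll in score_across_periods(sentence).items():
        print(f"{period}: mean surprisal = {nll:.2f} nats")
```

Scaled up over many words and contexts, cross-model comparisons of this kind are what would support the en masse lexical, grammatical, and sense-change analyses the abstract describes; the sketch above only illustrates the per-sentence scoring step.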