
TiKMiX: Take Data Influence into Dynamic Mixture for Language Model Pre-training

Yifan Wang, Binbin Liu, Fengze Liu, Yuanfan Guo, Jiyao Deng, Xuecheng Wu, Weidong Zhou, Xiaohuan Zhou, Taifeng Wang

2025-09-01


Summary

This paper focuses on improving how language models are trained by carefully controlling the mix of data they learn from, and dynamically adjusting that mix during the training process.

What's the problem?

When training large language models, data from different sources is usually mixed in fixed proportions that never change during training. However, what the model *needs* from each type of data shifts as it gets better: early on it might need lots of basic grammar practice, but later it needs more complex reasoning examples. Tracking these changing needs efficiently is really hard, because measuring the impact of different data mixes takes a lot of computing power.

What's the solution?

The researchers developed a method called TiKMiX that automatically adjusts the data mix as the model trains. It works by measuring how much each type of data influences the model’s learning, a metric they call 'Group Influence'. TiKMiX then searches for the data mix that maximizes this influence. They created two versions: TiKMiX-D, which directly optimizes for the best mix, and TiKMiX-M, which *predicts* a good mix using a separate regression model. They tested both while training models on up to 1 trillion tokens. A rough sketch of the idea is shown below.
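To make the idea concrete, here is a minimal, hypothetical sketch of a dynamic-mixture training loop. This is not the paper's implementation: `estimate_group_influence` is a placeholder for the real Group Influence metric, the toy domains and scores are invented, and the softmax re-weighting is just one simple way to turn influence scores into sampling proportions.

```python
import math
import random

# Toy corpus: each "domain" is just a list of example strings.
DOMAINS = {
    "web":  [f"web doc {i}" for i in range(100)],
    "code": [f"code file {i}" for i in range(100)],
    "math": [f"math problem {i}" for i in range(100)],
}

def estimate_group_influence(domain, step):
    """Hypothetical stand-in for the paper's Group Influence metric.

    In TiKMiX this would measure how much training on `domain` helps the
    current model (e.g. via influence on a held-out set); here we just
    return a made-up score that drifts with the training step to mimic
    evolving preferences.
    """
    drift = {"web": -0.001, "code": 0.0005, "math": 0.002}[domain]
    return 1.0 + drift * step + random.uniform(-0.05, 0.05)

def influence_to_mixture(scores, temperature=1.0):
    """Turn per-domain influence scores into sampling proportions (softmax)."""
    exps = {d: math.exp(s / temperature) for d, s in scores.items()}
    total = sum(exps.values())
    return {d: v / total for d, v in exps.items()}

def sample_batch(mixture, batch_size=8):
    """Draw a training batch whose composition follows the current mixture."""
    names = list(mixture)
    weights = [mixture[d] for d in names]
    picks = random.choices(names, weights=weights, k=batch_size)
    return [random.choice(DOMAINS[d]) for d in picks]

# Dynamic mixing loop: every `interval` steps, re-measure influence and
# re-weight the data mixture before sampling the next batches.
mixture = {d: 1.0 / len(DOMAINS) for d in DOMAINS}  # start uniform
interval = 100
for step in range(0, 500, interval):
    scores = {d: estimate_group_influence(d, step) for d in DOMAINS}
    mixture = influence_to_mixture(scores)
    batch = sample_batch(mixture)
    print(step, len(batch), {d: round(w, 2) for d, w in mixture.items()})
```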

Why it matters?

This work is important because it shows that dynamically adjusting the data mix leads to significantly better language models. TiKMiX-D outperforms state-of-the-art methods like REGMIX while using just 20% of the computational resources, and TiKMiX-M improves performance by an average of 2% across 9 downstream benchmarks. It demonstrates that understanding and responding to a model’s changing learning preferences is key to building more powerful AI.

Abstract

The data mixture used in the pre-training of a language model is a cornerstone of its final performance. However, a static mixing strategy is suboptimal, as the model's learning preferences for various data domains shift dynamically throughout training. Crucially, observing these evolving preferences in a computationally efficient manner remains a significant challenge. To address this, we propose TiKMiX, a method that dynamically adjusts the data mixture according to the model's evolving preferences. TiKMiX introduces Group Influence, an efficient metric for evaluating the impact of data domains on the model. This metric enables the formulation of the data mixing problem as a search for an optimal, influence-maximizing distribution. We solve this via two approaches: TiKMiX-D for direct optimization, and TiKMiX-M, which uses a regression model to predict a superior mixture. We trained models with different numbers of parameters, on up to 1 trillion tokens. TiKMiX-D exceeds the performance of state-of-the-art methods like REGMIX while using just 20% of the computational resources. TiKMiX-M leads to an average performance gain of 2% across 9 downstream benchmarks. Our experiments reveal that a model's data preferences evolve with training progress and scale, and we demonstrate that dynamically adjusting the data mixture based on Group Influence, a direct measure of these preferences, significantly improves performance by mitigating the underdigestion of data seen with static ratios.
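For the TiKMiX-M variant, the abstract only says that a regression model is used to predict a superior mixture. A minimal sketch of that idea, assuming a simple linear regressor fit on (candidate mixture, measured influence) pairs and a random search over candidate mixtures, could look like the following; all data, numbers, and names here are illustrative placeholders, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_domains = 3  # e.g. web, code, math

# Hypothetical observations: candidate mixtures (rows summing to 1) paired
# with a measured quality signal, e.g. the Group Influence obtained from a
# short proxy run. Randomly generated here purely for illustration.
candidate_mixtures = rng.dirichlet(np.ones(n_domains), size=64)
true_weights = np.array([0.2, 0.5, 0.3])  # unknown in practice
measured_influence = candidate_mixtures @ true_weights + rng.normal(0, 0.01, 64)

# Fit a simple linear regression: influence ~ mixture @ w + b.
X = np.hstack([candidate_mixtures, np.ones((64, 1))])
coef, *_ = np.linalg.lstsq(X, measured_influence, rcond=None)

def predict_influence(mixture):
    """Predict the influence a given data mixture would achieve."""
    return float(np.append(mixture, 1.0) @ coef)

# Search a dense set of candidate mixtures and keep the one the regressor
# predicts to be best; this is the "predict a superior mixture" step.
search_space = rng.dirichlet(np.ones(n_domains), size=10_000)
best = search_space[np.argmax([predict_influence(m) for m in search_space])]
print("predicted-best mixture:", np.round(best, 3))
```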