
Qwen2.5 Technical Report

Qwen, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao

2024-12-20

Summary

This paper introduces Qwen2.5, a new series of large language models (LLMs) designed to serve a wide range of user needs. It describes improvements in both how the models are trained and how well they understand and generate language.

What's the problem?

Previous versions of language models had limitations in their training data and methods, which affected their ability to understand complex tasks and follow instructions accurately. They also struggled with generating long texts and handling structured data efficiently.

What's the solution?

Qwen2.5 addresses these issues by training on a much larger dataset—18 trillion tokens, up from the previous 7 trillion. It also applies supervised fine-tuning on over a million samples and multistage reinforcement learning to better align the models with human preferences. The result is a series of models, available in a range of sizes for different applications, that perform better on tasks like long text generation, coding, and mathematics.
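The summary above does not spell out which preference-alignment objective Qwen2.5 uses, but Direct Preference Optimization (DPO) is one widely used method for this kind of reinforcement learning from preference pairs. As a hedged illustration only (the function and the numeric inputs are made up for this sketch, not taken from the report), here is the DPO loss for a single chosen/rejected response pair:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Inputs are sequence log-probabilities of the chosen and rejected
    responses under the policy being trained and under a frozen
    reference model. beta controls how strongly the policy may
    deviate from the reference.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)): small when the policy prefers the
    # chosen answer more than the reference model does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Illustrative values: a policy that widens the gap toward the chosen
# response gets a loss below log(2), the value at a zero margin.
low = dpo_loss(-5.0, -9.0, -6.0, -8.0)   # policy improved the preference
high = dpo_loss(-9.0, -5.0, -8.0, -6.0)  # policy reversed the preference
```

Minimizing this loss over many such pairs nudges the model toward responses humans preferred, without training a separate reward model.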

Why it matters?

This research is important because it enhances the capabilities of language models, making them more effective tools for tasks like writing, coding, and problem-solving. By improving these models, Qwen2.5 can support a wider range of applications in education, technology, and beyond, ultimately helping users achieve better results in their work.

Abstract

In this report, we introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen 2.5 has been significantly improved during both the pre-training and post-training stages. In terms of pre-training, we have scaled the high-quality pre-training datasets from the previous 7 trillion tokens to 18 trillion tokens. This provides a strong foundation for common sense, expert knowledge, and reasoning capabilities. In terms of post-training, we implement intricate supervised finetuning with over 1 million samples, as well as multistage reinforcement learning. Post-training techniques enhance human preference, and notably improve long text generation, structural data analysis, and instruction following. To handle diverse and varied use cases effectively, we present Qwen2.5 LLM series in rich sizes. Open-weight offerings include base and instruction-tuned models, with quantized versions available. In addition, for hosted solutions, the proprietary models currently include two mixture-of-experts (MoE) variants: Qwen2.5-Turbo and Qwen2.5-Plus, both available from Alibaba Cloud Model Studio. Qwen2.5 has demonstrated top-tier performance on a wide range of benchmarks evaluating language understanding, reasoning, mathematics, coding, human preference alignment, etc. Specifically, the open-weight flagship Qwen2.5-72B-Instruct outperforms a number of open and proprietary models and demonstrates competitive performance to the state-of-the-art open-weight model, Llama-3-405B-Instruct, which is around 5 times larger. Qwen2.5-Turbo and Qwen2.5-Plus offer superior cost-effectiveness while performing competitively against GPT-4o-mini and GPT-4o respectively. Additionally, as the foundation, Qwen2.5 models have been instrumental in training specialized models such as Qwen2.5-Math, Qwen2.5-Coder, QwQ, and multimodal models.