UltraIF: Advancing Instruction Following from the Wild

Kaikai An, Li Sheng, Ganqu Cui, Shuzheng Si, Ning Ding, Yu Cheng, Baobao Chang

2025-02-07

Summary

This paper introduces UltraIF, a method that helps AI language models better understand and follow complex instructions by breaking them into smaller, simpler parts and relying only on open-source data.

What's the problem?

Many open-source AI models struggle to follow complicated instructions as well as models built by leading companies. They often fall behind because they lack effective ways to handle real-world user requests, especially when the instructions are detailed, multi-step, or carry several constraints at once.

What's the solution?

The researchers created UltraIF, which decomposes real-world instructions into simpler queries, constraints, and evaluation questions that check each constraint. A trained component called UltraComposer then recombines these pieces to synthesize new, complicated instructions, while the paired evaluation questions let the system filter its own responses for quality. Applied to the LLaMA-3.1-8B model, this approach matched the performance of more advanced instruction-following models without any benchmark-specific training data.
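The decompose-compose-filter loop described above can be sketched in a few lines. Note this is a toy illustration with hypothetical function names, not the paper's actual implementation: the real UltraIF uses trained language models for decomposition, composition (UltraComposer), and judging, whereas here simple string operations stand in for them.

```python
from dataclasses import dataclass

@dataclass
class DecomposedPrompt:
    query: str             # simplified core request
    constraints: list      # constraints stripped from the original prompt
    eval_questions: list   # one quality check per constraint

def decompose(prompt, constraint_markers):
    # Toy stand-in for the LLM-based decomposer: peel off any constraint
    # phrases we recognize and pair each with an evaluation question.
    query, constraints, eval_questions = prompt, [], []
    for marker in constraint_markers:
        if marker in query:
            query = query.replace(marker, "").strip().rstrip(",.")
            constraint = marker.strip(" ,")
            constraints.append(constraint)
            eval_questions.append(f"Does the response satisfy: '{constraint}'?")
    return DecomposedPrompt(query, constraints, eval_questions)

def compose(base_query, decomposed):
    # Stand-in for UltraComposer: attach previously mined constraints to a
    # new base query, carrying the evaluation questions along for filtering.
    prompt = base_query + ", " + " and ".join(decomposed.constraints)
    return prompt, decomposed.eval_questions

def filter_response(response, eval_questions, judge):
    # Keep a candidate response only if the judge (an LLM in the real
    # system) answers "yes" to every evaluation question.
    return all(judge(response, q) for q in eval_questions)
```

For example, decomposing "Write a poem about the sea, in exactly four lines." yields the bare query plus the line-count constraint; composing that constraint onto "Write a story about a cat" produces a new constrained instruction, and `filter_response` discards generated answers that fail the paired check.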

Why it matters?

This research is important because it makes advanced instruction-following capabilities more accessible to open-source AI projects. By improving how AI handles complex tasks, UltraIF could lead to better virtual assistants and tools that can understand and respond to detailed user requests more effectively.

Abstract

Instruction following is what makes modern large language models (LLMs) helpful assistants. However, the key to taming LLMs on complex instructions remains elusive, as there are huge gaps between models trained by the open-source community and those trained by leading companies. To bridge the gap, we propose UltraIF, a simple and scalable approach for building LLMs that can follow complex instructions using open-source data. UltraIF first decomposes real-world user prompts into simpler queries, constraints, and corresponding evaluation questions for the constraints. Then, we train an UltraComposer to compose constraint-associated prompts with evaluation questions. This prompt composer allows us to synthesize complicated instructions as well as filter responses with evaluation questions. In our experiments, for the first time, we successfully align LLaMA-3.1-8B-Base to catch up with its instruct version on 5 instruction-following benchmarks without any benchmark information, using only an 8B model as response generator and evaluator. The aligned model also achieved competitive scores on other benchmarks. Moreover, we show that UltraIF can further improve LLaMA-3.1-8B-Instruct through self-alignment, motivating broader use cases for the method. Our code will be available at https://github.com/kkk-an/UltraIF.