
Ruler: A Model-Agnostic Method to Control Generated Length for Large Language Models

Jiaming Li, Lei Zhang, Yunshui Li, Ziqiang Liu, Yuelin Bai, Run Luo, Longze Chen, Min Yang

2024-10-01


Summary

This paper introduces Ruler, a method that helps Large Language Models (LLMs) generate responses of specific lengths, making it easier for users to get the answers they need.

What's the problem?

Large Language Models often struggle to produce responses that meet specific length requirements. This can be frustrating for users who ask for a concise answer or a detailed explanation but get a response that is either too short or too long.

What's the solution?

The authors propose a new approach called Ruler, which uses something called Meta Length Tokens (MLTs) to help LLMs understand and follow length constraints in their responses. Ruler can generate these tokens even when no specific length is provided, allowing the model to adapt its output accordingly. The paper also introduces a Target Length Generation Task (TLG) to evaluate how well the models perform at meeting these length requirements.
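As a rough illustration of the idea, an MLT can be thought of as a special token prepended to the instruction that encodes the target length bucket. The token format and bucket boundaries below are hypothetical, chosen only to make the mechanism concrete; Ruler's actual tokens and ranges may differ.

```python
from typing import Optional

# Hypothetical sketch: attach a Meta Length Token (MLT) to an instruction.
# Token strings and bucket boundaries are illustrative, not Ruler's actual ones.

def attach_mlt(instruction: str, target_length: Optional[int]) -> str:
    """Prefix an instruction with an MLT encoding a coarse length bucket."""
    if target_length is None:
        # No explicit constraint: Ruler lets the model generate an MLT itself,
        # so we pass the instruction through unchanged here.
        return instruction
    # Bucket the requested word count into coarse ranges (illustrative only).
    if target_length <= 50:
        mlt = "[MLT:0-50]"
    elif target_length <= 150:
        mlt = "[MLT:50-150]"
    else:
        mlt = "[MLT:150+]"
    return f"{mlt} {instruction}"

print(attach_mlt("Summarize the paper.", 40))
# → [MLT:0-50] Summarize the paper.
```

The point of the bucketing is that the model only needs to learn a small vocabulary of length tokens rather than reason about arbitrary numbers, which is exactly the perception problem the paper identifies.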

Why it matters?

This research is important because it improves how LLMs respond to user requests by ensuring that the answers are the right length. This can enhance user experience and make AI tools more effective in various applications, such as education, customer service, and content creation.

Abstract

The instruction-following ability of large language models enables humans to interact with AI agents in a natural way. However, when required to generate responses of a specific length, large language models often struggle to meet users' needs due to their inherent difficulty in accurately perceiving numerical constraints. To explore the ability of large language models to control the length of generated responses, we propose the Target Length Generation Task (TLG) and design two metrics, Precise Match (PM) and Flexible Match (FM), to evaluate the model's performance in adhering to specified response lengths. Furthermore, we introduce a novel, model-agnostic approach called Ruler, which employs Meta Length Tokens (MLTs) to enhance the instruction-following ability of large language models under length-constrained instructions. Specifically, Ruler equips LLMs with the ability to generate responses of a specified length based on length constraints within the instructions. Moreover, Ruler can automatically generate an appropriate MLT when length constraints are not explicitly provided, demonstrating excellent versatility and generalization. Comprehensive experiments show the effectiveness of Ruler across different LLMs on the Target Length Generation Task, e.g., at the All level, an average gain of 27.97 on PM and 29.57 on FM. In addition, we conduct extensive ablation experiments to further substantiate the efficacy and generalization of Ruler. Our code and data are available at https://github.com/Geaming2002/Ruler.
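To make the two metrics concrete, here is a minimal sketch in the spirit of Precise Match and Flexible Match. The exact definitions and tolerances (the `slack` parameter, the word-based counting) are assumptions for illustration; the paper's formal definitions may differ.

```python
# Illustrative length-adherence checks in the spirit of PM and FM.
# Exact definitions and tolerances are assumed, not taken from the paper.

def word_count(text: str) -> int:
    """Count whitespace-separated words (an assumed length measure)."""
    return len(text.split())

def precise_match(response: str, lo: int, hi: int) -> bool:
    """PM: the response length falls strictly inside the target range [lo, hi]."""
    return lo <= word_count(response) <= hi

def flexible_match(response: str, lo: int, hi: int, slack: float = 0.1) -> bool:
    """FM: like PM, but with an assumed 10% slack on each side of the range."""
    return lo * (1 - slack) <= word_count(response) <= hi * (1 + slack)

resp = "word " * 55  # dummy 55-word response against a 0-50 word target
print(precise_match(resp, 0, 50))   # → False (55 > 50)
print(flexible_match(resp, 0, 50))  # → True  (55 <= 50 * 1.1)
```

A pair like this explains why the paper reports both numbers: PM rewards exact adherence, while FM credits near misses, so the gap between them shows how often a model is close but not exact.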