PositionID: LLMs can Control Lengths, Copy and Paste with Explicit Positional Awareness
Zekun Wang, Feiyu Duan, Yibo Zhang, Wangchunshu Zhou, Ke Xu, Wenhao Huang, Jie Fu
2024-10-14

Summary
This paper introduces PositionID, a method that improves large language models' (LLMs') control over generated text length and their accuracy on copy-and-paste tasks by making them explicitly aware of their position in the text.
What's the problem?
Large language models can generate impressive text, but they often struggle with tasks that require precise control over the length of the output or copying specific sections. This is largely because they generate text token by token without tracking how much they have already produced, so they lose track of their position and fail to meet length requirements or copy content accurately.
What's the solution?
The authors propose two main methods: PositionID Prompting, which instructs the model to tag each generated unit (such as a word) with an explicit position ID at inference time, and PositionID Fine-Tuning, which trains the model on data annotated this way. Tracking its current position lets the model manage text length directly, and a related variant, PositionID CP Prompting, extends the idea to accurate copy-and-paste operations. The authors also develop benchmarks to evaluate how well these methods control length and copy text.
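The prompting side of the idea can be illustrated with a small sketch. This is not the authors' exact prompt or pipeline, just a hypothetical example of the general pattern: the model is asked to append a running position index to every word so it can "see" its current length, and the tags are stripped from the final answer.

```python
# Illustrative sketch of PositionID-style prompting.
# The prompt wording, the "word<i>" tag format, and the function names are
# assumptions for demonstration, not the paper's exact implementation.
import re

def build_positionid_prompt(instruction: str, target_words: int) -> str:
    """Ask the model to tag each word with its 1-based position."""
    return (
        f"{instruction}\n"
        f"Write exactly {target_words} words. After each word, append its "
        f"position in the form word<i>, e.g. The<1> cat<2> sat<3>. "
        f"Stop once you reach position {target_words}."
    )

def strip_position_ids(tagged_text: str) -> str:
    """Remove the <i> markers to recover the plain response."""
    return re.sub(r"<\d+>", "", tagged_text)

def word_count(text: str) -> int:
    return len(text.split())

# Simulated model output for a 5-word target:
tagged = "The<1> quick<2> brown<3> fox<4> jumps<5>"
plain = strip_position_ids(tagged)
print(plain)             # The quick brown fox jumps
print(word_count(plain)) # 5
```

Because the position IDs are part of the generated text, the model can check its progress against the target at every step rather than estimating length after the fact; fine-tuning on similarly annotated data teaches the same awareness without the inference-time tags.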
Why it matters?
This research is significant because it improves how LLMs handle specific tasks that require precision, making them more useful for applications like automated writing tools, where controlling text length and accuracy is crucial. By enhancing these capabilities, PositionID can lead to better performance in various real-world scenarios.
Abstract
Large Language Models (LLMs) demonstrate impressive capabilities across various domains, including role-playing, creative writing, mathematical reasoning, and coding. Despite these advancements, LLMs still encounter challenges with length control, frequently failing to adhere to specific length constraints due to their token-level operations and insufficient training on data with strict length limitations. We identify this issue as stemming from a lack of positional awareness and propose novel approaches--PositionID Prompting and PositionID Fine-Tuning--to address it. These methods enhance the model's ability to continuously monitor and manage text length during generation. Additionally, we introduce PositionID CP Prompting to enable LLMs to perform copy and paste operations accurately. Furthermore, we develop two benchmarks for evaluating length control and copy-paste abilities. Our experiments demonstrate that our methods significantly improve the model's adherence to length constraints and copy-paste accuracy without compromising response quality.