< Explain other AI papers

IHEval: Evaluating Language Models on Following the Instruction Hierarchy

Zhihan Zhang, Shiyang Li, Zixuan Zhang, Xin Liu, Haoming Jiang, Xianfeng Tang, Yifan Gao, Zheng Li, Haodong Wang, Zhaoxuan Tan, Yichuan Li, Qingyu Yin, Bing Yin, Meng Jiang

2025-02-18

IHEval: Evaluating Language Models on Following the Instruction
  Hierarchy

Summary

This paper talks about IHEval, a new way to test how well AI language models (LMs) understand and follow instructions when given multiple commands of different importance. It's like checking if a smart computer can prioritize what its boss says over what a random person tells it to do.

What's the problem?

AI language models are getting really good at following instructions, but they're not great at figuring out which instructions are more important when they get conflicting commands. This can make the AI behave in ways that aren't safe or consistent, kind of like a student who listens to bad advice from a classmate instead of following the teacher's instructions.

What's the solution?

The researchers created IHEval, a special test with over 3,500 examples across nine different tasks. These tasks check how well AI models handle instructions when they're given in a specific order of importance, from most important (system messages) to least important (tool outputs). They tested popular AI models to see how well they could follow this 'instruction hierarchy,' especially when given conflicting instructions.

Why it matters?

This matters because as AI becomes more common in our daily lives, we need to make sure it can reliably follow the right instructions and ignore potentially harmful or less important ones. The study shows that even the best AI models struggle with this, which means we need to focus on improving how AI understands and prioritizes instructions. This could make AI systems safer and more trustworthy for everyone to use.

Abstract

The instruction hierarchy, which establishes a priority order from system messages to user messages, conversation history, and tool outputs, is essential for ensuring consistent and safe behavior in language models (LMs). Despite its importance, this topic receives limited attention, and there is a lack of comprehensive benchmarks for evaluating models' ability to follow the instruction hierarchy. We bridge this gap by introducing IHEval, a novel benchmark comprising 3,538 examples across nine tasks, covering cases where instructions in different priorities either align or conflict. Our evaluation of popular LMs highlights their struggle to recognize instruction priorities. All evaluated models experience a sharp performance decline when facing conflicting instructions, compared to their original instruction-following performance. Moreover, the most competitive open-source model only achieves 48% accuracy in resolving such conflicts. Our results underscore the need for targeted optimization in the future development of LMs.