Efficient LLM Scheduling by Learning to Rank

Yichao Fu, Siqi Zhu, Runlong Su, Aurick Qiao, Ion Stoica, Hao Zhang

2024-08-29

Summary

This paper introduces a new method for scheduling requests in large language model (LLM) systems to improve efficiency and reduce delays when processing user requests.

What's the problem?

In LLM systems, requests are often handled in the order they arrive (first-come-first-serve), which can lead to delays and inefficiencies, especially when some requests take longer to process than others. This can result in a situation called Head-Of-Line (HOL) blocking, where shorter requests are stuck behind longer ones, reducing overall performance and user satisfaction.
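To make HOL blocking concrete, here is a minimal sketch (with hypothetical job lengths, not numbers from the paper) comparing average completion time under first-come-first-serve and shortest-job-first when one long request arrives ahead of two short ones:

```python
def avg_completion_time(job_lengths):
    """Average completion time when jobs run back-to-back in the given order."""
    t, total = 0.0, 0.0
    for length in job_lengths:
        t += length          # this job finishes at time t
        total += t
    return total / len(job_lengths)


jobs = [100.0, 2.0, 3.0]                  # one long request arrives first

fcfs = avg_completion_time(jobs)          # first-come-first-serve order
sjf = avg_completion_time(sorted(jobs))   # shortest-job-first order

print(f"FCFS average completion time: {fcfs:.1f}s")  # 102.3s: short jobs wait behind the long one
print(f"SJF  average completion time: {sjf:.1f}s")   # 37.3s
```

Under FCFS the two short requests inherit the long request's entire runtime; shortest-job-first avoids this, which is the behavior the paper's scheduler tries to approximate without knowing exact output lengths.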

What's the solution?

The authors propose a scheduling method based on a technique called learning to rank. Instead of trying to predict exactly how long each request's output will be, the method predicts the relative ranking of output lengths within a batch of requests. With these ranks, the system can prioritize requests that are expected to finish sooner, approximating a shortest-job-first schedule. When integrated into a state-of-the-art serving system, the new scheduler delivered 2.8 times lower latency for chatbot serving and 6.5 times higher throughput for synthetic data generation compared to traditional scheduling.
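The sketch below illustrates the general idea of rank-based scheduling. It is not the authors' implementation (that lives at https://github.com/hao-ai-lab/vllm-ltr.git); the names `Request`, `schedule_by_rank`, and `toy_predictor` are hypothetical, and the predictor is a stand-in for the learned ranking model, which only needs to produce scores whose order tracks relative output lengths.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Request:
    prompt: str
    predicted_score: float = 0.0  # higher score = expected longer output


def schedule_by_rank(requests: List[Request],
                     predict_score: Callable[[str], float]) -> List[Request]:
    """Order a batch so requests expected to produce the shortest outputs run
    first, approximating shortest-job-first without exact length predictions."""
    for req in requests:
        req.predicted_score = predict_score(req.prompt)
    return sorted(requests, key=lambda r: r.predicted_score)


def toy_predictor(prompt: str) -> float:
    # Hypothetical stand-in: pretend longer prompts imply longer outputs.
    # A real system would query the learned ranking model here instead.
    return float(len(prompt))


batch = [
    Request("Write a 2000-word essay on distributed systems."),
    Request("What is 2 + 2?"),
    Request("Summarize this paragraph in one sentence."),
]

for req in schedule_by_rank(batch, toy_predictor):
    print(f"{req.predicted_score:5.1f}  {req.prompt}")
```

Only the ordering of the scores matters here, which is why predicting relative ranks is enough even when exact generation lengths cannot be known in advance.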

Why it matters?

This research is important because it enhances the efficiency of LLM systems, making them faster and more responsive. By improving how these systems handle requests, users can receive quicker responses, which is crucial for applications like chatbots and data generation tools. This advancement could lead to better user experiences in various AI applications.

Abstract

In Large Language Model (LLM) inference, the output length of an LLM request is typically regarded as not known a priori. Consequently, most LLM serving systems employ a simple First-come-first-serve (FCFS) scheduling strategy, leading to Head-Of-Line (HOL) blocking and reduced throughput and service quality. In this paper, we reexamine this assumption -- we show that, although predicting the exact generation length of each request is infeasible, it is possible to predict the relative ranks of output lengths in a batch of requests, using learning to rank. The ranking information offers valuable guidance for scheduling requests. Building on this insight, we develop a novel scheduler for LLM inference and serving that can approximate the shortest-job-first (SJF) schedule better than existing approaches. We integrate this scheduler with the state-of-the-art LLM serving system and show significant performance improvement in several important applications: 2.8x lower latency in chatbot serving and 6.5x higher throughput in synthetic data generation. Our code is available at https://github.com/hao-ai-lab/vllm-ltr.git
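As one way to picture the learning-to-rank component, the sketch below trains a small scorer with a generic pairwise (RankNet-style) ranking loss so that requests with longer true output lengths receive higher scores. This is an illustrative assumption, not necessarily the paper's exact model, features, or training objective; see the linked repository for the authors' implementation.

```python
import torch
import torch.nn as nn


class LengthRanker(nn.Module):
    """Scores a prompt embedding; a higher score means a longer expected output."""
    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(x).squeeze(-1)


def pairwise_rank_loss(scores: torch.Tensor, lengths: torch.Tensor) -> torch.Tensor:
    """Encourage score[i] > score[j] whenever true length[i] > length[j]."""
    diff = scores.unsqueeze(1) - scores.unsqueeze(0)            # s_i - s_j
    target = (lengths.unsqueeze(1) > lengths.unsqueeze(0)).float()
    mask = lengths.unsqueeze(1) != lengths.unsqueeze(0)         # ignore tied pairs
    return nn.functional.binary_cross_entropy_with_logits(diff[mask], target[mask])


# Toy batch: random prompt embeddings paired with hypothetical true output lengths.
embeds = torch.randn(8, 32)
lengths = torch.randint(1, 512, (8,)).float()

model = LengthRanker(dim=32)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

opt.zero_grad()
loss = pairwise_rank_loss(model(embeds), lengths)
loss.backward()
opt.step()
```

Once trained, such a scorer is queried only for its ordering over a batch, which is what the scheduler consumes to approximate shortest-job-first.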