Autellix: An Efficient Serving Engine for LLM Agents as General Programs
Michael Luo, Xiaoxiang Shi, Colin Cai, Tianjun Zhang, Justin Wong, Yichuan Wang, Chi Wang, Yanping Huang, Zhifeng Chen, Joseph E. Gonzalez, Ion Stoica
2025-02-20
Summary
This paper introduces Autellix, a new serving system designed to make AI programs that use large language models (LLMs) faster and more efficient. It focuses on optimizing how the LLM calls within these programs are scheduled, reducing waiting times and improving end-to-end performance.
What's the problem?
Current systems for serving LLMs treat each individual request separately, ignoring how requests are connected within a larger program. When a program issues multiple steps that depend on each other, every step can get stuck waiting behind unrelated requests (head-of-line blocking), and these delays accumulate across the whole program. The long cumulative wait times slow the AI down and make it harder to handle complex, multi-step tasks effectively.
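To see why per-call scheduling hurts multi-step programs, here is a toy cost model (illustrative numbers, not from the paper): each step of a program re-enters a shared first-come-first-served queue and waits behind unrelated calls, so queueing delay is paid once per step.

```python
def fcfs_program_latency(num_steps, service_time, background_calls, bg_service):
    """End-to-end latency of one program in a toy FCFS model where each
    of its sequential steps waits behind `background_calls` unrelated
    calls before running. Purely illustrative."""
    latency = 0.0
    for _ in range(num_steps):
        latency += background_calls * bg_service  # queueing delay per step
        latency += service_time                   # the call itself
    return latency

# A 10-step agent, each step 1s, waiting behind 5 unrelated 1s calls:
print(fcfs_program_latency(10, 1.0, 5, 1.0))  # 60.0 -- waiting dominates the 10s of real work
```

In this toy model, 50 of the 60 seconds are pure queueing delay, which is the cumulative wait the paper targets.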
What's the solution?
The researchers created Autellix, which treats entire programs, rather than individual requests, as the unit of scheduling. Autellix uses scheduling algorithms that prioritize and preempt LLM calls based on how much work each call's program has already completed, and it keeps related calls together to save computing resources and reduce delays. Experiments showed that Autellix improves program throughput by 4 to 15 times at the same latency compared to state-of-the-art systems.
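One common way to prioritize by a program's previously completed work is least-attained-service scheduling: calls from programs that have consumed the least execution time run first. The sketch below illustrates that idea at the program level; the class and method names are illustrative assumptions, not Autellix's actual implementation.

```python
import heapq
import itertools

class ProgramAwareScheduler:
    """Toy program-level scheduler: each LLM call is prioritized by how
    much service its program has already received (least-attained-service
    first), so fresh or short programs are not blocked behind long ones."""

    def __init__(self):
        self.attained = {}             # program_id -> total service time so far
        self.queue = []                # (attained_service, seq, program_id, call)
        self._seq = itertools.count()  # tie-breaker for stable heap ordering

    def submit(self, program_id, call):
        # Priority is the program's accumulated service at submission time.
        prio = self.attained.get(program_id, 0.0)
        heapq.heappush(self.queue, (prio, next(self._seq), program_id, call))

    def next_call(self):
        """Pop the call whose program has received the least service."""
        if not self.queue:
            return None
        _, _, program_id, call = heapq.heappop(self.queue)
        return program_id, call

    def record_service(self, program_id, seconds):
        """Charge completed execution time back to the program, lowering
        the priority of its future calls."""
        self.attained[program_id] = self.attained.get(program_id, 0.0) + seconds

# Usage: a long-running program "A" yields to a fresh program "B".
s = ProgramAwareScheduler()
s.record_service("A", 30.0)   # A has already consumed 30s of compute
s.submit("A", "A-call-5")
s.submit("B", "B-call-1")
print(s.next_call())          # ('B', 'B-call-1') -- B is scheduled first
```

A real serving engine would additionally preempt in-flight calls and account for distributed (multi-threaded) programs, but the core idea, enriching the scheduler with per-program history, is the same.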
Why it matters?
This matters because it makes AI systems much faster and better at handling complex tasks, like reasoning or solving problems that require multiple steps. By improving efficiency, Autellix can help AI applications work better in real-world scenarios, such as chatbots, automation tools, or even advanced research systems. It also reduces the strain on computing resources, making it easier to scale AI technologies for widespread use.
Abstract
Large language model (LLM) applications are evolving beyond simple chatbots into dynamic, general-purpose agentic programs, which scale LLM calls and output tokens to help AI agents reason, explore, and solve complex tasks. However, existing LLM serving systems ignore dependencies between programs and calls, missing significant opportunities for optimization. Our analysis reveals that programs submitted to LLM serving engines experience long cumulative wait times, primarily due to head-of-line blocking at both the individual LLM request and the program. To address this, we introduce Autellix, an LLM serving system that treats programs as first-class citizens to minimize their end-to-end latencies. Autellix intercepts LLM calls submitted by programs, enriching schedulers with program-level context. We propose two scheduling algorithms, for single-threaded and distributed programs, that preempt and prioritize LLM calls based on their programs' previously completed calls. Our evaluation demonstrates that across diverse LLMs and agentic workloads, Autellix improves throughput of programs by 4-15x at the same latency compared to state-of-the-art systems, such as vLLM.