Janus: Disaggregating Attention and Experts for Scalable MoE Inference
Zhexiang Zhang, Ye Wang, Xiangyu Wang, Yumiao Zhao, Jingzhe Jiang, Qizhen Weng, Shaohuai Shi, Yin Chen, Minchen Yu
2025-12-17
Summary
This paper introduces Janus, a new system designed to make running very large AI models, specifically those using a 'Mixture-of-Experts' approach, much more efficient and scalable.
What's the problem?
Serving these large AI models is hard because they demand enormous computing power and their workloads change constantly. Current systems deploy the whole model as one monolithic unit, giving every part the same resource configuration even though the 'attention' and 'expert' modules have very different needs. This wastes resources and limits how far the system can scale.
What's the solution?
Janus solves this by splitting the model across two separate groups of GPUs: one running the 'attention' modules and one running the 'expert' modules, so each group can be managed and scaled independently. It adds an adaptive two-phase communication scheme to move data between the groups quickly, a lightweight scheduler to spread activated experts evenly across GPUs, and fine-grained resource management that adjusts expert placement as demand shifts. A rough sketch of the split appears below.
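To make the split concrete, here is a minimal, illustrative sketch of a disaggregated serving loop. It is a simplification under assumptions, not Janus's actual implementation: the names (`AttentionPool`, `ExpertPool`, `route`) are hypothetical stand-ins for the paper's GPU sub-clusters, learned router, and two-phase communication scheme.

```python
# Hypothetical sketch of attention/expert disaggregation (not Janus's real code).
# The attention GPUs and expert GPUs are modeled as two independent pools that
# can be scaled separately; tokens are shuttled between them at each MoE layer.
from collections import defaultdict

class AttentionPool:
    """Stands in for the attention GPU sub-cluster."""
    def run_attention(self, tokens):
        # Placeholder: real attention produces a hidden state per token.
        return [(tok_id, f"hidden({tok_id})") for tok_id in tokens]

class ExpertPool:
    """Stands in for the expert GPU sub-cluster."""
    def run_experts(self, routed):
        # routed: expert_id -> list of (tok_id, hidden); each expert processes
        # its batch of tokens independently, then results are gathered.
        out = {}
        for expert_id, batch in routed.items():
            for tok_id, hidden in batch:
                out[tok_id] = f"expert{expert_id}({hidden})"
        return out

def route(hidden_states, num_experts):
    """Toy router: hash each token to an expert (real routers are learned)."""
    routed = defaultdict(list)
    for tok_id, hidden in hidden_states:
        routed[tok_id % num_experts].append((tok_id, hidden))
    return routed

# One MoE layer step: attention pool -> dispatch -> expert pool -> combine.
attn, experts = AttentionPool(), ExpertPool()
hidden = attn.run_attention(tokens=range(8))
outputs = experts.run_experts(route(hidden, num_experts=4))
print(outputs)  # Either pool could be resized without touching the other.
```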
Why it matters?
This work is important because it allows us to serve much larger and more powerful AI models without relying on one massive, uniformly provisioned deployment. By improving efficiency and scalability, Janus makes these advanced AI technologies more accessible and practical for a wider range of applications, and it substantially increases how much work each GPU can do.
Abstract
Large Mixture-of-Experts (MoE) model inference is challenging due to high resource demands and dynamic workloads. Existing solutions often deploy the entire model as a single monolithic unit, which applies a unified resource configuration to both attention and expert modules despite their different requirements, leading to limited scalability and resource inefficiency. In this paper, we propose Janus, a scalable MoE inference system that disaggregates attention and experts on separate GPU sub-clusters, enabling each module to be managed and scaled independently. Janus incorporates three key designs for efficient, disaggregated MoE inference. First, it proposes an adaptive two-phase communication scheme that exploits intra- and inter-node bandwidth hierarchies for low-latency data exchange. Second, motivated by the memory-bound nature of MoE modules, Janus introduces a lightweight scheduler and implements it as a GPU kernel to balance the number of activated experts across GPUs at minimal overhead, thereby reducing inference latency. Third, Janus performs fine-grained resource management to dynamically adjust expert placement and independently scale attention and MoE resources to improve overall efficiency. Evaluation shows Janus achieves up to 3.9× higher per-GPU throughput than state-of-the-art systems while meeting per-token latency requirements.
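The balancing objective behind the abstract's second design can be illustrated with a small greedy sketch. This is an assumption-laden simplification: the paper implements its scheduler as a GPU kernel, while the function below (`balance_experts`, a hypothetical name) only shows the underlying goal of evening out how many activated experts land on each GPU, assuming for illustration that each expert is replicated on a few candidate GPUs so the scheduler has a choice to make.

```python
# Hypothetical greedy sketch of balancing activated experts across GPUs.
# Not Janus's scheduler (which runs as a GPU kernel); this shows only the
# objective: minimize the maximum number of activated experts per GPU,
# assuming each expert has replicas on several candidate GPUs.

def balance_experts(activated, replica_gpus, num_gpus):
    """activated: iterable of expert ids activated in this batch.
    replica_gpus: expert id -> list of GPU ids holding a replica of it."""
    load = [0] * num_gpus          # activated-expert count per GPU
    assignment = {}
    # Place the most constrained experts (fewest replicas) first.
    for expert in sorted(activated, key=lambda e: len(replica_gpus[e])):
        gpu = min(replica_gpus[expert], key=lambda g: load[g])
        assignment[expert] = gpu
        load[gpu] += 1
    return assignment, load

# Example: 6 activated experts, 3 GPUs, each expert on 2 candidate GPUs.
replicas = {0: [0, 1], 1: [1, 2], 2: [0, 2], 3: [0, 1], 4: [1, 2], 5: [0, 2]}
assignment, load = balance_experts(range(6), replicas, num_gpus=3)
print(assignment, load)  # Loads come out even: 2 activated experts per GPU.
```

Because MoE modules are memory-bound, the GPU whose replicas activate the most distinct experts sets the layer's latency, which is why evening out this count (rather than raw token counts) is the quantity the scheduler targets.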