TPLA: Tensor Parallel Latent Attention for Efficient Disaggregated Prefill & Decode Inference
Xiaojuan Tang, Fanxu Meng, Pingzhi Tang, Yuxuan Wang, Di Yin, Xing Sun, Muhan Zhang
2025-08-25
Summary
This paper introduces a new method called Tensor-Parallel Latent Attention (TPLA) to make large language models faster and more efficient, especially when dealing with very long pieces of text.
What's the problem?
Existing methods for reducing memory usage in large language models, like Multi-Head Latent Attention (MLA), become less effective when the work is split across multiple processors (a technique called tensor parallelism). Because every attention head depends on the full compressed cache, each processor must still load the entire thing, defeating the purpose of the compression. As the model is scaled across more processors, MLA's per-device memory advantage over alternatives like Grouped Query Attention shrinks and can even reverse.
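To see the problem concretely, here is a rough back-of-the-envelope comparison of per-device cache size under tensor parallelism. The widths below are assumptions chosen purely for illustration (a DeepSeek-style MLA latent of 512 compressed dimensions plus 64 RoPE dimensions versus a GQA model with 8 KV heads of dimension 128), not the paper's exact configurations:

```python
# Illustrative per-token, per-layer KV-cache footprint (in elements, not bytes).
d_latent = 512 + 64            # assumed MLA latent width: compressed KV + RoPE dims
gqa_kv_heads, d_head = 8, 128  # assumed GQA configuration

for tp in (1, 2, 4, 8):
    mla = d_latent                                 # full latent replicated on every device
    gqa = 2 * max(gqa_kv_heads // tp, 1) * d_head  # KV heads sharded across devices
    print(f"TP={tp}: MLA elems/device={mla}, GQA elems/device={gqa}")
```

At TP=1, MLA's cache is far smaller; by TP=8, GQA's per-device cache has shrunk eightfold while the replicated MLA latent has not shrunk at all. That erosion is exactly what TPLA targets.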
What's the solution?
TPLA solves this by splitting the compressed memory cache *and* each attention head's input dimension into smaller pieces and distributing those pieces across the processors. Each processor runs attention independently on its piece, and the results are combined with a single all-reduce. This unlocks efficient tensor parallelism *while* still keeping the memory savings of MLA. The authors also found that applying simple orthogonal transformations (such as the Hadamard transform or PCA) to the data before splitting it reduces interference between the pieces, keeping accuracy close to the original model.
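The following is a minimal single-process sketch of what one decode step might look like, with each tensor-parallel "device" emulated by a loop iteration and the all-reduce emulated by a running sum. All dimensions are toy values chosen for illustration; a real implementation would shard across devices (e.g., with torch.distributed) and fuse the per-shard attention with an optimized kernel:

```python
import torch

torch.manual_seed(0)

# Assumed toy sizes, for illustration only.
seq, n_heads, d_c, d_model, G = 128, 8, 512, 1024, 4
d_shard = d_c // G

latent = torch.randn(seq, d_c)    # compressed KV cache: one latent vector per token
q = torch.randn(n_heads, d_c)     # "absorbed" queries, already projected into latent space
w_uv = torch.randn(d_c, d_model)  # value up-projection, row-sharded across devices

def tpla_decode_step(q, latent, w_uv, G):
    """Emulate G tensor-parallel devices in a single process.

    'Device' g holds a d_c/G slice of the latent cache, the matching slice
    of every head's query, and the matching row-block of w_uv. Each device
    runs attention independently on its shard; the final all-reduce is
    emulated here by summing the per-shard outputs."""
    out = torch.zeros(q.shape[0], w_uv.shape[1])
    for g in range(G):
        sl = slice(g * d_shard, (g + 1) * d_shard)
        scores = (q[:, sl] @ latent[:, sl].T) / d_shard ** 0.5  # per-shard logits
        probs = scores.softmax(dim=-1)                          # per-shard softmax
        out += probs @ latent[:, sl] @ w_uv[sl]                 # sum == all-reduce
    return out

print(tpla_decode_step(q, latent, w_uv, G).shape)  # torch.Size([8, 1024])
```

Note that each shard applies its own softmax, so the result is not bit-identical to full MLA attention; the pre-slicing orthogonal transform exists precisely to keep this per-shard approximation close to the original.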
Why it matters?
This matters because it allows faster generation over very long contexts without sacrificing accuracy: at a 32,000-token context length, the researchers measured 1.79x and 1.93x decoding speedups for DeepSeek-V3 and Kimi-K2, respectively. It also works drop-in with models already trained using MLA, so no retraining is needed, and it can be combined with other acceleration techniques like FlashAttention-3.
Abstract
Multi-Head Latent Attention (MLA), introduced in DeepSeek-V2, compresses key-value states into a low-rank latent vector, caching only this vector to reduce memory. In tensor parallelism (TP), however, attention heads are computed across multiple devices, and each device must load the full cache, eroding the advantage of MLA over Grouped Query Attention (GQA). We propose Tensor-Parallel Latent Attention (TPLA): a scheme that partitions both the latent representation and each head's input dimension across devices, performs attention independently per shard, and then combines results with an all-reduce. TPLA preserves the benefits of a compressed KV cache while unlocking TP efficiency. Unlike Grouped Latent Attention (GLA), every head in TPLA still leverages the full latent representation, maintaining stronger representational capacity. TPLA is drop-in compatible with models pre-trained using MLA: it supports MLA-style prefilling and enables efficient tensor-parallel decoding without retraining. Applying simple orthogonal transforms -- e.g., the Hadamard transform or PCA -- before TP slicing further mitigates cross-shard interference, yielding minimal accuracy degradation. By reducing the per-device KV cache for DeepSeek-V3 and Kimi-K2, we achieve 1.79x and 1.93x speedups, respectively, at a 32K-token context length while maintaining performance on commonsense and LongBench benchmarks. TPLA can be implemented with FlashAttention-3, enabling practical end-to-end acceleration.
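The orthogonal-transform trick mentioned in the abstract is simple to state in code. Below is a minimal sketch assuming a power-of-two latent width, using the standard Sylvester construction of a Hadamard matrix (the paper also mentions PCA as an alternative); rotating both the latent cache and the absorbed queries with the same orthogonal matrix leaves full dot products unchanged while spreading information evenly across the TP shards:

```python
import torch

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two),
    scaled so that H @ H.T == I."""
    H = torch.ones(1, 1)
    while H.shape[0] < n:
        H = torch.cat([torch.cat([H, H], dim=1),
                       torch.cat([H, -H], dim=1)], dim=0)
    return H / n ** 0.5

d_c = 512
H = hadamard(d_c)
latent = torch.randn(128, d_c)  # toy latent cache
q = torch.randn(8, d_c)         # toy absorbed queries

# Rotate both sides with the same orthogonal matrix before TP slicing.
latent_rot = latent @ H
q_rot = q @ H

# Full dot products are unchanged (H is orthogonal) ...
assert torch.allclose(q @ latent.T, q_rot @ latent_rot.T, atol=1e-3)
# ... but each rotated coordinate now mixes all latent dimensions, so the
# per-shard logits better approximate the full-cache logits after slicing.
```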