Parallel Loop Transformer for Efficient Test-Time Computation Scaling
Bohong Wu, Mengzhao Chen, Xiang Luo, Shen Yan, Qifan Yu, Fan Xia, Tianqi Zhang, Hongrui Zhan, Zheng Zhong, Xun Zhou, Siyuan Qiao, Xingyan Bin
2025-10-30
Summary
This paper introduces the Parallel Loop Transformer (PLT), a new architecture for Large Language Models (LLMs) that aims to make inference faster and more efficient without sacrificing accuracy.
What's the problem?
LLMs are powerful, but they can be slow and expensive to run because of the computing power they need. One attempt to fix this, called 'looped transformers,' reuses the same weights for multiple computational steps to save parameters. However, this creates a bottleneck: each loop has to wait for the previous one to finish, so both the time it takes to get an answer and the memory required grow with every added loop.
What's the solution?
The researchers developed PLT, which tackles this problem with two main ideas. First, instead of running loops one after the other, it computes different loops for different tokens at the same time. Second, it shares the memory (KV cache) from the first loop across all later loops, cutting memory usage, and uses a special gated attention mechanism to combine this shared global knowledge with the local details around each token.
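The gated combination can be pictured with a small sketch. This is not the paper's implementation: it is a minimal single-head, single-query illustration assuming one global attention over a shared first-loop KV cache, one sliding-window attention over the current loop's local KV, and a scalar gate in [0, 1] mixing the two (function names and shapes here are invented for illustration).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gated_swa(q, shared_kv, local_kv, gate, window=2):
    """Mix global attention over the shared first-loop KV cache with
    sliding-window attention over this loop's local KV.
    q: (d,) query; shared_kv/local_kv: pairs of (n, d) arrays; gate in [0, 1]."""
    k_g, v_g = shared_kv
    k_l, v_l = local_kv
    d = q.shape[-1]
    # Global branch: the query attends to every position cached in loop 1.
    attn_global = softmax(q @ k_g.T / np.sqrt(d)) @ v_g
    # Local branch: only the last `window` positions of this loop are visible.
    k_w, v_w = k_l[-window:], v_l[-window:]
    attn_local = softmax(q @ k_w.T / np.sqrt(d)) @ v_w
    # The gate blends shared global context with loop-specific local context.
    return gate * attn_global + (1.0 - gate) * attn_local
```

In the paper the gate would be learned per head rather than passed in as a constant; the point of the sketch is only that later loops reuse the first loop's cache instead of storing their own full-length KV.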
Why it matters?
PLT matters because it enables models with the accuracy of deep, looped transformers that run nearly as fast, and with nearly as little memory, as a standard transformer, making them more practical for latency-sensitive applications like chatbots or quick text analysis.
Abstract
Large Language Models (LLMs) are powerful but often too slow and costly for real-world use during inference. Looped transformers save on parameters by reusing the same weights for multiple computational steps, or "loops." However, this approach has a major flaw: the loops run one after another, causing inference latency and memory requirements to increase with each added loop. This makes them impractical for fast applications. To solve this problem, we introduce the Parallel Loop Transformer (PLT). PLT is a new architecture that delivers the performance benefits of a deep, looped model but with the low latency of a standard, non-looped model. PLT works using two key techniques. First, Cross-Loop Parallelism (CLP) breaks the sequential dependency by computing different loops for different tokens at the same time, all within a single pass. Second, to prevent memory costs from growing, we use an Efficient Representation Enhancement strategy. This method shares the memory (KV cache) from the first loop with all other loops. It then uses a Gated Sliding-Window Attention (G-SWA) to combine this shared global information with local information, maintaining high accuracy. Our experiments show that PLT achieves the high accuracy of a traditional looped model but with almost no extra latency or memory cost compared to a standard transformer.
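One way to read Cross-Loop Parallelism is as a pipelined schedule: at each decode step, loop i runs on a token that entered the pipeline i steps earlier, so every loop does useful work in the same forward pass. The sketch below is an illustrative interpretation of that scheduling idea, not code from the paper; the function name and the exact staggering rule are assumptions.

```python
def clp_schedule(num_tokens, num_loops):
    """Pipelined cross-loop schedule: at decode step s, loop i processes
    token s - i (when that token exists), so all loops run concurrently
    in a single batched pass instead of sequentially per token."""
    steps = []
    for s in range(num_tokens + num_loops - 1):
        # Each entry is a (token_index, loop_index) pair active at step s.
        batch = [(s - i, i) for i in range(num_loops) if 0 <= s - i < num_tokens]
        steps.append(batch)
    return steps
```

Under this reading, token t finishes its last loop at step t + num_loops - 1, but a new step completes every iteration once the pipeline is full, which is why the sequential-loop latency penalty disappears.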