Glyph: Scaling Context Windows via Visual-Text Compression
Jiale Cheng, Yusen Liu, Xinyu Zhang, Yulin Fei, Wenyi Hong, Ruiliang Lyu, Weihan Wang, Zhe Su, Xiaotao Gu, Xiao Liu, Yushi Bai, Jie Tang, Hongning Wang, Minlie Huang
2025-10-21
Summary
This paper introduces Glyph, a new way to handle very long pieces of text with large language models. Instead of processing millions of tokens directly, it renders the text into images and feeds them to vision-language models, which are designed to understand both images and text.
What's the problem?
Large language models are getting better at understanding long documents, code, and complex problems, but processing extremely long inputs – on the order of a million tokens – requires a huge amount of computing power and memory. This makes it difficult and expensive to actually use these models on very long texts.
What's the solution?
The researchers developed Glyph, which takes a long text and renders it into images. These images are then fed into a vision-language model, which is designed to understand both images and text. They also used an LLM-driven genetic search to automatically find the best rendering configuration, balancing how much the text is compressed against how accurately the meaning is preserved. This approach significantly reduces the number of tokens the model needs to process.
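To make the rendering step concrete, here is a minimal sketch of turning a text chunk into a page image using Pillow. The page size, font handling, and wrapping width are illustrative assumptions, not the paper's tuned configuration, which is found by the genetic search described above.

```python
# Minimal sketch: render a chunk of long text onto a page image so a
# vision-language model can consume it as visual tokens.
# All rendering parameters here are illustrative assumptions.
import textwrap
from PIL import Image, ImageDraw, ImageFont

def render_text_to_image(text, page_width=800, page_height=1000,
                         font_size=14, margin=20, chars_per_line=90):
    """Render text onto a white page image, wrapping long lines."""
    img = Image.new("RGB", (page_width, page_height), "white")
    draw = ImageDraw.Draw(img)
    # Placeholder font; a real pipeline would load a specific TTF at font_size.
    font = ImageFont.load_default()
    y = margin
    line_height = font_size + 4
    for line in textwrap.wrap(text, width=chars_per_line):
        if y + line_height > page_height - margin:
            break  # page full; a full pipeline would start a new page
        draw.text((margin, y), line, fill="black", font=font)
        y += line_height
    return img

page = render_text_to_image("A very long document ... " * 100)
```

A single such page stands in for many hundreds of text tokens, which is where the compression comes from.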
Why it matters?
Glyph makes it possible to work with extremely long texts using existing models without needing massive amounts of computing resources. It speeds up processing and training, and even allows smaller models to handle tasks that previously required much larger ones. Plus, converting text to images can actually help with other tasks that involve understanding both text and visuals, like analyzing documents.
Abstract
Large language models (LLMs) increasingly rely on long-context modeling for tasks such as document understanding, code analysis, and multi-step reasoning. However, scaling context windows to the million-token level brings prohibitive computational and memory costs, limiting the practicality of long-context LLMs. In this work, we take a different perspective, visual context scaling, to tackle this challenge. Instead of extending token-based sequences, we propose Glyph, a framework that renders long texts into images and processes them with vision-language models (VLMs). This approach substantially compresses textual input while preserving semantic information, and we further design an LLM-driven genetic search to identify optimal visual rendering configurations for balancing accuracy and compression. Through extensive experiments, we demonstrate that our method achieves 3-4x token compression while maintaining accuracy comparable to leading LLMs such as Qwen3-8B on various long-context benchmarks. This compression also leads to around 4x faster prefilling and decoding, and approximately 2x faster SFT training. Furthermore, under extreme compression, a 128K-context VLM could scale to handle 1M-token-level text tasks. In addition, the rendered text data benefits real-world multimodal tasks, such as document understanding. Our code and model are released at https://github.com/thu-coai/Glyph.
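The genetic search the abstract mentions can be sketched as a simple evolutionary loop over rendering configurations. In this toy version the fitness function is a stand-in trade-off between compression and legibility; in the paper an LLM drives the search and candidates are scored on real long-context benchmarks. The parameter names and all numbers below are illustrative assumptions.

```python
# Toy sketch of a genetic search over rendering configurations.
# The fitness function is a stand-in: in the actual framework an LLM
# proposes/evaluates configurations against benchmark accuracy.
import random

random.seed(0)

def random_config():
    # Assumed rendering parameters; the real search space is richer.
    return {
        "font_size": random.choice([8, 10, 12, 14]),
        "dpi": random.choice([72, 96, 120]),
        "line_spacing": random.choice([1.0, 1.2, 1.5]),
    }

def fitness(cfg):
    # Stand-in objective: smaller, denser text compresses more but is
    # harder for the VLM to read; a real run measures task accuracy.
    compression = 1.0 / (cfg["font_size"] * cfg["line_spacing"])
    legibility = cfg["font_size"] * cfg["dpi"] / 1000.0
    return 10 * compression + legibility

def mutate(cfg):
    child = dict(cfg)
    key = random.choice(list(child))
    child[key] = random_config()[key]
    return child

def genetic_search(generations=20, pop_size=8, keep=4):
    pop = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:keep]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - keep)]
    return max(pop, key=fitness)

best = genetic_search()
print(best)
```

The loop keeps the highest-scoring configurations each generation and fills the rest of the population with mutated copies, converging toward rendering settings that balance the two competing objectives.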