See the Text: From Tokenization to Visual Reading
Ling Xing, Alex Jinpeng Wang, Rui Yan, Hongyu Qu, Zechao Li, Jinhui Tang
2025-10-23
Summary
This paper introduces a new way for computers to 'read' text, moving away from how current large language models (LLMs) process language and towards a method that mimics how humans visually recognize words.
What's the problem?
Current LLMs break text into small pieces called 'subwords' to process it. This works well for languages with lots of digital resources, like English, but for low-resource languages the tokenizer over-segments words into many linguistically meaningless fragments. This inflates the amount of computation needed and makes it harder for the model to grasp the language's structure.
What's the solution?
The researchers developed a method called SeeTok that renders text as images. It then feeds these images to pretrained multimodal LLMs that can already 'read' them, reusing the optical character recognition (OCR) and text-vision alignment abilities those models learned during large-scale multimodal training. This avoids breaking text into small, meaningless pieces: essentially, the computer 'sees' the word instead of analyzing its parts.
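The efficiency argument can be made concrete with a toy comparison. The sketch below contrasts a greedy longest-match subword segmenter (a stand-in for BPE; real tokenizers differ in detail) with an estimate of visual tokens for the same string rendered as an image. The vocabulary, the 8 px-per-character rendering width, and the 16 px patch size are all illustrative assumptions, not SeeTok's actual settings:

```python
import math

def greedy_subword_count(text, vocab, max_piece=8):
    """Count pieces from greedy longest-match segmentation over a toy
    vocabulary (a simplified stand-in for BPE-style tokenization)."""
    i, pieces = 0, 0
    while i < len(text):
        for j in range(min(len(text), i + max_piece), i, -1):
            # Take the longest vocabulary match, else a 1-char fallback.
            if text[i:j] in vocab or j == i + 1:
                pieces += 1
                i = j
                break
    return pieces

def visual_token_count(text, px_per_char=8, patch_px=16):
    """Estimate visual tokens if the text were rendered at roughly
    px_per_char pixels per character and encoded with patch_px-wide
    patches (hypothetical numbers for illustration)."""
    return math.ceil(len(text) * px_per_char / patch_px)

# A toy vocabulary that covers the English string well but not the
# low-resource one, mimicking over-segmentation.
vocab = {"read", "ing", "the", "text", " "}

print(greedy_subword_count("reading the text", vocab))   # → 6 pieces
print(greedy_subword_count("okusoma ebbaluwa", vocab))   # → 16 (character level)
print(visual_token_count("okusoma ebbaluwa"))            # → 8 visual tokens
```

The point of the sketch: subword cost depends on vocabulary coverage and so explodes for poorly covered languages, while visual-token cost grows only with the rendered length of the text, which is the intuition behind the paper's reported token and FLOP savings.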
Why it matters?
SeeTok is significant because it needs far fewer tokens and far less computation (the paper reports about 4.43 times fewer tokens and a 70.5% reduction in FLOPs), making it faster and more efficient, especially for less common languages. It also makes the model more resilient to typographic errors like typos and better at handling different scripts and writing styles. Ultimately, this research represents a step towards AI that understands language more like humans do, by starting from the visual form of words.
Abstract
People see text. Humans read by recognizing words as visual objects, including their shapes, layouts, and patterns, before connecting them to meaning, which enables us to handle typos, distorted fonts, and various scripts effectively. Modern large language models (LLMs), however, rely on subword tokenization, fragmenting text into pieces from a fixed vocabulary. While effective for high-resource languages, this approach over-segments low-resource languages, yielding long, linguistically meaningless sequences and inflating computation. In this work, we challenge this entrenched paradigm and move toward a vision-centric alternative. Our method, SeeTok, renders text as images (visual-text) and leverages pretrained multimodal LLMs to interpret them, reusing strong OCR and text-vision alignment abilities learned from large-scale multimodal training. Across three different language tasks, SeeTok matches or surpasses subword tokenizers while requiring 4.43 times fewer tokens and reducing FLOPs by 70.5%, with additional gains in cross-lingual generalization, robustness to typographic noise, and linguistic hierarchy. SeeTok signals a shift from symbolic tokenization to human-like visual reading, and takes a step toward more natural and cognitively inspired language models.