PixelWorld: Towards Perceiving Everything as Pixels

Zhiheng Lyu, Xueguang Ma, Wenhu Chen

2025-02-03

Summary

This paper explores a new way to make AI understand different types of information (like text, images, and code) by turning everything into pictures. The researchers created an evaluation suite called PixelWorld to test how well AI models can handle this approach.

What's the problem?

Current AI systems treat words and images differently: text is broken into tokens while images are processed as pixels. That isn't how humans perceive things. As AI increasingly powers agents and robots that see the world through cameras, it needs to take in information more like we do. The problem is that AI doesn't have a unified way to perceive all types of information, which can limit how naturally it interacts with the world.

What's the solution?

The researchers propose an idea called 'Perceive Everything as Pixels' (PEAP): instead of feeding text, tables, or code into a model as tokens, every input is rendered as an image and processed as pixels. They built PixelWorld, an evaluation suite that converts existing benchmarks into this pixel form, to see how well different AI models handle this way of processing information. They tested both large and small models to compare the pixel-based approach against the usual token-based one; a rough sketch of what "rendering text as pixels" means is shown below.
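
To make the PEAP idea concrete, here is a minimal sketch of how a piece of text could be rendered into an image before being shown to a vision-language model. The font, layout, and the commented-out model call are illustrative assumptions, not the paper's actual pipeline.

```python
from PIL import Image, ImageDraw, ImageFont

def render_as_pixels(text, width=768, margin=10, line_height=20):
    """Draw plain text (a question, a table, a code snippet) onto a white
    image so a vision-language model must read it from pixels alone."""
    font = ImageFont.load_default()  # a nicer font file could be loaded instead
    lines = text.split("\n")
    height = margin * 2 + line_height * len(lines)
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    for i, line in enumerate(lines):
        draw.text((margin, margin + i * line_height), line, fill="black", font=font)
    return img

# The same question could then be sent two ways (model call is a placeholder):
question = "Q: If a train travels 60 km in 45 minutes, what is its speed in km/h?"
pixel_question = render_as_pixels(question)
# answer_text  = model.generate(text=question)          # usual token-based input
# answer_pixel = model.generate(image=pixel_question)   # PEAP-style pixel input
```

The point of the comparison is that nothing about the question changes except how the model perceives it, which is exactly what PixelWorld measures across its benchmarks.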

Why it matters?

This research matters because it could lead to AI that understands the world more like humans do. If successful, it could make AI better at tasks that mix different types of information, like reading a textbook page with both words and diagrams, and it could help robots interact more naturally with their surroundings. The results show that today's largest models handle pixel input reasonably well on many tasks, but reasoning and coding performance still drops compared to reading plain text, so there is clear room for improvement. This opens up new possibilities for making AI smarter and more versatile in the future.

Abstract

Existing foundation models typically process visual input as pixels and textual input as tokens, a paradigm that contrasts with human perception, where both modalities are processed in a unified manner. With the rise of embodied and agentic AI, where inputs primarily come from camera pixels, the need for a unified perception framework becomes increasingly evident. In this paper, we propose to unify all modalities (text, tables, code, diagrams, images, etc.) as pixel inputs, i.e. "Perceive Everything as Pixels" (PEAP). We introduce PixelWorld, a novel evaluation suite that unifies all the mentioned modalities into pixel space to gauge the existing models' performance. Our findings show that (1) PEAP outperforms the token-based baseline on multimodal datasets, benefiting from unified input for better disambiguation, (2) reasoning and coding capabilities decline significantly across all models when processing pixel-based input, underscoring the need to enhance foundation models' perceptual abilities, (3) larger models can maintain strong performance on non-reasoning tasks under PEAP, while smaller models like Phi-3.5-V suffer significant performance degradation, (4) the attention pattern of PEAP is highly aligned with that of text token input, (5) PEAP can be accelerated significantly by exploiting spatial sparsity. We conclude that existing frontier models are competent at pixel perception; however, there is still headroom for improvement. Our code and dataset will be released upon acceptance.
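
The last finding, speeding up PEAP by exploiting spatial sparsity, builds on the observation that rendered documents are mostly blank. Below is a minimal sketch of that general idea, not the paper's actual method: split the image into patches and keep only the ones with visible content, so the model has fewer visual tokens to process. The patch size, threshold, and function name are illustrative assumptions.

```python
import numpy as np

def drop_blank_patches(image, patch=16, threshold=1.0):
    """Split a grayscale image (H x W numpy array) into non-overlapping patches
    and keep only those with visible content. Rendered documents are mostly
    white space, so skipping near-uniform patches reduces the number of
    visual tokens a model has to attend over."""
    h, w = image.shape
    kept_patches, positions = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = image[y:y + patch, x:x + patch]
            if p.std() > threshold:  # near-constant patches carry no text or lines
                kept_patches.append(p)
                positions.append((y, x))
    return kept_patches, positions

# Example: a mostly-white page keeps only the patches where something is drawn.
page = np.full((224, 224), 255, dtype=np.uint8)
page[40:56, 16:200] = 0  # pretend this dark band is a line of rendered text
patches, coords = drop_blank_patches(page)
print(f"kept {len(patches)} of {(224 // 16) ** 2} patches")
```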