
Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution

Zuyan Liu, Yuhao Dong, Ziwei Liu, Winston Hu, Jiwen Lu, Yongming Rao

2024-09-20

Summary

This paper introduces Oryx, a new multimodal model designed to understand visual data like images and videos at arbitrary resolutions and lengths, instead of forcing every input into one fixed size.

What's the problem?

Current models usually resize all visual inputs to a fixed resolution and produce roughly the same number of tokens for every input, which is inefficient and often ineffective. This means a tiny icon and an hours-long video are processed the same way, ignoring their very different characteristics: long videos waste computation, while detailed images lose information.

What's the solution?

Oryx addresses this issue with two main innovations: a pre-trained visual encoder called OryxViT that can encode images at any native resolution into representations the language model can use, and a dynamic compressor that reduces the number of visual tokens by anywhere from 1x to 16x depending on what the task needs. This allows Oryx to process long videos with heavy compression while keeping full detail for tasks like document understanding that need native resolution and no compression. The model also benefits from improved data curation and specialized training on long-context retrieval and spatial-aware data, which help it understand images, videos, and 3D scenes together.
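To make the on-demand compression idea concrete, here is a minimal sketch (not the actual Oryx implementation; the class name, the mean-pooling scheme, and the projection layer are illustrative assumptions) of a compressor whose ratio is chosen per input:

```python
# Illustrative sketch of on-demand visual-token compression, assuming a
# simple mean-pooling compressor; not the Oryx codebase.
import torch
import torch.nn as nn


class DynamicTokenCompressor(nn.Module):
    """Pools visual tokens by a caller-chosen ratio before the LLM."""

    def __init__(self, dim: int):
        super().__init__()
        # Hypothetical projection into the LLM embedding space.
        self.proj = nn.Linear(dim, dim)

    def forward(self, tokens: torch.Tensor, ratio: int = 1) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim); ratio is one of {1, 2, 4, 8, 16}
        if ratio > 1:
            b, n, d = tokens.shape
            pad = (-n) % ratio  # pad so the sequence splits into whole groups
            if pad:
                tokens = torch.cat([tokens, tokens.new_zeros(b, pad, d)], dim=1)
            # Average every group of `ratio` consecutive tokens into one token.
            tokens = tokens.view(b, -1, ratio, d).mean(dim=2)
        return self.proj(tokens)


compressor = DynamicTokenCompressor(dim=1024)
video_tokens = torch.randn(1, 4096, 1024)   # long video: many tokens
doc_tokens = torch.randn(1, 1024, 1024)     # document image: keep detail
print(compressor(video_tokens, ratio=16).shape)  # torch.Size([1, 256, 1024])
print(compressor(doc_tokens, ratio=1).shape)     # torch.Size([1, 1024, 1024])
```

The point is simply that the token budget becomes a per-request knob: the same encoder output can feed the language model at 1x for a document page or at 16x for hours of video.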

Why it matters?

This research is significant because it enhances the ability of AI systems to interpret a wide range of visual inputs more effectively. By allowing for flexible processing of different types of visual data, Oryx can improve applications in fields like video analysis, content creation, and virtual reality, making AI more useful in everyday tasks.

Abstract

Visual data comes in various forms, ranging from small icons of just a few pixels to long videos spanning hours. Existing multi-modal LLMs usually standardize these diverse visual inputs to a fixed resolution for visual encoders and yield similar numbers of tokens for LLMs. This approach is non-optimal for multimodal understanding and inefficient for processing inputs with long and short visual contents. To solve the problem, we propose Oryx, a unified multimodal architecture for the spatial-temporal understanding of images, videos, and multi-view 3D scenes. Oryx offers an on-demand solution to seamlessly and efficiently process visual inputs with arbitrary spatial sizes and temporal lengths through two core innovations: 1) a pre-trained OryxViT model that can encode images at any resolution into LLM-friendly visual representations; 2) a dynamic compressor module that supports 1x to 16x compression on visual tokens by request. These design features enable Oryx to accommodate extremely long visual contexts, such as videos, with lower resolution and high compression while maintaining high recognition precision for tasks like document understanding with native resolution and no compression. Beyond the architectural improvements, enhanced data curation and specialized training on long-context retrieval and spatial-aware data help Oryx achieve strong capabilities in image, video, and 3D multimodal understanding simultaneously. Our work is open-sourced at https://github.com/Oryx-mllm/Oryx.
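As a brief illustration of what "arbitrary resolution" buys in practice, here is a small hedged example (the 14-pixel patch size and the rounding rule are assumptions for illustration, not details from the paper) of how the visual token count can follow the native image size instead of a fixed resize:

```python
# Token count scales with native resolution under an assumed 14x14 patching.
def num_visual_tokens(height: int, width: int, patch: int = 14) -> int:
    # Round each side up to a whole number of patches.
    return ((height + patch - 1) // patch) * ((width + patch - 1) // patch)

print(num_visual_tokens(28, 28))      # small icon      -> 4 tokens
print(num_visual_tokens(1344, 1008))  # document page   -> 96 * 72 = 6912 tokens
```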