OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents

Zihao Wang, Shaofei Cai, Zhancun Mu, Haowei Lin, Ceyao Zhang, Xuejie Liu, Qing Li, Anji Liu, Xiaojian Ma, Yitao Liang

2024-07-02

Summary

This paper introduces OmniJARVIS, a new AI model that helps agents in open-world environments like Minecraft understand and follow instructions. It combines visual information, language, and actions into a single unified system, which improves how these agents reason about and carry out tasks.

What's the problem?

Earlier agents typically did one of two things: they passed text goals to a separate low-level controller, or they tried to produce control commands directly. Either way, it was hard to get both strong reasoning and efficient decision-making in the same system, so these agents struggled to connect what they see with what they need to do when following instructions in dynamic, open-ended environments.

What's the solution?

To solve this problem, the authors developed OmniJARVIS, which represents vision, language, and actions with one shared set of tokens. A behavior encoder compresses short stretches of gameplay (observations and actions) into discrete behavior tokens, and a policy decoder translates those tokens back into low-level actions in the game. Task instructions, memories, thoughts, observations, textual responses, and behavior tokens are then packed into a single sequence and modeled with an autoregressive transformer, as sketched below. Because the behavior tokens carry meaning, OmniJARVIS can reason through problems, plan actions, answer questions, and perform tasks effectively in Minecraft.
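To make the packing idea concrete, here is a minimal Python sketch (not the authors' code) of how one round of interaction might be flattened into a single token sequence. The special markers, the tokenizer interface, and the function name are all illustrative assumptions.

```python
# Hedged sketch: packing one round of multimodal interaction into a flat token
# sequence. The marker strings and the tokenizer/encoder interfaces are hypothetical.

def pack_interaction(tokenizer, instruction, memory, thought,
                     observation_tokens, behavior_tokens):
    """Return one flat list of token ids for a single interaction round.

    instruction, memory, thought: plain text, encoded with the language tokenizer.
    observation_tokens: tokens produced by a visual encoder for the current frames.
    behavior_tokens: discrete tokens from the behavior encoder (added to the
    language model's vocabulary), later consumed by the policy decoder.
    """
    sequence = []
    sequence += tokenizer.encode("[INSTRUCTION] " + instruction)
    sequence += tokenizer.encode("[MEMORY] " + memory)
    sequence += tokenizer.encode("[THOUGHT] " + thought)
    sequence += observation_tokens   # visual tokens for the current observation
    sequence += behavior_tokens      # e.g. <|behavior_17|>, <|behavior_203|>, ...
    return sequence
```

In training, many such rounds would be concatenated into long sequences so the autoregressive transformer can learn to predict thoughts, responses, and behavior tokens from everything that came before.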

Why it matters?

This research is important because it enhances the capabilities of AI agents in understanding and interacting with their environments. By integrating vision, language, and actions into one system, OmniJARVIS can perform better in real-world applications where flexibility and understanding are crucial. This advancement could lead to more intelligent AI systems that can assist in various fields such as robotics, gaming, and virtual assistants.

Abstract

We present OmniJARVIS, a novel Vision-Language-Action (VLA) model for open-world instruction-following agents in open-world Minecraft. Compared to prior works that either emit textual goals to separate controllers or produce the control command directly, OmniJARVIS seeks a different path to ensure both strong reasoning and efficient decision-making capabilities via unified tokenization of multimodal interaction data. First, we introduce a self-supervised approach to learn a behavior encoder that produces discretized tokens for behavior trajectories τ = {o_0, a_0, ...} and an imitation learning (IL) policy decoder conditioned on these tokens. These additional behavior tokens will be augmented to the vocabulary of pretrained Multimodal Language Models (MLMs). With this encoder, we then pack long-term multimodal interactions involving task instructions, memories, thoughts, observations, textual responses, behavior trajectories, etc. into unified token sequences and model them with autoregressive transformers. Thanks to the semantically meaningful behavior tokens, the resulting VLA model, OmniJARVIS, can reason (by producing chain-of-thoughts), plan, answer questions, and act (by producing behavior tokens for the IL policy decoder). OmniJARVIS demonstrates excellent performances on a comprehensive collection of atomic, programmatic, and open-ended tasks in open-world Minecraft. Our analysis further unveils the crucial design principles in interaction data formation, unified tokenization, and its scaling potentials.
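The abstract implies a simple inference loop: the autoregressive VLA model alternates between producing text (chain-of-thought, answers) and behavior tokens, and the imitation-learning policy decoder turns each chunk of behavior tokens into low-level actions in the environment. The sketch below illustrates that loop under assumed interfaces; every class and method name (vla_model.generate, policy_decoder.decode, the env API) is hypothetical.

```python
# Hedged sketch of the inference loop described in the abstract.
# All objects and method signatures here are illustrative assumptions.

def run_agent(vla_model, policy_decoder, env, instruction, max_rounds=50):
    observation = env.reset()
    context = [("instruction", instruction)]
    for _ in range(max_rounds):
        context.append(("observation", observation))
        # The VLA model reasons in text, then emits a chunk of behavior tokens.
        thought, behavior_tokens = vla_model.generate(context)
        context.append(("thought", thought))
        context.append(("behavior", behavior_tokens))
        # The IL policy decoder, conditioned on the behavior tokens, rolls out
        # low-level actions (keyboard/mouse in Minecraft) for a short horizon.
        for action in policy_decoder.decode(behavior_tokens, observation):
            observation, done = env.step(action)
            if done:
                return True
    return False
```

The design choice this illustrates is the division of labor: behavior tokens let the language model plan at a coarse timescale while the policy decoder handles fine-grained control, which is how the paper aims to keep both strong reasoning and efficient decision-making.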