
JARVIS-VLA: Post-Training Large-Scale Vision Language Models to Play Visual Games with Keyboards and Mouse

Muyao Li, Zihao Wang, Kaichen He, Xiaojian Ma, Yitao Liang

2025-03-21


Summary

This paper explores how to make AI agents better at playing video games by teaching a vision-language model to act on what it sees and to follow instructions using only a keyboard and mouse.

What's the problem?

AI models that understand both images and language are strong at perception, but prior work trained them to act only by fine-tuning on action data, leaving the underlying model's world knowledge, visual recognition, and spatial grounding unimproved. As a result, these models struggle to make good decisions in open-world game environments.

What's the solution?

The researchers introduce Act from Visual Language Post-Training: before fine-tuning on actions, they further refine the base vision-language model with visual and linguistic tasks in a self-supervised manner. The resulting JARVIS-VLA models can follow human instructions in Minecraft across more than 1,000 atomic tasks, such as crafting, smelting, cooking, mining, and killing.
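To make the agent's interface concrete, here is a minimal sketch of how a VLA agent can turn a screenshot and an instruction into keyboard and mouse events. All names (the action vocabulary, `decode_actions`, `stub_policy`) are illustrative assumptions, not the paper's actual tokenizer or model; a real agent would run a vision-language model forward pass where the stub is.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical discrete action vocabulary: each token the model emits maps to
# one low-level keyboard or mouse event. These mappings are illustrative only,
# not the paper's real action tokenizer.
ACTION_VOCAB = {
    0: ("key", "w"),               # move forward
    1: ("key", "space"),           # jump
    2: ("mouse_button", "left"),   # attack / mine
    3: ("mouse_move", (10, 0)),    # turn camera right
    4: ("mouse_move", (-10, 0)),   # turn camera left
}

@dataclass
class Step:
    kind: str       # "key", "mouse_button", or "mouse_move"
    payload: object

def decode_actions(token_ids: List[int]) -> List[Step]:
    """Map a sequence of model-emitted action tokens to input events."""
    return [Step(*ACTION_VOCAB[t]) for t in token_ids if t in ACTION_VOCAB]

def stub_policy(observation: str, instruction: str) -> List[int]:
    """Stand-in for the VLA model: a real agent would encode the game
    screenshot plus the instruction and sample action tokens from the VLM."""
    if "mine" in instruction:
        return [0, 2]   # walk forward, then left-click to mine
    return [1]          # otherwise just jump

# One step of the control loop: screenshot + instruction in, events out.
events = decode_actions(stub_policy("frame_0001.png", "mine the log"))
for e in events:
    print(e.kind, e.payload)
```

The key design point this sketch mirrors is that the agent's output space is the same keyboard-and-mouse interface a human uses, so no game-specific API is needed.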

Why it matters?

This work matters because it can lead to AI that is better at interacting with complex environments and performing tasks based on what it sees and is told.

Abstract

Recently, action-based decision-making in open-world environments has gained significant attention. Visual Language Action (VLA) models, pretrained on large-scale web datasets, have shown promise in decision-making tasks. However, previous work has primarily focused on action post-training, often neglecting enhancements to the foundational model itself. In response, we introduce a novel approach, Act from Visual Language Post-Training, which refines Visual Language Models (VLMs) through visual and linguistic guidance in a self-supervised manner. This enhancement improves the models' capabilities in world knowledge, visual recognition, and spatial grounding in open-world environments. Following the above post-training paradigms, we obtain the first VLA models in Minecraft that can follow human instructions on over 1k different atomic tasks, including crafting, smelting, cooking, mining, and killing. Our experiments demonstrate that post-training on non-trajectory tasks leads to a significant 40% improvement over the best agent baseline on a diverse set of atomic tasks. Furthermore, we demonstrate that our approach surpasses traditional imitation learning-based policies in Minecraft, achieving state-of-the-art performance. We have open-sourced the code, models, and datasets to foster further research. The project page can be found at https://craftjarvis.github.io/JarvisVLA.