Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey
Liang Chen, Zekun Wang, Shuhuai Ren, Lei Li, Haozhe Zhao, Yunshui Li, Zefan Cai, Hongcheng Guo, Lei Zhang, Yizhe Xiong, Yichi Zhang, Ruoyu Wu, Qingxiu Dong, Ge Zhang, Jian Yang, Lingwei Meng, Shujie Hu, Yulong Chen, Junyang Lin, Shuai Bai, Andreas Vlachos, Xu Tan
2024-12-30

Summary
This paper surveys Next Token Prediction (NTP), a training approach that lets machines both understand and generate different types of data, such as text, images, and audio, by converting everything into tokens and predicting what comes next in the sequence.
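At its core, the idea can be written as the standard next-token-prediction objective (the notation below is included for illustration and is not taken from the paper): the model assigns a probability to each token given the tokens before it and is trained to make those probabilities as high as possible across the whole sequence.

```latex
% Standard autoregressive factorization and training loss for a token
% sequence x_1, ..., x_T (illustrative notation, not the survey's own).
\[
p_\theta(x_{1:T}) = \prod_{t=1}^{T} p_\theta\!\left(x_t \mid x_{<t}\right),
\qquad
\mathcal{L}_{\mathrm{NTP}}(\theta) = -\sum_{t=1}^{T} \log p_\theta\!\left(x_t \mid x_{<t}\right).
\]
```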
What's the problem?
While machine learning has made great strides in understanding language, most models have focused on text alone. However, many real-world applications require understanding multiple types of data at once (like combining text and images). Current methods often struggle to effectively process this multimodal information, which limits their usefulness in tasks that require both understanding and generating content.
What's the solution?
The authors present a comprehensive survey of how NTP can be applied to multimodal tasks. They introduce a new classification system (taxonomy) that covers how various types of data are converted into tokens (small units of information), the architectures of models that use NTP, how different tasks can be represented in a unified way, the datasets and evaluations used, and the open challenges researchers face. This framework aims to help researchers develop systems that can handle multiple data types effectively.
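As a rough sketch of the tokenization idea above (the vocabulary sizes, special tokens, and stand-in tokenizer functions here are hypothetical placeholders, not the survey's actual methods), the example below shows how text and an image might be mapped into one shared token sequence and then turned into next-token-prediction training pairs:

```python
# Illustrative sketch (not the survey's code): text and image data are mapped
# into one flat token sequence over a shared vocabulary, and training pairs are
# formed by predicting each token from the tokens before it.

TEXT_VOCAB_SIZE = 50_000        # hypothetical text vocabulary size
IMAGE_CODEBOOK_SIZE = 8_192     # hypothetical discrete image codebook (e.g. from a VQ-style tokenizer)
BOI, EOI = 58_192, 58_193       # hypothetical begin/end-of-image special tokens

def tokenize_text(text: str) -> list[int]:
    # Stand-in for a real subword tokenizer: hash each word into the text range.
    return [hash(word) % TEXT_VOCAB_SIZE for word in text.split()]

def tokenize_image(pixels: list[list[int]]) -> list[int]:
    # Stand-in for a learned visual tokenizer: map each "patch" (here, a row of
    # the toy image) to a discrete code, offset into the image token range.
    return [TEXT_VOCAB_SIZE + (sum(row) % IMAGE_CODEBOOK_SIZE) for row in pixels]

def build_sequence(text: str, image: list[list[int]]) -> list[int]:
    # Interleave modalities into one sequence, delimited by special tokens.
    return tokenize_text(text) + [BOI] + tokenize_image(image) + [EOI]

def next_token_pairs(tokens: list[int]) -> list[tuple[list[int], int]]:
    # Next-token prediction: each prefix is an input, the following token is the target.
    return [(tokens[:t], tokens[t]) for t in range(1, len(tokens))]

if __name__ == "__main__":
    toy_image = [[0, 1, 2], [3, 4, 5]]          # a tiny 2x3 "image"
    seq = build_sequence("a cat on a mat", toy_image)
    for prefix, target in next_token_pairs(seq)[:3]:
        print(f"context={prefix} -> predict {target}")
```

In a real system the hash-based text tokenizer and row-sum image codes would be replaced by a trained subword tokenizer and a learned visual tokenizer, but the resulting sequence is consumed by the model in the same way: predict the next token given the context.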
Why it matters?
This research is important because it lays the groundwork for advancing multimodal intelligence, which combines different forms of data. By improving how machines understand and generate content across various modalities, we can create more powerful AI applications that are capable of performing complex tasks in areas like healthcare, education, and entertainment.
Abstract
Building on the foundations of language modeling in natural language processing, Next Token Prediction (NTP) has evolved into a versatile training objective for machine learning tasks across various modalities, achieving considerable success. As Large Language Models (LLMs) have advanced to unify understanding and generation tasks within the textual modality, recent research has shown that tasks from different modalities can also be effectively encapsulated within the NTP framework, transforming multimodal information into tokens and predicting the next one given the context. This survey introduces a comprehensive taxonomy that unifies both understanding and generation within multimodal learning through the lens of NTP. The proposed taxonomy covers five key aspects: multimodal tokenization, MMNTP model architectures, unified task representation, datasets & evaluation, and open challenges. This new taxonomy aims to aid researchers in their exploration of multimodal intelligence. An associated GitHub repository collecting the latest papers and repositories is available at https://github.com/LMM101/Awesome-Multimodal-Next-Token-Prediction