
Qilin: A Multimodal Information Retrieval Dataset with APP-level User Sessions

Jia Chen, Qian Dong, Haitao Li, Xiaohui He, Yan Gao, Shaosheng Cao, Yi Wu, Ping Yang, Chen Xu, Yao Hu, Qingyao Ai, Yiqun Liu

2025-03-04

Summary

This paper introduces Qilin, a new dataset that captures how people search for and interact with different types of content (like text, images, and videos) on their phones. The data comes from Xiaohongshu, a popular Chinese social media app.

What's the problem?

Researchers and companies want to make search and recommendation systems better, especially for apps that use different types of content like pictures and videos along with text. However, they don't have enough good data to work with, which makes it hard to improve these systems.

What's the solution?

The researchers created Qilin, a large collection of real user data from Xiaohongshu. The dataset records how users search, what results they see (image-text notes, video notes, and commercial notes), and how they interact with those results. It also captures special features such as direct answers to questions (from the app's Deep Query Answering module) and how users respond to them, along with app-level context around each request. This gives researchers a complete picture of how people use complex search and recommendation systems in real life.
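
To make this concrete, here is a minimal sketch of what a single search-session record in a dataset like this might look like. The field names and values below are invented for illustration and do not reflect the actual Qilin schema or data.

    # Hypothetical illustration of a multimodal search-session record.
    # Field names are invented for explanation and do NOT reflect the real Qilin schema.
    session = {
        "user_id": "u_102938",
        "query": "beginner film camera recommendations",
        "context": {                      # app-level signals around the request
            "entry_point": "recommendation_feed",
            "device": "mobile",
            "timestamp": "2024-11-02T14:31:05",
        },
        "results": [
            {"note_id": "n_001", "type": "image_text", "clicked": True,  "liked": True},
            {"note_id": "n_002", "type": "video",      "clicked": False, "liked": False},
            {"note_id": "n_003", "type": "commercial", "clicked": False, "liked": False},
        ],
        "direct_answer": {                # present only when the DQA module is triggered
            "text": "Popular starter film cameras include ...",
            "referred_notes": ["n_001"],
            "user_feedback": "liked",
        },
    }

    # Example: compute a simple click-through rate for this session.
    clicks = sum(r["clicked"] for r in session["results"])
    ctr = clicks / len(session["results"])
    print(f"session CTR: {ctr:.2f}")

Having clicks, likes, direct answers, and surrounding context in one record is what lets researchers study search and recommendation behavior together rather than in isolation.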

Why it matters?

This matters because it can help make the apps and websites we use every day much better at showing us what we're looking for. By studying how real people search for and interact with different types of content, researchers can create smarter systems that understand what we want more accurately. This could lead to more helpful search results, better recommendations, and overall improved experiences when we use social media, shopping apps, or any platform that combines text, images, and videos.

Abstract

User-generated content (UGC) communities, especially those featuring multimodal content, improve user experiences by integrating visual and textual information into results (or items). The challenge of improving user experiences in complex systems with search and recommendation (S&R) services has drawn significant attention from both academia and industry in recent years. However, the lack of high-quality datasets has limited research progress on multimodal S&R. To address the growing need for developing better S&R services, we present a novel multimodal information retrieval dataset in this paper, namely Qilin. The dataset is collected from Xiaohongshu, a popular social platform with over 300 million monthly active users and an average search penetration rate of over 70%. In contrast to existing datasets, Qilin offers a comprehensive collection of user sessions with heterogeneous results like image-text notes, video notes, commercial notes, and direct answers, facilitating the development of advanced multimodal neural retrieval models across diverse task settings. To better model user satisfaction and support the analysis of heterogeneous user behaviors, we also collect extensive APP-level contextual signals and genuine user feedback. Notably, Qilin contains user-favored answers and their referred results for search requests triggering the Deep Query Answering (DQA) module. This allows not only the training and evaluation of a Retrieval-augmented Generation (RAG) pipeline, but also the exploration of how such a module would affect users' search behavior. Through comprehensive analysis and experiments, we provide interesting findings and insights for further improving S&R systems. We hope that Qilin will significantly contribute to the advancement of multimodal content platforms with S&R services in the future.
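
As a rough illustration of the RAG-style training and evaluation the abstract mentions, the sketch below retrieves notes for a query, stitches them into an answer, and scores that answer against a user-favored reference with token-level F1. Every component here (the keyword-overlap retriever, the stand-in generator, and the scoring choice) is a placeholder chosen for illustration, not the pipeline used in the paper.

    # Toy retrieve-then-generate (RAG) loop with a simple overlap-based evaluation
    # against a user-favored reference answer. All components are placeholders.

    def retrieve(query, notes, k=2):
        """Rank notes by naive keyword overlap with the query and return the top-k."""
        q_terms = set(query.lower().split())
        scored = sorted(notes, key=lambda n: len(q_terms & set(n.lower().split())), reverse=True)
        return scored[:k]

    def generate(query, retrieved):
        """Stand-in 'generator': stitch retrieved snippets into an answer."""
        return f"Answer to '{query}': " + " ".join(retrieved)

    def token_f1(prediction, reference):
        """Token-level F1, a common rough proxy for answer quality."""
        p, r = prediction.lower().split(), reference.lower().split()
        common = len(set(p) & set(r))
        if common == 0:
            return 0.0
        precision, recall = common / len(p), common / len(r)
        return 2 * precision * recall / (precision + recall)

    notes = [
        "film cameras for beginners: the Canon AE-1 is a popular starter choice",
        "best hiking trails near Chengdu this autumn",
        "how to develop black and white film at home",
    ]
    query = "beginner film camera"
    favored_answer = "the Canon AE-1 is a popular starter film camera for beginners"

    answer = generate(query, retrieve(query, notes))
    print(answer)
    print("token F1 vs. user-favored answer:", round(token_f1(answer, favored_answer), 2))

Because Qilin records which answers users favored and which results those answers referred to, a pipeline like this can be both trained and scored against real user preferences rather than only against synthetic labels.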