SAIL-Embedding Technical Report: Omni-modal Embedding Foundation Model
Lin Lin, Jiefeng Long, Zhihe Wan, Yuchi Wang, Dingkang Yang, Shuang Yang, Yueyang Yao, Xu Chen, Zirui Guo, Shengqiang Li, Weiran Li, Hanyu Li, Yaling Mou, Yan Qiu, Haiyang Yu, Xiao Liang, Hongsheng Li, Chao Feng
2025-10-15
Summary
This paper introduces SAIL-Embedding, a new type of AI model designed to understand and connect information from different sources like text, images, and user behavior data. It aims to create a single, powerful representation of information that can be used for various tasks, especially in recommendation systems.
What's the problem?
Existing AI models that try to combine different types of data often struggle with a few key issues. They typically support only a limited set of modalities, can be unstable to train, and don't always perform well when applied to real-world business problems like recommending products or content. In particular, they lack flexibility and fail to capture the collaborative signals in how people interact with content online.
What's the solution?
The researchers developed SAIL-Embedding using a multi-stage training process. First, they used a 'content-aware progressive training' method to help the model learn to understand different types of information and how they relate to each other. Then, they used 'collaboration-aware recommendation enhancement training' to specifically improve the model's ability to make good recommendations by distilling knowledge from behavioral embeddings and learning from user history and interaction patterns. They also added techniques, stochastic specialization and dataset-driven pattern matching, to make the training process more flexible and reliable, ensuring the model works well in different situations.
Why it matters?
This work is important because it improves the performance of recommendation systems, which are used everywhere online. The experiments showed that SAIL-Embedding led to a noticeable increase in 'Lifetime' (how long users continue to engage with a platform) and improved the accuracy of content ranking. This means better recommendations, happier users, and potentially increased revenue for platforms like Douyin (China's counterpart to TikTok).
Abstract
Multimodal embedding models aim to yield informative unified representations that empower diverse cross-modal tasks. Despite promising developments in the evolution from CLIP-based dual-tower architectures to large vision-language models, prior works still face unavoidable challenges in real-world applications and business scenarios, such as limited modality support, unstable training mechanisms, and industrial domain gaps. In this work, we introduce SAIL-Embedding, an omni-modal embedding foundation model that addresses these issues through tailored training strategies and architectural design. In the optimization procedure, we propose a multi-stage training scheme to boost the multifaceted effectiveness of representation learning. Specifically, the content-aware progressive training aims to enhance the model's adaptability to diverse downstream tasks and to master enriched cross-modal proficiency. The collaboration-aware recommendation enhancement training further adapts multimodal representations for recommendation scenarios by distilling knowledge from sequence-to-item and ID-to-item embeddings while mining user historical interests. Concurrently, we develop stochastic specialization and dataset-driven pattern matching to strengthen training flexibility and generalizability. Experimental results show that SAIL-Embedding achieves state-of-the-art (SOTA) performance on diverse retrieval tasks compared to other methods. In online experiments across various real-world scenarios integrated with our model, we observe a significant increase in Lifetime (LT), a crucial indicator of the recommendation experience. For instance, the model delivers a 7-day LT gain of +0.158% and a 14-day LT gain of +0.144% in the Douyin-Selected scenario. For the Douyin feed rank model, the match features produced by SAIL-Embedding yield a +0.08% AUC gain.
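The abstract evaluates SAIL-Embedding on cross-modal retrieval tasks but does not spell out the retrieval objective. A standard choice for embedding models in this lineage (CLIP-style dual towers and their successors) is a symmetric InfoNCE contrastive loss over matched query-item pairs; the NumPy sketch below illustrates that generic objective, not the paper's confirmed loss, and the temperature value is an assumption.

```python
import numpy as np

def info_nce(query_emb, item_emb, tau=0.07):
    """Symmetric InfoNCE over a batch of matched (query, item) pairs.

    query_emb, item_emb: (B, D) arrays where row i of each is a positive pair;
    all other rows in the batch serve as in-batch negatives.
    """
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    v = item_emb / np.linalg.norm(item_emb, axis=1, keepdims=True)
    logits = q @ v.T / tau  # (B, B); diagonal entries are the positives
    labels = np.arange(len(q))

    def ce(lg):
        # Stable cross-entropy with the diagonal as the correct class.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(logp[labels, labels])

    # Average the query->item and item->query directions.
    return float(0.5 * (ce(logits) + ce(logits.T)))
```

Training with such a loss pushes each query's embedding toward its matched item and away from the other items in the batch, which is what makes the resulting representations directly usable for nearest-neighbor retrieval.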