Wasm: A Pipeline for Constructing Structured Arabic Interleaved Multimodal Corpora
Khalil Hennara, Ahmad Bastati, Muhammad Hreden, Mohamed Motasim Hamed, Zeina Aldallal, Sara Chrouf, Safwan AlModhayan
2025-11-12
Summary
This paper introduces a pipeline for building a new dataset to train artificial intelligence models that understand both Arabic text and images, much as models like ChatGPT process information. It addresses the challenge of limited high-quality Arabic data for this purpose.
What's the problem?
Large language models and multimodal models (which handle text and images) need a lot of data to learn effectively. While there has been progress in this area for languages like English, Arabic lags behind because there aren't enough good datasets that preserve the original structure of documents – things like headings, paragraphs, and how images are placed within the text. Existing Arabic datasets mostly extract only the plain text, discarding this structural context.
What's the solution?
The researchers developed a system called Wasm to process a massive collection of web pages (Common Crawl) and automatically create a new Arabic dataset. This system is designed to keep the original formatting of the web pages, including the arrangement of text and images, and outputs the data in a flexible markdown format. They also compared their method to how other popular datasets are created, explaining why they made specific choices in their design.
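The core idea of preserving interleaved structure can be illustrated with a minimal sketch: walking an HTML page and emitting markdown in which headings, paragraphs, and image references keep their original order. This is an illustrative example only, not the actual Wasm pipeline, which involves far more filtering and cleaning.

```python
from html.parser import HTMLParser


class InterleavedMarkdownParser(HTMLParser):
    """Minimal HTML-to-markdown converter that keeps images in their
    original position relative to the surrounding text (a sketch of the
    interleaving idea, not the authors' implementation)."""

    def __init__(self):
        super().__init__()
        self.chunks = []       # markdown fragments in document order
        self._heading = None   # pending heading prefix, e.g. "## "

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            # Remember the heading level so the enclosed text gets a prefix.
            self._heading = "#" * int(tag[1]) + " "
        elif tag == "img":
            # Emit the image inline, exactly where it appears in the page.
            a = dict(attrs)
            self.chunks.append(f"![{a.get('alt', '')}]({a.get('src', '')})")

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3"):
            self._heading = None

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        self.chunks.append((self._heading or "") + text)


def html_to_markdown(html: str) -> str:
    parser = InterleavedMarkdownParser()
    parser.feed(html)
    return "\n\n".join(parser.chunks)
```

Because the output is plain markdown, the same record can feed a text-only pipeline (drop the image lines) or a multimodal one (resolve the image URLs), which is the flexibility the paper emphasizes.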
Why it matters?
This work is important because it provides a much-needed resource for improving AI models that work with the Arabic language. By releasing both the dataset and the tools used to create it, the researchers are helping to accelerate progress in Arabic language processing and multimodal AI, potentially leading to better Arabic-speaking chatbots, image captioning, and other applications.
Abstract
The performance of large language models (LLMs) and large multimodal models (LMMs) depends heavily on the quality and scale of their pre-training datasets. Recent research shows that large multimodal models trained on natural documents where images and text are interleaved outperform those trained only on image-text pairs across a wide range of benchmarks, leveraging advanced pre-trained models to enforce semantic alignment, image-sequence consistency, and textual coherence. For Arabic, however, the lack of high-quality multimodal datasets that preserve document structure has limited progress. In this paper, we present our pipeline Wasm for processing the Common Crawl dataset to create a new Arabic multimodal dataset that uniquely provides markdown output. Unlike existing Arabic corpora that focus solely on text extraction, our approach preserves the structural integrity of web content while maintaining flexibility for both text-only and multimodal pre-training scenarios. We provide a comprehensive comparative analysis of our data processing pipeline against those used for major existing datasets, highlighting the convergences in filtering strategies and justifying our specific design choices. To support future research, we publicly release a representative dataset dump along with the multimodal processing pipeline for Arabic.