
SmolDocling: An ultra-compact vision-language model for end-to-end multi-modal document conversion

Ahmed Nassar, Andres Marafioti, Matteo Omenetti, Maksym Lysak, Nikolaos Livathinos, Christoph Auer, Lucas Morin, Rafael Teixeira de Lima, Yusik Kim, A. Said Gurbuz, Michele Dolfi, Miquel Farré, Peter W. J. Staar

2025-03-17


Summary

This paper introduces SmolDocling, a small but powerful AI model designed to convert document images into structured, editable formats.

What's the problem?

Converting documents from images into editable files usually requires large AI models or complex systems with multiple specialized tools. This can be inefficient and resource-intensive.

What's the solution?

SmolDocling uses a single, compact AI model to process entire pages and accurately capture the content, structure, and location of different elements, like text, tables, and images. It uses a new universal markup format called DocTags.
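To make the idea concrete, the sketch below shows what markup that captures content, element type, and page location together might look like. The tag names and location tokens here are purely illustrative, not the actual DocTags vocabulary defined in the paper:

```xml
<!-- Hypothetical illustration of a DocTags-style page description.
     Tag names and <loc_*> position tokens are invented for this example;
     see the paper for the real DocTags format. -->
<doc>
  <section_header><loc_58><loc_42><loc_440><loc_60>1. Introduction</section_header>
  <text><loc_58><loc_70><loc_440><loc_180>Document conversion remains challenging...</text>
  <table><loc_58><loc_200><loc_440><loc_320>
    <!-- structure tokens describing rows, columns, and cell spans would go here -->
  </table>
</doc>
```

The key point is that a single output sequence encodes what each element is, what it says, and where it sits on the page, so no separate layout-analysis or OCR pipeline is needed.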

Why does it matter?

This work matters because it provides a more efficient and streamlined way to convert documents, making it easier to process a wide variety of document types with limited computational resources.

Abstract

We introduce SmolDocling, an ultra-compact vision-language model targeting end-to-end document conversion. Our model comprehensively processes entire pages by generating DocTags, a new universal markup format that captures all page elements in their full context with location. Unlike existing approaches that rely on large foundational models, or ensemble solutions that rely on handcrafted pipelines of multiple specialized models, SmolDocling offers an end-to-end conversion for accurately capturing content, structure and spatial location of document elements in a 256M-parameter vision-language model. SmolDocling exhibits robust performance in correctly reproducing document features such as code listings, tables, equations, charts, lists, and more across a diverse range of document types including business documents, academic papers, technical reports, patents, and forms -- significantly extending beyond the commonly observed focus on scientific papers. Additionally, we contribute novel publicly sourced datasets for charts, tables, equations, and code recognition. Experimental results demonstrate that SmolDocling competes with other Vision Language Models that are up to 27 times larger in size, while reducing computational requirements substantially. The model is currently available, datasets will be publicly available soon.