olmOCR 2: Unit Test Rewards for Document OCR
Jake Poznanski, Luca Soldaini, Kyle Lo
2025-10-23
Summary
This paper introduces olmOCR 2, a new and improved system for turning scanned documents, such as PDFs, into editable text while preserving a natural reading order.
What's the problem?
Converting scanned documents into usable text is hard because the documents often have complex layouts with things like tables, formulas, and multiple columns. Existing systems struggle to accurately recognize the text *and* understand how it's organized on the page, leading to messy and disorganized output. Specifically, older versions of olmOCR weren't great at handling math, tables, or complex page arrangements.
What's the solution?
The researchers created a new system called olmOCR 2, powered by a large 'vision language model' (basically a powerful AI). They trained this AI using a technique called 'reinforcement learning with verifiable rewards', where the AI gets rewarded whenever its output passes a specific, automatically checkable test. To create enough tests, they built a pipeline that automatically generates realistic but artificial documents with known layouts and correct answers. This allowed them to train the AI to be much better at recognizing text and understanding document structure.
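To make the idea of "binary unit test rewards" concrete, here is a minimal sketch in Python. The test types and function names below are illustrative assumptions, not the paper's actual implementation: the idea is simply that each test checks one verifiable fact about the OCR output (e.g., a ground-truth snippet is present, or two snippets appear in reading order), and the reward is the fraction of tests that pass.

```python
# Hypothetical sketch of binary unit-test rewards for OCR output.
# All names and test types here are assumptions for illustration,
# not olmOCR 2's actual code.

def presence_test(output: str, snippet: str) -> bool:
    """Pass if a known ground-truth snippet appears in the OCR output."""
    return snippet in output

def order_test(output: str, first: str, then: str) -> bool:
    """Pass if two snippets appear in natural reading order."""
    i, j = output.find(first), output.find(then)
    return i != -1 and j != -1 and i < j

def reward(output: str, tests) -> float:
    """RLVR-style reward: the fraction of binary unit tests that pass."""
    results = [t(output) for t in tests]
    return sum(results) / len(results)

# Usage: tests would be extracted from a synthetic document's known
# HTML source, then scored against the model's OCR output.
ocr_output = "Introduction\nMethods\nTotal: 42"
tests = [
    lambda o: presence_test(o, "Total: 42"),
    lambda o: order_test(o, "Introduction", "Methods"),
]
print(reward(ocr_output, tests))  # → 1.0
```

Because each test is a yes/no check against known ground truth, the reward is cheap to compute and hard to game, which is what makes it usable as a reinforcement-learning signal.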
Why does it matter?
This work is important because it significantly improves the accuracy of converting scanned documents into editable text. Better OCR means easier access to information in books, articles, and other printed materials, and it's especially helpful for things like automatically processing scientific papers with complex formulas and tables. Plus, they're making all their tools and data freely available for others to use and build upon.
Abstract
We present olmOCR 2, the latest in our family of powerful OCR systems for converting digitized print documents, like PDFs, into clean, naturally ordered plain text. olmOCR 2 is powered by olmOCR-2-7B-1025, a specialized, 7B vision language model (VLM) trained using reinforcement learning with verifiable rewards (RLVR), where our rewards are a diverse set of binary unit tests. To scale unit test creation, we develop a pipeline for generating synthetic documents with diverse and challenging layouts, known ground-truth HTML source code, and extracted test cases. We show that RL training on these test cases results in state-of-the-art performance on olmOCR-Bench, our English-language OCR benchmark, with the largest improvements in math formula conversion, table parsing, and multi-column layouts compared to previous versions. We release our model, data and code under permissive open licenses.