ModelTables: A Corpus of Tables about Models

Zhengyuan Dong, Victor Zhong, Renée J. Miller

2025-12-19

Summary

This paper introduces ModelTables, a new collection of tables specifically designed to help understand and search information about AI models. These tables aren't just random data; they come from places like model descriptions, code repositories, and research papers, and they focus on how well models perform and how they're configured.

What's the problem?

Currently, finding specific information about AI models, especially details found in tables about their performance or settings, is difficult. Existing methods for searching large datasets often miss the important relationships between these tables and the models they describe. It's like trying to find a specific piece of information in a huge, disorganized spreadsheet – it's hard to know where to look and how things connect.

What's the solution?

The researchers created ModelTables, a dataset containing over 90,000 tables linked to more than 60,000 AI models. They then tested different search methods on this dataset, measuring how well each could find relevant tables for a given query. Because no single answer key exists for which tables are related, they built the ground truth from three complementary signals: whether the papers behind two models cite each other, whether one model card explicitly links to or inherits from another, and whether two models were trained on the same dataset. They then compared standard search techniques against more advanced methods that capture the meaning of the data.
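The three relatedness signals can be thought of as a simple logical union: two models count as related if any one signal fires. The sketch below illustrates this idea in Python; all the function and field names (`related`, `citations`, `card_links`, `datasets`) are hypothetical stand-ins, not the paper's actual implementation.

```python
def related(model_a, model_b, citations, card_links, datasets):
    """Judge two models related if ANY of the three signals holds:
    (1) one model's paper cites the other's,
    (2) one model card explicitly links to / inherits from the other,
    (3) the two models share a training dataset.
    """
    cites = (model_b["paper"] in citations.get(model_a["paper"], set())
             or model_a["paper"] in citations.get(model_b["paper"], set()))
    linked = (model_b["id"] in card_links.get(model_a["id"], set())
              or model_a["id"] in card_links.get(model_b["id"], set()))
    shared_data = bool(datasets.get(model_a["id"], set())
                       & datasets.get(model_b["id"], set()))
    return cites or linked or shared_data
```

Treating the signals as a union keeps the ground truth permissive: a missing citation does not hide a pair that shares training data.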

Why it matters?

This work is important because it provides a standardized way to evaluate how well we can search for and understand information about AI models. By creating this benchmark, the researchers hope to encourage the development of better search tools that can help people quickly find the details they need about different models, leading to more informed decisions and faster progress in the field of artificial intelligence.

Abstract

We present ModelTables, a benchmark of tables in Model Lakes that captures the structured semantics of performance and configuration tables often overlooked by text-only retrieval. The corpus is built from Hugging Face model cards, GitHub READMEs, and referenced papers, linking each table to its surrounding model and publication context. Compared with open data lake tables, model tables are smaller yet exhibit denser inter-table relationships, reflecting tightly coupled model and benchmark evolution. The current release covers over 60K models and 90K tables. To evaluate model and table relatedness, we construct a multi-source ground truth using three complementary signals: (1) paper citation links, (2) explicit model card links and inheritance, and (3) shared training datasets. We present one extensive empirical use case for the benchmark: table search. We compare canonical Data Lake search operators (unionable, joinable, keyword) and Information Retrieval baselines (dense, sparse, hybrid retrieval) on this benchmark. Union-based semantic table retrieval attains 54.8% P@1 overall (54.6% on citation, 31.3% on inheritance, 30.6% on shared dataset signals); table-based dense retrieval reaches 66.5% P@1, and metadata hybrid retrieval achieves 54.1%. This evaluation indicates clear room for developing better table search methods. By releasing ModelTables and its creation protocol, we provide the first large-scale benchmark of structured data describing AI models. Our use case of table discovery in Model Lakes provides intuition and evidence for developing more accurate semantic retrieval, structured comparison, and principled organization of structured model knowledge. Source code, data, and other artifacts have been made available at https://github.com/RJMillerLab/ModelTables.
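The abstract reports results as P@1, precision at rank 1: the fraction of queries whose top-ranked result is actually relevant. A minimal sketch of the metric, with hypothetical input shapes (a dict of ranked result lists per query, and a dict of relevant-result sets per query):

```python
def precision_at_1(ranked_results, ground_truth):
    """P@1: fraction of queries whose top-ranked result is relevant.

    ranked_results: query -> list of result ids, best first
    ground_truth:   query -> set of relevant result ids
    """
    hits = sum(
        1
        for query, results in ranked_results.items()
        if results and results[0] in ground_truth.get(query, set())
    )
    return hits / len(ranked_results)
```

Under this definition, a reported 54.8% P@1 means the top result was relevant for a bit more than half of the queries.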