TabTune: A Unified Library for Inference and Fine-Tuning Tabular Foundation Models
Aditya Tanna, Pratinav Seth, Mohamed Bouadi, Utsav Avaiya, Vinay Kumar Sankarapu
2025-11-06
Summary
This paper introduces TabTune, a software library that makes it easier to use and compare powerful 'foundation models' for data organized in tables, such as spreadsheets or databases.
What's the problem?
Currently, using these advanced tabular models is difficult: each one requires different data-preparation steps, uses a different coding interface, and lacks clear standards for fine-tuning and for measuring real-world performance beyond simple accuracy. Properties such as calibration (whether the model's predicted probabilities reflect the true likelihood of an outcome) and fairness across different groups are not consistently addressed.
What's the solution?
TabTune solves this by providing a single, unified system. It gives you access to seven state-of-the-art tabular models and automatically handles the data preprocessing each one needs. It also supports different ways to adapt these models to your specific task, from using them directly to more advanced fine-tuning techniques. Importantly, TabTune includes tools to evaluate not just how well the model predicts, but also how reliable and fair those predictions are.
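To make the reliability part concrete: a common way to measure calibration is the expected calibration error (ECE), which bins predictions by confidence and compares each bin's average confidence to its actual accuracy. The sketch below is a minimal, self-contained implementation of that general metric, not code from TabTune itself.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Expected Calibration Error (ECE) for binary classifiers.

    probs:  predicted probability of the positive class, shape (n,)
    labels: true 0/1 labels, shape (n,)
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    # Confidence is the probability assigned to the predicted class.
    preds = (probs >= 0.5).astype(float)
    conf = np.where(preds == 1, probs, 1 - probs)
    acc = (preds == labels).astype(float)

    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # Weight each bin's |accuracy - confidence| gap by its size.
            ece += mask.mean() * abs(acc[mask].mean() - conf[mask].mean())
    return ece

# A model that says "90% confident" and is right 9 times out of 10
# is well calibrated, so its ECE is near zero.
print(expected_calibration_error([0.9] * 10, [1] * 9 + [0]))
```

A well-calibrated model scores near zero; a model that is 90% confident but only 50% accurate would score around 0.4 on the same metric.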
Why it matters?
This work is important because it lowers the barrier to entry for using these powerful tabular models. By standardizing the process, TabTune makes it easier for researchers and practitioners to compare different models and adaptation strategies, ultimately leading to better and more trustworthy applications of machine learning to structured data.
Abstract
Tabular foundation models represent a growing paradigm in structured data learning, extending the benefits of large-scale pretraining to tabular domains. However, their adoption remains limited due to heterogeneous preprocessing pipelines, fragmented APIs, inconsistent fine-tuning procedures, and the absence of standardized evaluation for deployment-oriented metrics such as calibration and fairness. We present TabTune, a unified library that standardizes the complete workflow for tabular foundation models through a single interface. TabTune provides consistent access to seven state-of-the-art models supporting multiple adaptation strategies, including zero-shot inference, meta-learning, supervised fine-tuning (SFT), and parameter-efficient fine-tuning (PEFT). The framework automates model-aware preprocessing, manages architectural heterogeneity internally, and integrates evaluation modules for performance, calibration, and fairness. Designed for extensibility and reproducibility, TabTune enables consistent benchmarking of adaptation strategies of tabular foundation models. The library is open source and available at https://github.com/Lexsi-Labs/TabTune.
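The abstract's claim of "managing architectural heterogeneity internally" through "a single interface" suggests an adapter-style design: each backing model gets its own preprocessing and prediction logic behind one shared contract. The sketch below illustrates that general pattern with toy models; all class and method names are hypothetical and do not reflect TabTune's actual API.

```python
from abc import ABC, abstractmethod

class TabularModelAdapter(ABC):
    """Hypothetical shared interface hiding each model's internals."""

    @abstractmethod
    def preprocess(self, rows):
        """Model-aware preprocessing: each adapter prepares data its own way."""

    @abstractmethod
    def predict(self, rows):
        """Return a 0/1 label per input row."""

class ZeroShotAdapter(TabularModelAdapter):
    # Stands in for a pretrained model used without any fine-tuning.
    def preprocess(self, rows):
        return [[float(v) for v in r] for r in rows]

    def predict(self, rows):
        return [1 if sum(r) > 0 else 0 for r in self.preprocess(rows)]

class FineTunedAdapter(TabularModelAdapter):
    # Stands in for a model adapted on task data (e.g., SFT or PEFT).
    def __init__(self):
        self.threshold = 0.0

    def fit(self, rows, labels):
        # Toy "fine-tuning": place the decision threshold midway between
        # the highest negative score and the lowest positive score.
        pos = [sum(r) for r, y in zip(rows, labels) if y == 1]
        neg = [sum(r) for r, y in zip(rows, labels) if y == 0]
        self.threshold = (min(pos) + max(neg)) / 2
        return self

    def preprocess(self, rows):
        return [[float(v) for v in r] for r in rows]

    def predict(self, rows):
        return [1 if sum(r) > self.threshold else 0
                for r in self.preprocess(rows)]

# Benchmarking code sees one interface, whatever the adaptation strategy:
tuned = FineTunedAdapter().fit([[1, 2], [-1, -2]], [1, 0])
for model in (ZeroShotAdapter(), tuned):
    print(model.predict([[3, 1], [-2, -1]]))  # [1, 0] for both
```

The payoff of the pattern is that evaluation modules (performance, calibration, fairness) can be written once against the shared interface rather than once per model architecture.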