
TabReD: A Benchmark of Tabular Machine Learning in-the-Wild

Ivan Rubachev, Nikolay Kartashev, Yury Gorishniy, Artem Babenko

2024-07-04


Summary

This paper introduces TabReD, a new benchmark designed to improve how we evaluate machine learning models on tabular data, that is, data organized in tables like spreadsheets.

What's the problem?

The main problem is that existing benchmarks used for testing machine learning models often do not accurately reflect real-world situations. Specifically, they usually ignore two important factors: first, that data can change over time, which affects how well models perform; and second, that real-world datasets typically pass through extensive data acquisition and feature-engineering pipelines, producing many predictive (as well as uninformative or correlated) features, a level of preparation that academic datasets rarely reflect.

What's the solution?

To address these issues, the authors created TabReD, a collection of eight real-world datasets from fields such as finance and food delivery. Each dataset comes with timestamps and time-based splits for training and testing, meaning the model is trained on older data and tested on newer data to better simulate real-world deployment. The researchers found that with these time-based splits, the performance rankings of different models changed compared to the random splits traditionally used in academic benchmarks. In this setting, simpler models like Multi-Layer Perceptrons (MLPs) and Gradient Boosted Decision Trees (GBDTs) performed better than more sophisticated deep learning models.
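To make the difference between the two evaluation protocols concrete, here is a minimal sketch in Python. It assumes the data lives in a pandas DataFrame with a timestamp column; the column name, test fraction, and helper functions are illustrative assumptions, not TabReD's exact splitting protocol.

```python
import pandas as pd
from sklearn.model_selection import train_test_split


def time_based_split(df: pd.DataFrame, time_col: str = "timestamp", test_fraction: float = 0.2):
    """Train on older rows, test on the most recent rows (deployment-like setting)."""
    df_sorted = df.sort_values(time_col)
    cutoff = int(len(df_sorted) * (1 - test_fraction))
    return df_sorted.iloc[:cutoff], df_sorted.iloc[cutoff:]


def random_split(df: pd.DataFrame, test_fraction: float = 0.2, seed: int = 0):
    """Random split common in academic benchmarks; ignores time ordering."""
    return train_test_split(df, test_size=test_fraction, random_state=seed)
```

Under temporal shift, the random split lets test rows come from the same time period as training rows, which tends to overestimate deployment performance and, as the paper shows, can change which models appear to work best.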

Why it matters?

This research is important because it helps bridge the gap between academic research and practical applications in industry. By providing a more realistic way to evaluate machine learning models on tabular data, TabReD can lead to better model development and deployment in real-world scenarios, ultimately improving decision-making processes in various fields.

Abstract

Benchmarks that closely reflect downstream application scenarios are essential for the streamlined adoption of new research in tabular machine learning (ML). In this work, we examine existing tabular benchmarks and find two common characteristics of industry-grade tabular data that are underrepresented in the datasets available to the academic community. First, tabular data often changes over time in real-world deployment scenarios. This impacts model performance and requires time-based train and test splits for correct model evaluation. Yet, existing academic tabular datasets often lack timestamp metadata to enable such evaluation. Second, a considerable portion of datasets in production settings stem from extensive data acquisition and feature engineering pipelines. For each specific dataset, this can have a different impact on the absolute and relative number of predictive, uninformative, and correlated features, which in turn can affect model selection. To fill the aforementioned gaps in academic benchmarks, we introduce TabReD -- a collection of eight industry-grade tabular datasets covering a wide range of domains from finance to food delivery services. We assess a large number of tabular ML models in the feature-rich, temporally-evolving data setting facilitated by TabReD. We demonstrate that evaluation on time-based data splits leads to different methods ranking, compared to evaluation on random splits more common in academic benchmarks. Furthermore, on the TabReD datasets, MLP-like architectures and GBDT show the best results, while more sophisticated DL models are yet to prove their effectiveness.