
Finch: Benchmarking Finance & Accounting across Spreadsheet-Centric Enterprise Workflows

Haoyu Dong, Pengkun Zhang, Yan Gao, Xuanyu Dong, Yilin Cheng, Mingzhe Lu, Adina Yakefu, Shuxin Zheng

2025-12-16

Summary

This paper introduces a new way to test how well AI can handle complex, real-world finance and accounting tasks, going beyond simple calculations to include things like understanding emails, working with messy spreadsheets, and creating reports.

What's the problem?

Currently, there isn't a good benchmark to accurately measure how AI performs on the kinds of complicated, multi-step jobs that professionals in finance actually do every day. Existing tests are often too simple and don't reflect the real-world 'messiness' of financial data and workflows, like dealing with different file types, incomplete information, and needing to pull data from multiple sources.

What's the solution?

The researchers created 'Finch,' a benchmark built from over 15,000 real spreadsheets and 500,000 emails from 150 Enron employees, along with material from other financial institutions. From this data they constructed 172 realistic workflows comprising 384 tasks of the kind a finance professional would actually perform. These workflows were first drafted with AI assistance to identify candidate tasks, then meticulously checked and refined by human experts, an effort that took over 700 hours. Finally, they tested leading AI models, including GPT-5.1, Claude Sonnet 4.5, Gemini 3 Pro, Grok 4, and Qwen 3 Max, on these workflows.

Why it matters?

This work is important because it shows that even the most advanced AI models still struggle with complex financial tasks. GPT-5.1 Pro, for example, spent 48 hours in total on the benchmark yet passed only 38.4% of the workflows, and Claude Sonnet 4.5 passed just 25.0%. This highlights the need for further development in AI to truly assist professionals in fields like finance and accounting, and provides a standard way to measure progress in this area.
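To make the headline numbers concrete, here is a minimal back-of-the-envelope sketch (assuming the reported pass rates apply to all 172 workflows, as the abstract states) that converts the percentages into approximate workflow counts:

```python
# Convert reported pass rates into approximate counts of passed workflows.
# Assumption: each percentage is measured over the full set of 172 workflows.
TOTAL_WORKFLOWS = 172

def passed_workflows(pass_rate: float, total: int = TOTAL_WORKFLOWS) -> int:
    """Turn a fractional pass rate into a rounded workflow count."""
    return round(pass_rate * total)

gpt_passes = passed_workflows(0.384)     # GPT-5.1 Pro: 38.4%
claude_passes = passed_workflows(0.250)  # Claude Sonnet 4.5: 25.0%

print(gpt_passes, claude_passes)  # 66 43
```

In other words, even the best-performing system clears only about 66 of the 172 workflows, leaving more than a hundred unsolved.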

Abstract

We introduce a finance & accounting benchmark (Finch) for evaluating AI agents on real-world, enterprise-grade professional workflows -- interleaving data entry, structuring, formatting, web search, cross-file retrieval, calculation, modeling, validation, translation, visualization, and reporting. Finch is sourced from authentic enterprise workspaces at Enron (15,000 spreadsheets and 500,000 emails from 150 employees) and other financial institutions, preserving in-the-wild messiness across multimodal artifacts (text, tables, formulas, charts, code, and images) and spanning diverse domains such as budgeting, trading, and asset management. We propose a workflow construction process that combines LLM-assisted discovery with expert annotation: (1) LLM-assisted, expert-verified derivation of workflows from real-world email threads and version histories of spreadsheet files, and (2) meticulous expert annotation for workflows, requiring over 700 hours of domain-expert effort. This yields 172 composite workflows with 384 tasks, involving 1,710 spreadsheets with 27 million cells, along with PDFs and other artifacts, capturing the intrinsically messy, long-horizon, knowledge-intensive, and collaborative nature of real-world enterprise work. We conduct both human and automated evaluations of frontier AI systems including GPT 5.1, Claude Sonnet 4.5, Gemini 3 Pro, Grok 4, and Qwen 3 Max, and GPT 5.1 Pro spends 48 hours in total yet passes only 38.4% of workflows, while Claude Sonnet 4.5 passes just 25.0%. Comprehensive case studies further surface the challenges that real-world enterprise workflows pose for AI agents.