From Scores to Skills: A Cognitive Diagnosis Framework for Evaluating Financial Large Language Models
Ziyan Kuang, Feiyu Zhu, Maowei Jiang, Yanzhao Lai, Zelin Wang, Zhitong Wang, Meikang Qiu, Jiajia Huang, Min Peng, Qianqian Xie, Sophia Ananiadou
2025-08-21
Summary
This paper argues that current ways of testing large language models (LLMs) for finance fall short: they produce a single score and don't check whether a model actually understands different financial topics. The authors introduce FinCDM, a new evaluation framework that diagnoses specific skills and knowledge, along with CPA-QKA, a dataset built from the Certified Public Accountant (CPA) exam to make these tests more thorough. Experiments show that this method uncovers problems in LLMs that older tests missed, such as weaknesses in tax and regulatory reasoning, and reveals how different models cluster by behavior.
What's the problem?
Existing methods for testing Large Language Models (LLMs) in finance give a single score that doesn't tell us what the models actually know or where they make mistakes. These tests also cover only a narrow slice of the financial topics needed for real-world use, leaving out much of what matters in practice.
What's the solution?
To fix these issues, the paper introduces FinCDM, a new framework for evaluating financial LLMs that focuses on specific knowledge and skills. The authors also built CPA-QKA, a dataset authored and validated with financial experts using questions from the Certified Public Accountant (CPA) exam; it covers a wider range of financial skills, and each question is labeled with the knowledge it tests. This allows for a much more detailed picture of an LLM's capabilities.
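As a rough illustration (not the paper's actual cognitive diagnosis method), skill-level scoring can be sketched by tagging each question with the skills it tests — often written as a Q-matrix — and then computing per-skill accuracy from a model's responses. All names and numbers below are made up:

```python
import numpy as np

# Hypothetical Q-matrix: rows are exam questions, columns are skills;
# Q[i, k] == 1 means question i tests skill k. Values are illustrative.
Q = np.array([
    [1, 0],  # question 0 tests only skill 0 (say, tax law)
    [1, 1],  # question 1 tests both skills
    [0, 1],  # question 2 tests only skill 1 (say, auditing)
    [0, 1],  # question 3 tests only skill 1
])

# 1 = the model answered that question correctly, 0 = incorrectly
responses = np.array([1, 1, 0, 1])

def skill_mastery(Q, responses):
    """Per-skill accuracy: the fraction of questions tagged with each
    skill that the model answered correctly."""
    return {k: float(responses[Q[:, k] == 1].mean())
            for k in range(Q.shape[1])}

mastery = skill_mastery(Q, responses)
# mastery[0] == 1.0, mastery[1] == 2/3: a single aggregate score (3/4)
# would hide that skill 1 is noticeably weaker than skill 0.
```

Real cognitive diagnosis models go further than raw per-skill accuracy (for example, by modeling guessing and slipping), but the Q-matrix idea of linking questions to skills is the same.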
Why it matters?
This testing approach matters because it gives a more accurate and detailed way to evaluate LLMs for high-stakes financial work. By pinpointing exactly which skills and knowledge an LLM has or lacks, developers can build better and more reliable AI tools for finance. This leads to more trustworthy AI and helps guide future improvements in the field, and the authors plan to release all the evaluation tools publicly for others to use.
Abstract
Large Language Models (LLMs) have shown promise for financial applications, yet their suitability for this high-stakes domain remains largely unproven due to inadequacies in existing benchmarks. These benchmarks rely solely on score-level evaluation, summarizing performance with a single number that obscures what models truly know and where their precise limitations lie. They also draw on datasets that cover only a narrow subset of financial concepts, overlooking other essentials for real-world applications. To address these gaps, we introduce FinCDM, the first cognitive diagnosis evaluation framework tailored for financial LLMs. It evaluates LLMs at the knowledge-skill level, identifying which financial skills and knowledge they have or lack based on their response patterns across skill-tagged tasks, rather than reducing performance to a single aggregated number. We construct CPA-QKA, the first cognitively informed financial evaluation dataset, derived from the Certified Public Accountant (CPA) examination with comprehensive coverage of real-world accounting and financial skills. It is rigorously annotated by domain experts, who author, validate, and label questions with high inter-annotator agreement and fine-grained knowledge tags. Our extensive experiments on 30 proprietary, open-source, and domain-specific LLMs show that FinCDM reveals hidden knowledge gaps, identifies under-tested areas such as tax and regulatory reasoning that traditional benchmarks overlook, and uncovers behavioral clusters among models. FinCDM introduces a new paradigm for financial LLM evaluation, enabling interpretable, skill-aware diagnosis that supports more trustworthy and targeted model development. All datasets and evaluation scripts will be publicly released to support further research.
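The behavioral clustering mentioned in the abstract can be illustrated in a simplified form: once each model has a per-skill mastery profile, models whose profiles point in similar directions group together. The model names and mastery values below are hypothetical, and cosine similarity is used here only as one plausible comparison metric:

```python
import numpy as np

# Hypothetical per-skill mastery profiles (one value per skill) for
# three illustrative models; real profiles would come from diagnosis.
profiles = {
    "model_a": np.array([0.90, 0.80, 0.30]),
    "model_b": np.array([0.88, 0.75, 0.35]),
    "model_c": np.array([0.40, 0.50, 0.90]),
}

def cosine(u, v):
    """Cosine similarity between two mastery profiles."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Pairwise similarities between all models
names = list(profiles)
sim = {(a, b): cosine(profiles[a], profiles[b])
       for i, a in enumerate(names) for b in names[i + 1:]}

# model_a and model_b share a skill profile (strong on the first two
# skills, weak on the third) and so land in the same behavioral
# cluster; model_c shows the opposite pattern and falls outside it.
```

A full analysis would run a proper clustering algorithm over these profile vectors, but the pairwise-similarity view already shows how skill-level diagnosis separates models that a single aggregate score would rank as equivalent.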