PM-LLM-Benchmark: Evaluating Large Language Models on Process Mining Tasks
Alessandro Berti, Humam Kourani, Wil M. P. van der Aalst
2024-07-19

Summary
This paper introduces PM-LLM-Benchmark, a new benchmark designed to evaluate how well large language models (LLMs) perform process mining tasks, i.e., the analysis of event data from business processes in order to understand and improve them.
What's the problem?
As businesses generate large amounts of data from their processes, there is a need for tools that can analyze this data effectively. While some commercial LLMs already handle basic analytics tasks, it is unclear how well open-source LLMs perform on process mining tasks. Additionally, building a benchmark that accurately assesses these models is challenging, due to limited public availability of suitable data and to evaluation biases that arise when LLMs are used to grade other LLMs' answers.
What's the solution?
The authors developed PM-LLM-Benchmark to assess the performance of various LLMs on process mining tasks. The benchmark covers both process-mining-specific and process-specific domain knowledge, as well as different implementation strategies. They tested several prominent LLMs and found that while most could perform some tasks at a satisfactory level, tiny models, small enough to run on edge devices, were still inadequate for the more complex ones. The benchmark also highlights the need for further research to address evaluation biases and to produce a more reliable ranking of the competitive LLMs; a rough sketch of such an evaluation loop is given below.
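To make the idea of benchmarking LLMs with an LLM judge concrete, here is a minimal, hypothetical sketch of such an evaluation loop. It is not the paper's actual implementation: the task prompts, function names, model names, and the 1-10 scoring scale are illustrative assumptions, and the model/judge calls are stubs standing in for whatever API or local runtime would be used in practice.

```python
# Hypothetical sketch of an LLM-as-judge benchmark loop (not the paper's code).
# query_model and judge_answer are placeholders for real model calls;
# the tasks, prompts, and scoring scale below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Task:
    name: str
    prompt: str  # process-mining question posed to the model under test


TASKS = [
    Task("pm_domain_knowledge",
         "Explain the difference between fitness and precision in conformance checking."),
    Task("process_specific_knowledge",
         "Given a typical order-to-cash process, list three likely bottlenecks."),
]


def query_model(model: str, prompt: str) -> str:
    """Placeholder for a call to the LLM under test (hosted API or local runtime)."""
    return f"[{model}] answer to: {prompt[:40]}..."


def judge_answer(judge_model: str, task: Task, answer: str) -> float:
    """Placeholder for an LLM-as-judge call returning a score in [1, 10].
    This is where evaluation bias can creep in: the judge may favor answers
    phrased like its own outputs."""
    return 7.5  # dummy score for the sketch


def run_benchmark(models: list[str], judge_model: str) -> dict[str, float]:
    """Average the judge's per-task scores for each model under test."""
    scores: dict[str, float] = {}
    for model in models:
        per_task = [judge_answer(judge_model, t, query_model(model, t.prompt))
                    for t in TASKS]
        scores[model] = sum(per_task) / len(per_task)
    return scores


if __name__ == "__main__":
    print(run_benchmark(["open-model-7b", "commercial-model"], judge_model="judge-model"))
```

Replacing the stubs with real model calls yields a leaderboard-style score per model, which is also why the judge model's own biases directly shape the resulting ranking.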
Why it matters?
This research is important because it helps identify which LLMs are effective for process mining, a critical area for businesses looking to optimize their operations. By establishing a comprehensive benchmark, the study paves the way for future advancements in using AI for analyzing business processes, ultimately helping organizations make better data-driven decisions.
Abstract
Large Language Models (LLMs) have the potential to semi-automate some process mining (PM) analyses. While commercial models are already adequate for many analytics tasks, the competitive level of open-source LLMs on PM tasks is unknown. In this paper, we propose PM-LLM-Benchmark, the first comprehensive benchmark for PM, focusing on domain knowledge (process-mining-specific and process-specific) and on different implementation strategies. We also discuss the challenges in creating such a benchmark, related to the public availability of the data and to evaluation biases introduced by the LLMs. Overall, we observe that most of the considered LLMs can perform some process mining tasks at a satisfactory level, but tiny models that would run on edge devices are still inadequate. We also conclude that, while the proposed benchmark is useful for identifying LLMs that are adequate for process mining tasks, further research is needed to overcome the evaluation biases and to perform a more thorough ranking of the competitive LLMs.