Seeker: Enhancing Exception Handling in Code with LLM-based Multi-Agent Approach

Xuanming Zhang, Yuxuan Chen, Yuan Yuan, Minlie Huang

2024-10-10

Summary

This paper introduces Seeker, a new framework that uses large language models (LLMs) and multiple intelligent agents to improve how software handles errors, known as exceptions.

What's the problem?

In software development, programs inevitably encounter unexpected conditions or errors, known as exceptions. Many developers struggle to detect and manage these exceptions properly, resulting in fragile, unreliable code. The issue is especially common in open-source projects, where poor exception handling drags down the quality of the software ecosystem as a whole.
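To make the problem concrete, here is a minimal Python illustration (my own example, not taken from the paper) contrasting fragile exception handling with the kind of targeted handling Seeker aims to produce:

```python
def to_int_fragile(s):
    # Fragile: a bare except silently swallows *every* error,
    # including unrelated bugs, so failures go unnoticed.
    try:
        return int(s)
    except:
        return 0

def to_int_robust(s, default=0):
    # Robust: capture only the exception types this call can
    # actually raise, and fall back deliberately.
    try:
        return int(s)
    except (ValueError, TypeError):
        return default
```

Both functions look similar, but only the second makes the failure modes explicit, which is the kind of "accurate capture of exception types" the paper argues developers often get wrong.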

What's the solution?

To address this problem, the authors developed Seeker, a multi-agent framework in which specialized agents work together to improve exception handling: a Scanner to analyze code, a Detector to flag fragile code, a Predator to retrieve relevant exception-handling information, a Ranker to prioritize candidate solutions, and a Handler to implement fixes. By pairing these agents with LLMs, Seeker can detect, capture, and resolve exceptions in code more effectively.
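The agent pipeline described above can be sketched roughly as follows. This is a hypothetical illustration of the control flow only: all function names and the toy heuristics inside them are my assumptions, whereas the paper's real agents rely on LLM prompting rather than string matching.

```python
def scanner(source):
    """Scanner: split source into function-level units (toy: blank-line split)."""
    return [u.strip() for u in source.split("\n\n") if u.strip()]

def detector(unit):
    """Detector: flag fragile units (toy: risky I/O call with no try block)."""
    return "open(" in unit and "try:" not in unit

def predator(unit):
    """Predator: retrieve candidate exception types for the unit (toy list)."""
    return ["OSError", "FileNotFoundError", "PermissionError"]

def ranker(candidates):
    """Ranker: prioritize candidates (toy: most specific type first)."""
    return sorted(candidates, key=lambda c: c != "FileNotFoundError")

def handler(unit, exception_type):
    """Handler: wrap the fragile unit in a try/except for the top-ranked type."""
    body = "\n".join("    " + line for line in unit.splitlines())
    return (f"try:\n{body}\n"
            f"except {exception_type}:\n"
            f"    pass  # TODO: real recovery logic")

def run_seeker(source):
    """Run each unit through the five-agent chain and reassemble the code."""
    patched = []
    for unit in scanner(source):
        if detector(unit):
            best = ranker(predator(unit))[0]
            unit = handler(unit, best)
        patched.append(unit)
    return "\n\n".join(patched)
```

For example, `run_seeker("data = open('cfg').read()")` would wrap the fragile read in a `try/except FileNotFoundError` block, while code the Detector considers safe passes through unchanged.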

Why it matters?

This research is important because it offers a systematic way to enhance the reliability of software by improving how exceptions are handled. By automating parts of the error detection and resolution process, Seeker can help developers create more robust software systems, ultimately leading to better quality applications and a more reliable software ecosystem.

Abstract

In real-world software development, improper or missing exception handling can severely impact the robustness and reliability of code. Exception handling mechanisms require developers to detect, capture, and manage exceptions according to high standards, but many developers struggle with these tasks, leading to fragile code. This problem is particularly evident in open-source projects and impacts the overall quality of the software ecosystem. To address this challenge, we explore the use of large language models (LLMs) to improve exception handling in code. Through extensive analysis, we identify three key issues: Insensitive Detection of Fragile Code, Inaccurate Capture of Exception Types, and Distorted Handling Solutions. These problems are widespread across real-world repositories, suggesting that robust exception handling practices are often overlooked or mishandled. In response, we propose Seeker, a multi-agent framework inspired by expert developer strategies for exception handling. Seeker uses five agents (Scanner, Detector, Predator, Ranker, and Handler) to assist LLMs in detecting, capturing, and resolving exceptions more effectively. Our work is the first systematic study on leveraging LLMs to enhance exception handling practices, providing valuable insights for future improvements in code reliability.