
Sibyl: Simple yet Effective Agent Framework for Complex Real-world Reasoning

Yulong Wang, Tianhao Shen, Lifeng Liu, Jian Xie

2024-07-17

Summary

This paper introduces Sibyl, a new framework designed to enhance the reasoning abilities of large language models (LLMs) when tackling complex real-world problems.

What's the problem?

While existing LLM-based agents are good at solving many problems, they often struggle with long-term reasoning and do not fully use the tools available to them. This can lead to mistakes or oversights in complex situations that demand deep, sustained thinking, the kind of reasoning people apply routinely but that remains challenging for AI.

What's the solution?

Sibyl addresses these issues with a simple yet effective design: a global workspace that lets every component share knowledge and conversation history, and a multi-agent debate-based "jury" that refines answers. It breaks tasks into manageable steps and lets different agents critique and improve candidate responses before a final answer is given, while relying on only a minimal set of tools. This helps the system move from quick, instinctive responses (System-1 thinking) to more careful, deliberate reasoning (System-2 thinking). The framework is also built to be easy to debug and to integrate into other LLM applications.
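To make the design concrete, here is a minimal, illustrative sketch (not the authors' code) of the two ideas above: a shared global workspace that all components read from and write to, and a debate-style jury that critiques a draft answer before it is finalized. The `call_llm` stub, the `GlobalWorkspace` class, and the juror prompts are hypothetical placeholders rather than Sibyl's actual interfaces.

```python
# Illustrative sketch of a global workspace plus a debate-based jury.
# Not Sibyl's implementation; names and prompts are assumptions.

from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; in practice, replace with a real client (e.g. GPT-4)."""
    return f"<model response to: {prompt[:40]}...>"


@dataclass
class GlobalWorkspace:
    """Shared memory: every agent appends to and reads the same event log."""
    events: list[str] = field(default_factory=list)

    def post(self, source: str, content: str) -> None:
        self.events.append(f"[{source}] {content}")

    def context(self) -> str:
        return "\n".join(self.events)


def solve_step(task: str, ws: GlobalWorkspace) -> str:
    """One reasoning step that conditions on the whole shared workspace."""
    draft = call_llm(f"Task: {task}\nWorkspace so far:\n{ws.context()}\nNext step:")
    ws.post("solver", draft)
    return draft


def jury_refine(task: str, draft: str, ws: GlobalWorkspace, jurors: int = 3) -> str:
    """Debate-style refinement: several juror prompts critique the draft,
    then a final prompt consolidates the critiques into the answer."""
    critiques = []
    for i in range(jurors):
        critique = call_llm(
            f"Task: {task}\nDraft answer: {draft}\n"
            f"As juror {i + 1}, point out errors or omissions:"
        )
        ws.post(f"juror-{i + 1}", critique)
        critiques.append(critique)
    return call_llm(
        f"Task: {task}\nDraft: {draft}\nCritiques:\n"
        + "\n".join(critiques)
        + "\nWrite the corrected final answer:"
    )


if __name__ == "__main__":
    workspace = GlobalWorkspace()
    question = "How many moons does Mars have, and what are they called?"
    draft = solve_step(question, workspace)
    print(jury_refine(question, draft, workspace))
```

In this sketch the agents never talk to each other directly; everything flows through the shared workspace, which mirrors the Global Workspace Theory inspiration described in the paper.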

Why it matters?

This research is significant because it aims to improve how AI systems reason about complex problems, making them more reliable and effective in real-world scenarios. By enhancing the capabilities of LLMs through Sibyl, we could see advancements in various fields where detailed reasoning is crucial, such as healthcare, law, and scientific research.

Abstract

Existing agents based on large language models (LLMs) demonstrate robust problem-solving capabilities by integrating LLMs' inherent knowledge, strong in-context learning and zero-shot capabilities, and the use of tools combined with intricately designed LLM invocation workflows by humans. However, these agents still exhibit shortcomings in long-term reasoning and underuse the potential of existing tools, leading to noticeable deficiencies in complex real-world reasoning scenarios. To address these limitations, we introduce Sibyl, a simple yet powerful LLM-based agent framework designed to tackle complex reasoning tasks by efficiently leveraging a minimal set of tools. Drawing inspiration from Global Workspace Theory, Sibyl incorporates a global workspace to enhance the management and sharing of knowledge and conversation history throughout the system. Furthermore, guided by Society of Mind Theory, Sibyl implements a multi-agent debate-based jury to self-refine the final answers, ensuring a comprehensive and balanced approach. This approach aims to reduce system complexity while expanding the scope of solvable problems, from matters typically resolved by humans in minutes to those requiring hours or even days, thus facilitating a shift from System-1 to System-2 thinking. Sibyl has been designed with a focus on scalability and ease of debugging by incorporating the concept of reentrancy from functional programming from its inception, with the aim of seamless and low-effort integration into other LLM applications to improve their capabilities. Our experimental results on the GAIA benchmark test set reveal that the Sibyl agent instantiated with GPT-4 achieves state-of-the-art performance with an average score of 34.55%, compared to other agents based on GPT-4. We hope that Sibyl can inspire more reliable and reusable LLM-based agent solutions to address complex real-world reasoning tasks.
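The abstract's mention of reentrancy from functional programming can be read, roughly, as making each agent step a pure function from an immutable state to a new state, so a run can be paused, replayed, or resumed from any intermediate point while debugging. The sketch below illustrates that reading only; `AgentState` and `step` are hypothetical names, not part of Sibyl's published API.

```python
# Illustrative sketch of a "reentrant" agent step, assuming one plausible reading
# of the paper's reentrancy idea: pure state-in, state-out transitions.

from dataclasses import dataclass, replace
from typing import Tuple


@dataclass(frozen=True)
class AgentState:
    task: str
    history: Tuple[str, ...] = ()
    done: bool = False


def step(state: AgentState, observation: str) -> AgentState:
    """Pure transition: no hidden mutation, so calling it twice with the same
    inputs yields the same output, and earlier states stay inspectable."""
    new_history = state.history + (observation,)
    finished = observation.startswith("FINAL:")
    return replace(state, history=new_history, done=finished)


# Usage: every intermediate state can be kept and re-run from, which makes
# failures reproducible and the agent easier to embed in a larger application.
s0 = AgentState(task="Summarize the GAIA benchmark result")
s1 = step(s0, "Looked up the paper")
s2 = step(s1, "FINAL: Sibyl with GPT-4 scores 34.55% on the GAIA test set")
print(s2.done, s2.history)
```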