CORAL: Towards Autonomous Multi-Agent Evolution for Open-Ended Discovery

Ao Qu, Han Zheng, Zijian Zhou, Yihao Yan, Yihong Tang, Shao Yong Ong, Fenglu Hong, Kaichen Zhou, Chonghe Jiang, Minwei Kong, Jiacheng Zhu, Xuan Jiang, Sirui Li, Cathy Wu, Bryan Kian Hsiang Low, Jinhua Zhao, Paul Pu Liang

2026-04-03

Summary

This paper introduces CORAL, a new system that lets large language models (LLMs) automatically improve themselves over time, especially on complex problems that require extensive trial and error and the ability to build on past knowledge.

What's the problem?

Current methods for evolving LLMs rely on pre-set rules and instructions that humans create. This limits how independent the LLMs can be and how well they can truly explore and discover new solutions on their own. It's like giving a student very specific instructions instead of letting them figure things out through experimentation and collaboration.

What's the solution?

CORAL tackles this by creating a system where multiple LLM 'agents' work together. These agents continuously explore, reflect on what they've learned, and share findings through a shared persistent memory. The system runs for a long time, allowing the agents to build on each other's progress, with periodic 'heartbeat' check-ins to keep them on track. It also includes practical safeguards to prevent things from going wrong, such as keeping each agent in its own isolated workspace, separating the evaluators that score solutions from the agents themselves, and managing resources. Essentially, it's a self-improving team of AI agents.
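The moving parts described above — long-running agents, a shared persistent memory, asynchronous execution, heartbeat signals, and isolated per-agent workspaces — can be illustrated with a small toy sketch. Everything here (the class names, the `evaluate` objective, the mutation scheme) is illustrative and not taken from the CORAL codebase; it only mimics the idea of several asynchronous agents hill-climbing a score while reusing each other's best results.

```python
import random
import threading

class SharedMemory:
    """Thread-safe persistent store that agents read from and write to."""
    def __init__(self):
        self._lock = threading.Lock()
        self.best = None   # (score, solution), lower score is better
        self.notes = []    # free-form findings agents leave for each other

    def report(self, agent_id, solution, score, note):
        with self._lock:
            self.notes.append((agent_id, note))
            if self.best is None or score < self.best[0]:
                self.best = (score, solution)

    def best_known(self):
        with self._lock:
            return self.best

def evaluate(solution):
    # Toy stand-in for a separated evaluator: a "cycles"-like objective
    # with its minimum (1103) at solution == 50.
    return (solution - 50) ** 2 + 1103

def agent_loop(agent_id, memory, steps, heartbeat):
    rng = random.Random(agent_id)          # each agent has its own state
    solution = rng.randint(0, 100)         # its own isolated "workspace"
    for step in range(steps):
        heartbeat[agent_id] = step         # liveness signal for health checks
        best = memory.best_known()
        if best is not None and best[0] < evaluate(solution):
            solution = best[1]             # reuse knowledge shared by others
        candidate = solution + rng.choice([-3, -1, 1, 3])
        if evaluate(candidate) < evaluate(solution):
            solution = candidate           # keep only improving mutations
        memory.report(agent_id, solution, evaluate(solution),
                      f"step {step}: score {evaluate(solution)}")

def run(num_agents=4, steps=200):
    memory = SharedMemory()
    heartbeat = {}
    threads = [threading.Thread(target=agent_loop,
                                args=(i, memory, steps, heartbeat))
               for i in range(num_agents)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return memory.best_known()
```

Because the agents run in separate threads and consult the shared memory before each step, a discovery by one agent is quickly adopted by the others — a simplified version of the knowledge reuse the paper credits for CORAL's gains.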

Why it matters?

This research is important because it shows that giving LLMs more autonomy and letting them collaborate can lead to significantly better results on difficult problems. The experiments demonstrate that CORAL outperforms existing methods, achieving 3-10 times higher improvement rates while using far fewer evaluations. This suggests a path toward AI systems that can genuinely learn and discover new things without constant human intervention, which is a big step toward more advanced and capable AI.

Abstract

Large language model (LLM)-based evolution is a promising approach for open-ended discovery, where progress requires sustained search and knowledge accumulation. Existing methods still rely heavily on fixed heuristics and hard-coded exploration rules, which limit the autonomy of LLM agents. We present CORAL, the first framework for autonomous multi-agent evolution on open-ended problems. CORAL replaces rigid control with long-running agents that explore, reflect, and collaborate through shared persistent memory, asynchronous multi-agent execution, and heartbeat-based interventions. It also provides practical safeguards, including isolated workspaces, evaluator separation, resource management, and agent session and health management. Evaluated on diverse mathematical, algorithmic, and systems optimization tasks, CORAL sets new state-of-the-art results on 10 tasks, achieving 3-10 times higher improvement rates with far fewer evaluations than fixed evolutionary search baselines across tasks. On Anthropic's kernel engineering task, four co-evolving agents improve the best known score from 1363 to 1103 cycles. Mechanistic analyses further show how these gains arise from knowledge reuse and multi-agent exploration and communication. Together, these results suggest that greater agent autonomy and multi-agent evolution can substantially improve open-ended discovery. Code is available at https://github.com/Human-Agent-Society/CORAL.