GigaEvo: An Open Source Optimization Framework Powered By LLMs And Evolution Algorithms
Valentin Khrulkov, Andrey Galichin, Denis Bashkirov, Dmitry Vinichenko, Oleg Travkin, Roman Alferov, Andrey Kuznetsov, Ivan Oseledets
2025-11-26
Summary
This paper introduces GigaEvo, a new software framework designed to help researchers explore how large language models (LLMs) can be combined with evolutionary algorithms to solve complex problems.
What's the problem?
Recent work, like AlphaEvolve, has shown that using LLMs to guide the process of evolution can lead to impressive results in areas like math and optimization. However, the details of *how* these systems were built aren't fully shared, making it difficult for other scientists to verify the findings or build upon them. It's hard to reproduce the results and experiment with new ideas when you don't know exactly how things work under the hood.
What's the solution?
The authors created GigaEvo, an open-source framework that recreates and expands on the ideas behind AlphaEvolve. It is built from separate, interchangeable parts: maintaining a diverse collection of good solutions (MAP-Elites), evaluating candidates concurrently (asynchronous pipelines), using LLMs to propose changes to solutions (mutation operators), recording how solutions descend from one another (lineage tracking), and evolving several populations in parallel (multi-island strategies). To check that the implementation reproduces published results, they tested GigaEvo on the same challenging problems AlphaEvolve tackled: placing points to avoid small triangles (the Heilbronn triangle problem), packing circles into squares, and the kissing number problem in high dimensions.
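To give a concrete sense of the quality-diversity idea behind MAP-Elites, here is a toy Python sketch (not GigaEvo's actual API, which is documented in the linked repository): candidates are binned by a behavioral descriptor, and each bin keeps only its best-scoring solution. A random perturbation stands in for the LLM-proposed edit.

```python
import random
from dataclasses import dataclass

# Toy MAP-Elites sketch: keep one "elite" per behavior-descriptor cell.
# All names here are illustrative; they are not GigaEvo's actual API.

@dataclass
class Candidate:
    params: list[float]
    fitness: float

def evaluate(params: list[float]) -> tuple[float, tuple[int, int]]:
    """Toy objective: fitness is -sum of squares; descriptor bins the first two coordinates."""
    fitness = -sum(p * p for p in params)
    descriptor = (int(params[0] * 4), int(params[1] * 4))
    return fitness, descriptor

def mutate(params: list[float]) -> list[float]:
    """Random perturbation standing in for an LLM-proposed edit."""
    return [min(1.0, max(0.0, p + random.gauss(0.0, 0.1))) for p in params]

archive: dict[tuple[int, int], Candidate] = {}

def try_insert(params: list[float]) -> None:
    """Insert a candidate if its cell is empty or it beats the current elite."""
    fitness, descriptor = evaluate(params)
    if descriptor not in archive or fitness > archive[descriptor].fitness:
        archive[descriptor] = Candidate(params, fitness)

# Seed the archive with random candidates, then iterate: pick a random elite,
# mutate it, and re-insert the child if it improves (or fills) its cell.
for _ in range(20):
    try_insert([random.random() for _ in range(4)])
for _ in range(500):
    parent = random.choice(list(archive.values()))
    try_insert(mutate(parent.params))

print(f"{len(archive)} cells filled; best fitness: "
      f"{max(c.fitness for c in archive.values()):.4f}")
```

In a system like GigaEvo, the perturbation step would instead be an LLM-proposed program edit and evaluation would run through an asynchronous pipeline, but this keep-the-best-per-cell archive is the core of the quality-diversity approach.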
Why it matters?
GigaEvo is important because it provides a clear, well-documented, and readily available tool for researchers interested in LLM-guided evolution. By making the implementation details open and easy to modify, it encourages further investigation and innovation in this exciting field, ultimately accelerating progress in using AI to solve difficult problems.
Abstract
Recent advances in LLM-guided evolutionary computation, particularly AlphaEvolve (Novikov et al., 2025; Georgiev et al., 2025), have demonstrated remarkable success in discovering novel mathematical constructions and solving challenging optimization problems. However, the high-level descriptions in published work leave many implementation details unspecified, hindering reproducibility and further research. In this report, we present GigaEvo, an extensible open-source framework that enables researchers to study and experiment with hybrid LLM-evolution approaches inspired by AlphaEvolve. Our system provides modular implementations of key components: MAP-Elites quality-diversity algorithms, asynchronous DAG-based evaluation pipelines, LLM-driven mutation operators with insight generation and bidirectional lineage tracking, and flexible multi-island evolutionary strategies. In order to assess reproducibility and validate our implementation, we evaluate GigaEvo on challenging problems from the AlphaEvolve paper: Heilbronn triangle placement, circle packing in squares, and high-dimensional kissing numbers. The framework emphasizes modularity, concurrency, and ease of experimentation, enabling rapid prototyping through declarative configuration. We provide detailed descriptions of system architecture, implementation decisions, and experimental methodology to support further research in LLM-driven evolutionary methods. The GigaEvo framework and all experimental code are available at https://github.com/AIRI-Institute/gigaevo-core.
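As a rough illustration of how LLM-driven mutation and asynchronous evaluation can be combined in a single loop, here is a generic asyncio sketch; `propose_edit`, `run_evaluation`, and the tournament-selection details are assumptions for illustration only, not GigaEvo's interfaces.

```python
import asyncio
import random

# Generic sketch of an asynchronous evolve-and-evaluate loop.
# propose_edit and run_evaluation are hypothetical stand-ins: in a real system
# the former would query an LLM for a program edit and the latter would execute
# the candidate inside an evaluation pipeline.

async def propose_edit(program: str) -> str:
    """Placeholder for an LLM call that rewrites a candidate program."""
    await asyncio.sleep(0.01)                       # simulate network latency
    return program + f"  # edit {random.randint(0, 999)}"

async def run_evaluation(program: str) -> float:
    """Placeholder for running the candidate and measuring its score."""
    await asyncio.sleep(0.01)                       # simulate compute time
    return random.random()

async def worker(population: list[tuple[str, float]], steps: int) -> None:
    """One worker: tournament-select a parent, mutate, evaluate, add the child."""
    for _ in range(steps):
        sample = random.sample(population, k=min(3, len(population)))
        parent, _ = max(sample, key=lambda entry: entry[1])
        child = await propose_edit(parent)          # LLM-driven mutation
        score = await run_evaluation(child)         # asynchronous evaluation
        population.append((child, score))

async def main() -> None:
    population: list[tuple[str, float]] = [("def solve():\n    pass", 0.0)]
    # Several workers mutate and evaluate concurrently against a shared population.
    await asyncio.gather(*(worker(population, steps=10) for _ in range(4)))
    best_program, best_score = max(population, key=lambda entry: entry[1])
    print(f"evaluated {len(population) - 1} candidates; best score {best_score:.3f}")

asyncio.run(main())
```

Running several such workers concurrently keeps LLM calls and candidate evaluations overlapping in time, which is the motivation behind asynchronous pipelines and multi-island strategies described in the abstract.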