POLCA: Stochastic Generative Optimization with LLM

Xuanfei Ren, Allen Nie, Tengyang Xie, Ching-An Cheng

2026-03-17

Summary

This paper introduces a new method, called POLCA, for automatically finding the best settings for complex systems like those powered by large language models. It's like trying to tune a radio to get the clearest signal, but instead of a knob, you're adjusting the instructions given to an AI.

What's the problem?

Currently, improving these complex systems – whether it's crafting the perfect prompt for an AI chatbot or designing a multi-step AI agent – is a slow, manual process that requires a lot of trial and error. The challenge is that these systems are unpredictable; small changes can have big effects, and getting feedback on how well a change works isn't always straightforward or consistent. It's hard to efficiently search for the best possible configuration when things are so uncertain.

What's the solution?

POLCA tackles this problem by treating the search for the best settings as an optimization problem in which a language model itself proposes improvements, learning from both numerical scores (such as how well the system performs on a task) and textual feedback. It tracks candidate solutions and their evaluation histories in a priority queue, focusing effort on the candidates most likely to succeed while still exploring alternatives. To keep the search efficient and robust, an ε-Net mechanism maintains diversity among candidates, and an LLM summarizer performs meta-learning over past trials. The authors also prove mathematically that POLCA converges to near-optimal solutions despite the noise and uncertainty.
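To make the priority-queue idea concrete, here is a minimal sketch of tracking noisy candidate evaluations and ranking candidates by mean reward plus an exploration bonus. This is illustrative only: the candidate names, scoring function, and UCB-style bonus are assumptions for the sketch, not the authors' actual implementation.

```python
import heapq
import math
import random

class CandidatePool:
    """Priority queue of candidate solutions, keyed by mean observed reward
    plus an exploration bonus, so noisy evaluations are averaged rather than
    trusted from a single sample (a simplification of POLCA's queue)."""

    def __init__(self, explore=0.5):
        self.explore = explore
        self.history = {}       # candidate -> list of noisy rewards
        self.total_evals = 0

    def record(self, candidate, reward):
        """Store one noisy evaluation of a candidate."""
        self.history.setdefault(candidate, []).append(reward)
        self.total_evals += 1

    def priority(self, candidate):
        """Mean reward plus a UCB-style bonus for rarely-tried candidates."""
        rewards = self.history[candidate]
        mean = sum(rewards) / len(rewards)
        bonus = self.explore * math.sqrt(
            math.log(self.total_evals + 1) / len(rewards)
        )
        return mean + bonus

    def best(self):
        """Pop the highest-priority candidate (negate for a min-heap)."""
        heap = [(-self.priority(c), c) for c in self.history]
        heapq.heapify(heap)
        return heap[0][1]

random.seed(0)
pool = CandidatePool()
# Two hypothetical prompt candidates with different true quality,
# each evaluated several times under Gaussian noise.
for candidate, true_score in [("prompt_a", 0.3), ("prompt_b", 0.8)]:
    for _ in range(5):
        pool.record(candidate, true_score + random.gauss(0, 0.05))

print(pool.best())
```

With equal evaluation counts, averaging over repeated noisy runs lets the genuinely better candidate rise to the top of the queue; the exploration bonus keeps under-sampled candidates from being discarded too early.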

Why it matters?

This work is important because it automates a process that was previously very time-consuming and required significant expertise. By making it easier to optimize complex AI systems, POLCA can lead to better performance, faster development cycles, and broader access to powerful AI technologies. It shows a way to leverage AI to improve AI, which is a crucial step towards building more capable and reliable systems.

Abstract

Optimizing complex systems, ranging from LLM prompts to multi-turn agents, traditionally requires labor-intensive manual iteration. We formalize this challenge as a stochastic generative optimization problem where a generative language model acts as the optimizer, guided by numerical rewards and text feedback to discover the best system. We introduce Prioritized Optimization with Local Contextual Aggregation (POLCA), a scalable framework designed to handle stochasticity in optimization -- such as noisy feedback, sampling minibatches, and stochastic system behaviors -- while effectively managing the unconstrained expansion of solution space. POLCA maintains a priority queue to manage the exploration-exploitation tradeoff, systematically tracking candidate solutions and their evaluation histories. To enhance efficiency, we integrate an ε-Net mechanism to maintain parameter diversity and an LLM Summarizer to perform meta-learning across historical trials. We theoretically prove that POLCA converges to near-optimal candidate solutions under stochasticity. We evaluate our framework on diverse benchmarks, including τ-bench, HotpotQA (agent optimization), VeriBench (code translation) and KernelBench (CUDA kernel generation). Experimental results demonstrate that POLCA achieves robust, sample and time-efficient performance, consistently outperforming state-of-the-art algorithms in both deterministic and stochastic problems. The codebase for this work is publicly available at https://github.com/rlx-lab/POLCA.