Evolving Prompts In-Context: An Open-ended, Self-replicating Perspective
Jianyu Wang, Zhiqiang Hu, Lidong Bing
2025-07-01
Summary
This paper introduces PromptQuine, a framework that automatically improves the prompts given to large language models by using an evolutionary search process to prune away unhelpful parts while keeping the most effective ones.
What's the problem?
Crafting good prompts for language models is difficult, especially in low-data settings where only a few examples are available to guide the model, which often leads to poor performance and wasted resources.
What's the solution?
PromptQuine mimics natural evolution: starting from prompts built from randomly sampled in-context demonstrations, it iteratively prunes tokens and keeps variants that improve task performance, yielding shorter and stronger prompts. This improves the model's responses even when little data is available, and the search remains efficient and scales well as more examples are added.
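The core loop can be illustrated with a minimal hill-climbing sketch. Everything here is illustrative: `fitness` is a toy stand-in for real task accuracy (the paper scores prompts on actual LLM task performance), and the required-token set and demo prompt are invented for the example.

```python
import random

def fitness(tokens, required=("Answer", "concisely", ":")):
    # Toy stand-in for task accuracy: reward keeping the "useful"
    # tokens while penalizing prompt length.
    kept = sum(tok in tokens for tok in required)
    return kept * 10 - len(tokens) * 0.1

def prune_step(tokens, rng):
    # Propose a child by deleting one random token (a deletion "mutation").
    if len(tokens) <= 1:
        return tokens
    i = rng.randrange(len(tokens))
    return tokens[:i] + tokens[i + 1:]

def promptquine_sketch(prompt, generations=200, seed=0):
    # Greedy evolutionary pruning: accept a child only if it scores
    # at least as well as the current best prompt.
    rng = random.Random(seed)
    best = prompt.split()
    for _ in range(generations):
        child = prune_step(best, rng)
        if fitness(child) >= fitness(best):
            best = child
    return " ".join(best)

demo = "Some random filler words here . Answer concisely : the label is"
print(promptquine_sketch(demo))  # filler tokens get pruned, useful ones survive
```

With a real fitness function (held-out task accuracy under the target LLM), the same deletion-and-select loop searches for compact, high-performing prompts.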
Why does it matter?
It makes it easier to get better results from language models with less manual prompt engineering, especially when data is limited or expensive to collect.
Abstract
A novel prompt optimization framework, PromptQuine, improves LLM performance by pruning random demonstrations into effective prompts using evolutionary search in low-data regimes.