PromptBridge: Cross-Model Prompt Transfer for Large Language Models

Yaxuan Wang, Quan Liu, Zhenting Wang, Zichao Li, Wei Wei, Yang Liu, Yujia Bao

2025-12-02

Summary

This paper focuses on the issue of 'model drifting,' which happens when prompts designed for one large language model (LLM) don't work well on other LLMs. Because the LLM landscape evolves rapidly, and people often switch between models based on cost or performance, having to rewrite prompts for each new model is a significant and recurring burden.

What's the problem?

The core problem is that prompts are very sensitive to the specific LLM they're used with. A prompt that works great with, say, GPT-4, might give terrible results with Llama 2. This 'model drifting' means that when you switch LLMs, you usually have to spend a lot of time and effort re-optimizing your prompts for the new model, which is inefficient and costly. The paper demonstrates this is a widespread and significant issue.

What's the solution?

The researchers developed a system called PromptBridge to solve this. It doesn't require any new training of the LLMs themselves. Instead, PromptBridge learns how prompts need to be *changed* when moving between different models. It starts by figuring out the best prompts for both the original and new models on a small set of example tasks. Then, it learns a 'mapping' that can automatically translate prompts from the original model to the new one, even for tasks it hasn't seen before. This allows for effective prompt transfer without constant re-optimization.
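The calibrate-then-map idea can be sketched in code. This is a hypothetical illustration, not the paper's implementation: real LLM calls are replaced with stand-in string operations, and names like `optimize_prompt` and `learn_mapping` are illustrative only.

```python
def optimize_prompt(task, model):
    """Stand-in for per-model prompt optimization (MAP-RPE in the paper).
    Here we just tag the prompt with the model name so the sketch runs."""
    return f"[{model}] Solve: {task}"

def learn_mapping(prompt_pairs):
    """Learn a cross-model prompt mapping from calibrated (source, target) pairs.
    This toy version simply infers the target model's tag from the examples."""
    _, target_example = prompt_pairs[0]
    target_tag = target_example.split("]")[0] + "]"
    def mapping(source_prompt):
        body = source_prompt.split("] ", 1)[1]
        return f"{target_tag} {body}"
    return mapping

# Calibration: optimize prompts for both models on a few alignment tasks.
alignment_tasks = ["add two numbers", "reverse a string"]
pairs = [(optimize_prompt(t, "source-llm"), optimize_prompt(t, "target-llm"))
         for t in alignment_tasks]

# Learn the mapping once, then reuse it for unseen tasks.
bridge = learn_mapping(pairs)

# Test time: transfer a source-model prompt for a task never seen in calibration.
unseen = optimize_prompt("sort a list", "source-llm")
print(bridge(unseen))  # -> "[target-llm] Solve: sort a list"
```

The key design point is that the mapping is learned once from a small calibration set and then applied to any new source-model prompt, avoiding per-task re-optimization on the target model.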

Why it matters?

This work is important because it makes using LLMs much more practical. Switching between models becomes easier and less expensive, as you don't need to completely rework your prompts every time. This is especially valuable in complex applications like coding or automated workflows where prompts are a critical part of the system. It reduces the effort needed to take advantage of new and improved LLMs as they become available.

Abstract

Large language models (LLMs) underpin applications in code generation, mathematical reasoning, and agent-based workflows. In practice, systems access LLMs via commercial APIs or open-source deployments, and the model landscape (e.g., GPT, Claude, Llama) evolves rapidly. This rapid evolution forces frequent model switches driven by capability, cost, deployment constraints, and privacy. Yet prompts are highly model-sensitive: reusing a prompt engineered for one model on another often yields substantially worse performance than a prompt optimized for the target model. We term this phenomenon Model Drifting. Through extensive empirical analysis across diverse LLM configurations, we show that model drifting is both common and severe. To address this challenge, we introduce PromptBridge, a training-free framework that preserves prompt effectiveness under model switches, enabling cross-model prompt transfer without costly per-task or per-model re-optimization. PromptBridge requires only a small set of alignment tasks for calibration. It first applies Model-Adaptive Reflective Prompt Evolution (MAP-RPE) to obtain task- and model-specific optimal prompts via iterative reflective refinement and quantitative evaluation. Using the resulting calibrated prompt pairs for the source and target models, PromptBridge learns a cross-model prompt mapping. At test time, i.e., for an unseen task, given a source-model prompt, this mapping directly produces an optimized prompt for the target model. Experiments in single-agent and multi-agent settings show that PromptBridge consistently improves downstream accuracy while reducing migration effort. The code will be available soon.
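The MAP-RPE calibration step the abstract describes (iterative reflective refinement plus quantitative evaluation) can be sketched as a propose-score-keep loop. This is a toy sketch under loose assumptions: `propose_revisions` and `score` are stand-ins for the paper's reflective proposer and model-based evaluator, not real components.

```python
def propose_revisions(prompt):
    """Stand-in for reflective refinement: generate candidate rewrites."""
    return [prompt + " Think step by step.",
            prompt + " Answer concisely."]

def score(prompt, model):
    """Stand-in for quantitative evaluation on the target model.
    This toy scorer just rewards longer, more specific prompts."""
    return len(prompt)

def map_rpe(initial_prompt, model, iterations=3):
    """Keep the best-scoring candidate across a few refinement rounds."""
    best, best_score = initial_prompt, score(initial_prompt, model)
    for _ in range(iterations):
        for candidate in propose_revisions(best):
            s = score(candidate, model)
            if s > best_score:
                best, best_score = candidate, s
    return best

refined = map_rpe("Solve the task.", "target-llm")
print(refined)
```

In the framework described above, running this loop for both the source and target models on the alignment tasks yields the calibrated prompt pairs from which the cross-model mapping is learned.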