φ-Decoding: Adaptive Foresight Sampling for Balanced Inference-Time Exploration and Exploitation

Fangzhi Xu, Hang Yan, Chang Ma, Haiteng Zhao, Jun Liu, Qika Lin, Zhiyong Wu

2025-03-20

Summary

This paper introduces φ-Decoding, a decoding strategy that helps AI language models reason more effectively by balancing exploration and exploitation during text generation, without any additional training.

What's the problem?

When AI language models search for the best reasoning steps at inference time, they face a trade-off: exploring too many candidate paths wastes computation without converging, while exploring too few means committing early and missing better options.

What's the solution?

The researchers created a method called φ-Decoding, which simulates future reasoning steps (foresight sampling) to estimate how promising each candidate step is, then prunes the less promising candidates, both across the search width and along the search depth, so computation is focused on the best paths.

Why it matters?

This work matters because it lets AI language models produce higher-quality reasoning at lower inference cost, and the approach generalizes across different models and computing budgets.

Abstract

Inference-time optimization scales computation to derive deliberate reasoning steps for effective performance. While previous search-based strategies address the short-sightedness of auto-regressive generation, the vast search space leads to excessive exploration and insufficient exploitation. To strike an efficient balance to derive the optimal step, we frame the decoding strategy as foresight sampling, leveraging simulated future steps to obtain globally optimal step estimation. Built on it, we propose a novel decoding strategy, named phi-Decoding. To provide a precise and expressive estimation of step value, phi-Decoding approximates two distributions via foresight and clustering. Sampling from the joint distribution, the optimal steps can be selected for exploitation. To support adaptive computation allocation, we propose in-width and in-depth pruning strategies, featuring a light-weight solution to achieve inference efficiency. Extensive experiments across seven benchmarks show phi-Decoding outperforms strong baselines in both performance and efficiency. Additional analysis demonstrates its generalization across various LLMs and scalability across a wide range of computing budgets. The code will be released at https://github.com/xufangzhi/phi-Decoding, and the open-source PyPI package is coming soon.
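To make the abstract's pipeline concrete, here is a minimal, self-contained sketch of one φ-Decoding step. All names (`phi_decoding_step`, `foresight_fn`, the softmax "advantage" distribution, and the answer-frequency proxy for the paper's clustering step) are illustrative assumptions, not the authors' actual implementation; in the real method, foresight rollouts come from the LLM itself.

```python
import math
from collections import Counter

def phi_decoding_step(candidates, foresight_fn, temperature=1.0):
    """Select the next reasoning step via foresight sampling (sketch).

    candidates: candidate step strings (hypothetical inputs).
    foresight_fn: maps a candidate to (score, simulated_final_answer),
    standing in for rolling the LLM forward a few steps.
    """
    scored = [foresight_fn(c) for c in candidates]
    scores = [s for s, _ in scored]
    answers = [a for _, a in scored]

    # Distribution 1 (foresight): softmax over simulated-future scores.
    mx = max(scores)
    exps = [math.exp((s - mx) / temperature) for s in scores]
    z = sum(exps)
    p_foresight = [e / z for e in exps]

    # Distribution 2 (clustering proxy): how often each simulated final
    # answer recurs across foresight paths -- agreement suggests value.
    counts = Counter(answers)
    p_cluster = [counts[a] / len(answers) for a in answers]

    # Joint distribution: elementwise product, renormalized.
    joint = [pf * pc for pf, pc in zip(p_foresight, p_cluster)]
    zj = sum(joint)
    joint = [j / zj for j in joint]

    # In-width pruning sketch: drop candidates below the uniform baseline,
    # then exploit the best surviving step.
    threshold = 1.0 / len(candidates)
    kept = [i for i, p in enumerate(joint) if p >= threshold]
    best = max(kept, key=lambda i: joint[i])
    return candidates[best], joint

# Toy foresight function: candidate "b" scores highest and its simulated
# answer "y" agrees with another path, so it should be selected.
def toy_foresight(step):
    table = {"a": (0.2, "x"), "b": (0.9, "y"), "c": (0.8, "y")}
    return table[step]

best, joint = phi_decoding_step(["a", "b", "c"], toy_foresight)
print(best)  # "b": high foresight score and cluster agreement
```

In-depth pruning (stopping foresight rollouts early when a path is clearly dominated) is omitted here for brevity; the sketch only illustrates how the two distributions combine into a single step-selection signal.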