Feedback Friction: LLMs Struggle to Fully Incorporate External Feedback

Dongwei Jiang, Alvin Zhang, Andrew Wang, Nicholas Andrews, Daniel Khashabi

2025-06-16

Summary

This paper studies a problem in large language models (LLMs) called feedback friction: even when a model receives clear, near-perfect feedback about its mistakes, it fails to fully incorporate that feedback into its next attempt. The researchers tested this by having a model attempt a problem, giving it high-quality feedback on what it got wrong, and letting it try again in a repeated loop; even under these ideal conditions, the models still fell short of fully correcting themselves.

What's the problem?

The problem is that when these large AI models make mistakes and receive feedback explaining what went wrong, they often don't learn from it as expected. Instead, they keep making similar mistakes or improve only partially, which limits how much they can get better on their own.

What's the solution?

The researchers experimented with ways to help the models use feedback better by pushing them to try different answers across attempts, for example by raising the sampling temperature over time or by rejecting answers the model had already given. These strategies helped somewhat by encouraging the model to explore new solutions, but they did not eliminate feedback friction or bring the models to their full potential.
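The two mitigation strategies mentioned above can be sketched together in one retry loop. This is our own illustrative sketch, not the paper's implementation: `retry_with_exploration` and `toy_generate` are hypothetical names, and the toy generator stands in for an LLM that keeps repeating the same wrong answer at low temperature.

```python
import random

def retry_with_exploration(generate, is_correct, max_rounds=8, seed=0):
    """Sketch of two sampling-based mitigations (names are ours):
    raise the sampling temperature each round, and reject answers the
    model has already tried, to push it toward new attempts."""
    rng = random.Random(seed)
    seen = set()
    temperature = 0.2
    for _ in range(max_rounds):
        answer = generate(temperature, rng)
        if answer in seen:            # reject repeats of earlier answers
            temperature += 0.3        # ...and explore more aggressively
            continue
        seen.add(answer)
        if is_correct(answer):
            return answer
        temperature += 0.3            # widen exploration after a miss
    return None

# Toy "model": at low temperature it always emits the same wrong answer;
# only at high temperature does it sample more broadly.
def toy_generate(temperature, rng):
    if temperature < 1.0:
        return "wrong"
    return rng.choice(["wrong", "right"])

result = retry_with_exploration(toy_generate, lambda a: a == "right")
```

As in the paper, the exploration tricks help the toy model escape its repeated mistake, but only indirectly: they do not make the model actually understand the feedback, which is why the improvement is partial.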

Why it matters?

This matters because if AI models can't fully learn from feedback, it becomes harder to improve them quickly and reliably, especially on complicated tasks like math or multi-step reasoning. Understanding and reducing feedback friction is important so that future AI can get better at fixing its own mistakes and adapting without a lot of extra training.

Abstract

LLMs show resistance to feedback, termed feedback friction, even under ideal conditions, and sampling-based strategies only partially mitigate this issue.