
The Y-Combinator for LLMs: Solving Long-Context Rot with λ-Calculus

Amartya Roy, Rasul Tutunov, Xiaotong Ji, Matthieu Zimmer, Haitham Bou-Ammar

2026-03-23


Summary

This paper introduces a new way to help large language models (LLMs) handle very long inputs, which remain a key limitation for these models.

What's the problem?

LLMs are great at reasoning, but they struggle with very long inputs because they have a limited 'context window' – they can only 'remember' a certain amount of text at a time. Existing approaches, called Recursive Language Models (RLMs), break long problems into smaller subproblems, but they do this by letting the model write its own arbitrary control code for how to proceed. This is risky because it is hard to predict what that code will do, to verify that it is correct, or even to guarantee that it will finish.

What's the solution?

The researchers developed a more structured system called λ-RLM. Instead of letting the model write its own instructions, it uses a pre-defined library of safe, pre-verified 'building blocks' (combinators) grounded in a mathematical system called lambda calculus. Think of it like LEGO bricks: the model can combine these blocks in specific ways, but it cannot invent new ones, and the neural model is called only on small, bounded leaf subproblems. This makes the process much more predictable and controllable, and it focuses the model's reasoning power on the actual problem rather than on figuring out *how* to orchestrate the solution. In effect, the reasoning process becomes a clear, step-by-step functional program.
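To make the 'building blocks' idea concrete, here is a minimal sketch of a split/map/reduce combinator pipeline over a long input. All names (`split`, `map_leaves`, `reduce_answers`, `lambda_rlm`) and the toy keyword-counting leaf task are illustrative assumptions, not the paper's actual API; the point is only that the control flow is a fixed, pre-verified program and the model-like step is confined to bounded leaves.

```python
from typing import Callable, List

def split(text: str, chunk_size: int) -> List[str]:
    """Deterministic combinator: partition a long input into bounded chunks."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def map_leaves(solve: Callable[[str], str], chunks: List[str]) -> List[str]:
    """Apply the (neural) leaf solver to each bounded subproblem."""
    return [solve(chunk) for chunk in chunks]

def reduce_answers(combine: Callable[[str, str], str], answers: List[str]) -> str:
    """Fold partial answers into one result with a fixed combinator."""
    result = answers[0]
    for answer in answers[1:]:
        result = combine(result, answer)
    return result

def lambda_rlm(text: str, solve, combine, chunk_size: int) -> str:
    # Control flow is a fixed functional program: the number of leaf calls
    # is known up front, so termination and cost are predictable.
    return reduce_answers(combine, map_leaves(solve, split(text, chunk_size)))

# Toy stand-in for an LLM leaf call: count a keyword within one chunk.
answer = lambda_rlm(
    "needle haystack " * 500,          # 8,000-character "long context"
    solve=lambda chunk: str(chunk.count("needle")),
    combine=lambda a, b: str(int(a) + int(b)),
    chunk_size=1600,                   # divides evenly, so no split words
)
print(answer)  # → 500
```

In a real system the `solve` lambda would be a bounded LLM call, but nothing about the control flow would change: the model supplies answers at the leaves while the combinators own the recursion.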

Why it matters?

This work is important because it makes long-context reasoning with LLMs more reliable and efficient. By using a structured approach, λ-RLM avoids the pitfalls of letting the model generate its own code, delivering better accuracy (up to +21.9 points on average), faster processing (up to 4.1x lower latency), and formal guarantees about how the system will behave. This opens the door to using LLMs for more complex tasks that require processing large amounts of information.

Abstract

LLMs are increasingly used as general-purpose reasoners, but long inputs remain bottlenecked by a fixed context window. Recursive Language Models (RLMs) address this by externalising the prompt and recursively solving subproblems. Yet existing RLMs depend on an open-ended read-eval-print loop (REPL) in which the model generates arbitrary control code, making execution difficult to verify, predict, and analyse. We introduce λ-RLM, a framework for long-context reasoning that replaces free-form recursive code generation with a typed functional runtime grounded in λ-calculus. It executes a compact library of pre-verified combinators and uses neural inference only on bounded leaf subproblems, turning recursive reasoning into a structured functional program with explicit control flow. We show that λ-RLM admits formal guarantees absent from standard RLMs, including termination, closed-form cost bounds, controlled accuracy scaling with recursion depth, and an optimal partition rule under a simple cost model. Empirically, across four long-context reasoning tasks and nine base models, λ-RLM outperforms standard RLM in 29 of 36 model-task comparisons, improves average accuracy by up to +21.9 points across model tiers, and reduces latency by up to 4.1x. These results show that typed symbolic control yields a more reliable and efficient foundation for long-context reasoning than open-ended recursive code generation. The complete implementation of λ-RLM is open-sourced for the community at: https://github.com/lambda-calculus-LLM/lambda-RLM.
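The "closed-form cost bounds" claim can be illustrated with a toy cost model of my own (not the paper's actual analysis): if a fixed program splits an n-token input k ways per level until each leaf holds at most c tokens, both the number of leaf LLM calls and the recursion depth are known before any call is made.

```python
import math

def leaf_calls(n: int, c: int) -> int:
    """Number of bounded leaf subproblems for an n-token input
    with leaf capacity c (toy model, assumed for illustration)."""
    return math.ceil(n / c)

def recursion_depth(n: int, c: int, k: int) -> int:
    """Depth of a balanced k-ary split down to leaves of size <= c."""
    return max(0, math.ceil(math.log(n / c, k)))

# Example: a 1M-token input, 4k-token leaves, 4-way splits.
calls = leaf_calls(1_000_000, 4_000)          # 250 leaf calls
depth = recursion_depth(1_000_000, 4_000, 4)  # ceil(log_4 250) = 4
```

Because the combinators, not the model, own the recursion, these quantities are fixed by the program structure; a free-form REPL loop offers no such guarantee.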