Language Server CLI Empowers Language Agents with Process Rewards

Yifan Zhang, Lanser Contributors

2025-10-28

Summary

This paper introduces Lanser-CLI, a tool designed to make interactions between AI coding assistants and actual code more reliable and predictable. It aims to bridge the gap between what AI *thinks* the code is and what the code *actually* is, reducing errors and improving the safety of automated code changes.

What's the problem?

Current AI coding tools, like large language models, often make mistakes when working with code. They can 'hallucinate' APIs that don't exist or incorrectly identify where to make changes in a file. While language servers provide accurate information about code, they aren't typically used to guide these AI agents in a way that ensures the changes are correct and safe. Essentially, AI tools aren't grounded in the verifiable reality of the code they're manipulating.

What's the solution?

Lanser-CLI acts as an intermediary between AI agents and a language server. It provides a more precise way to pinpoint code locations using a special 'selector' language, and it packages language server responses into consistent 'analysis bundles'. Crucially, it adds safety checks before any code is changed, such as previewing edits and verifying they can be applied cleanly. It also creates a 'process reward' system, giving the AI feedback based on whether its actions align with the language server's understanding of the code, encouraging it to make valid and safe edits.
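To make the 'process reward' idea concrete, here is a minimal, purely illustrative sketch of scoring an edit by the change in language-server diagnostics before and after it is applied. All names here are hypothetical and are not Lanser-CLI's actual API or the paper's exact reward functional.

```python
# Hypothetical sketch: reward an edit by its diagnostic delta.
# None of these names come from Lanser-CLI itself.
from dataclasses import dataclass


@dataclass(frozen=True)
class Diagnostic:
    file: str
    line: int
    severity: str  # "error" or "warning"
    message: str


def process_reward(before: list[Diagnostic], after: list[Diagnostic]) -> float:
    """Positive if the edit removed diagnostics, negative if it added them."""
    def count(diags: list[Diagnostic], sev: str) -> int:
        return sum(1 for d in diags if d.severity == sev)

    error_delta = count(before, "error") - count(after, "error")
    warn_delta = count(before, "warning") - count(after, "warning")
    # Weight errors more heavily than warnings.
    return 1.0 * error_delta + 0.25 * warn_delta


# An edit that fixes one error and introduces one warning nets 0.75.
before = [Diagnostic("a.py", 3, "error", "undefined name 'foo'")]
after = [Diagnostic("a.py", 3, "warning", "unused variable 'foo'")]
print(process_reward(before, after))  # → 0.75
```

Because the reward is computed from machine-checked facts rather than the model's own judgment, it can be recomputed offline from logged diagnostics, which is what makes it replayable.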

Why it matters?

This work is important because it makes AI-powered coding tools more trustworthy and useful. By ensuring that AI agents are working with accurate information and making safe changes, Lanser-CLI can help developers automate more tasks and reduce the risk of introducing errors into their code. It also opens the door to better understanding *why* an AI agent made a particular change, which is crucial for debugging and improving these tools.

Abstract

Large language models routinely hallucinate APIs and mislocalize edits, while language servers compute verified, IDE-grade facts about real code. We present Lanser-CLI, a CLI-first orchestration layer that pins and mediates a Language Server Protocol (LSP) server for coding agents and CI, exposing deterministic, replayable workflows. Our position is that language servers provide not only structural information (definitions, references, types, diagnostics) but also an actionable process reward: machine-checked, step-wise signals that align an agent's planning loop with program reality. In this work, Lanser-CLI contributes: (i) a robust addressing scheme beyond brittle "file:line:col" via a Selector DSL (symbolic, AST-path, and content-anchored selectors) with a principled relocation algorithm; (ii) deterministic Analysis Bundles that normalize Language Server responses and capture environment/capability metadata with stable content hashes; (iii) a safety envelope for mutating operations (rename, code actions) with preview, workspace jails, and Git-aware, transactional apply; and (iv) a process-reward functional derived from Language Server facts (diagnostic deltas, disambiguation confidence, and safe-apply checks) that is computable online and replayable offline. We formalize determinism under frozen snapshots and establish a monotonicity property for the process reward, making it suitable for process supervision and counterfactual analysis. Project Page: https://github.com/yifanzhang-pro/lanser-cli