
Selective Steering: Norm-Preserving Control Through Discriminative Layer Selection

Quy-Anh Dang, Chris Ngo

2026-01-28


Summary

This paper focuses on making large language models (LLMs) safer by controlling their behavior when someone tries to trick them into saying or doing harmful things, a type of manipulation known as an 'adversarial attack'.

What's the problem?

LLMs, even advanced ones, can still be manipulated into generating harmful content. Existing methods to prevent this, called 'activation steering', have issues. Simply adding to or subtracting from the model's internal values requires careful tuning and can easily degrade the model. A newer technique called 'Angular Steering' tries to be more precise, but in practice it changes the overall strength (the norm) of those internal values, which hurts performance, especially in models smaller than about 7 billion parameters.
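To see why this is delicate, here is a small illustrative sketch (the array sizes, variable names, and coefficient are made-up assumptions, not taken from the paper): adding a steering vector to a hidden activation changes the activation's overall strength, which is the kind of distribution shift that can destabilize generation.

```python
import numpy as np

# Toy illustration only: naive activation addition changes the activation's norm.
rng = np.random.default_rng(0)
activation = rng.normal(size=4096)          # a hidden state at one layer (made-up size)
refusal_dir = rng.normal(size=4096)
refusal_dir /= np.linalg.norm(refusal_dir)  # unit-length steering direction

coefficient = 8.0                           # must be tuned per layer / per model
steered = activation + coefficient * refusal_dir

print(np.linalg.norm(activation))  # original norm
print(np.linalg.norm(steered))     # different norm -> distribution shift downstream
```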

What's the solution?

The researchers developed a new method called 'Selective Steering'. It improves on previous techniques in two main ways. First, it uses a mathematically sound way to adjust the model's internal values *without* changing their overall strength, which avoids the instability seen in Angular Steering. Second, it chooses *where* to make these adjustments, applying them only at the layers that are most relevant to the harmful behavior. This targeted approach makes the steering more effective.
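The paper's exact equations are not reproduced in this summary, but the core idea of a norm-preserving rotation can be sketched as follows. The helper below (the directions u and v, the angle, and the dimensions are placeholder assumptions) rotates an activation inside a 2D plane; because rotation is an orthogonal operation, the activation's length stays exactly the same.

```python
import numpy as np

def rotate_in_plane(h, u, v, theta):
    """Rotate activation h by angle theta inside the plane spanned by
    orthonormal directions u and v, leaving the orthogonal part untouched.
    Rotation is an orthogonal map, so ||output|| == ||h||."""
    a, b = h @ u, h @ v                      # coordinates of h inside the plane
    a_new = a * np.cos(theta) - b * np.sin(theta)
    b_new = a * np.sin(theta) + b * np.cos(theta)
    h_perp = h - a * u - b * v               # component of h outside the plane
    return h_perp + a_new * u + b_new * v

# Toy usage with made-up dimensions and random directions (not the paper's setup):
rng = np.random.default_rng(1)
h = rng.normal(size=4096)
u = rng.normal(size=4096); u /= np.linalg.norm(u)
v = rng.normal(size=4096); v -= (v @ u) * u; v /= np.linalg.norm(v)

steered = rotate_in_plane(h, u, v, theta=np.pi / 3)
print(np.isclose(np.linalg.norm(h), np.linalg.norm(steered)))  # True: norm preserved
```

The angle acts as a continuous steering knob: a small rotation nudges the activation toward or away from the steering direction without ever altering its magnitude.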

Why it matters?

This work is important because it provides a more reliable and effective way to control LLMs and prevent them from being exploited for malicious purposes. It allows for safer and more predictable behavior from these powerful AI systems, and it works well even on smaller models where previous methods failed. This means we can build more trustworthy AI applications.

Abstract

Despite significant progress in alignment, large language models (LLMs) remain vulnerable to adversarial attacks that elicit harmful behaviors. Activation steering techniques offer a promising inference-time intervention approach, but existing methods suffer from critical limitations: activation addition requires careful coefficient tuning and is sensitive to layer-specific norm variations, while directional ablation provides only binary control. Recent work on Angular Steering introduces continuous control via rotation in a 2D subspace, but its practical implementation violates norm preservation, causing distribution shift and generation collapse, particularly in models below 7B parameters. We propose Selective Steering, which addresses these limitations through two key innovations: (1) a mathematically rigorous norm-preserving rotation formulation that maintains activation distribution integrity, and (2) discriminative layer selection that applies steering only where feature representations exhibit opposite-signed class alignment. Experiments across nine models demonstrate that Selective Steering achieves 5.5x higher attack success rates than prior methods while maintaining zero perplexity violations and approximately 100% capability retention on standard benchmarks. Our approach provides a principled, efficient framework for controllable and stable LLM behavior modification. Code: https://github.com/knoveleng/steering
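As a rough sketch of what 'opposite-signed class alignment' could mean in practice, the snippet below keeps only the layers where harmful and harmless examples project onto the steering direction with opposite signs. The function name, data layout, and the exact criterion are assumptions for illustration, not the paper's published implementation.

```python
import numpy as np

def select_layers(harmful_acts, harmless_acts, directions):
    """Pick layers where the two classes align with the steering direction
    with opposite signs -- one plausible reading of 'opposite-signed class
    alignment'; the paper's precise criterion may differ.

    harmful_acts, harmless_acts: dicts mapping layer -> (n_examples, d_model) arrays
    directions: dict mapping layer -> unit steering direction of shape (d_model,)
    """
    selected = []
    for layer, d in directions.items():
        mean_harmful = harmful_acts[layer].mean(axis=0) @ d    # avg projection, class A
        mean_harmless = harmless_acts[layer].mean(axis=0) @ d  # avg projection, class B
        if np.sign(mean_harmful) != np.sign(mean_harmless):
            selected.append(layer)
    return selected
```

Restricting the rotation to such layers is what makes the intervention targeted: layers where both classes align the same way are left untouched.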