Decoding Dark Matter: Specialized Sparse Autoencoders for Interpreting Rare Concepts in Foundation Models
Aashiq Muhamed, Mona Diab, Virginia Smith
2024-11-05

Summary
This paper introduces Specialized Sparse Autoencoders (SSAEs), a new method designed to help interpret rare concepts in the representations of foundation models (FMs). The goal is to improve how interpretability tools capture important but infrequently occurring ideas in the data.
What's the problem?
Foundation models encode many concepts, but traditional Sparse Autoencoders (SAEs), trained on broad data, often fail to capture rare concepts because those elusive features contribute little to the overall training objective. This limitation can hinder the model's ability to understand and respond accurately to specific situations or data points that are crucial for tasks like bias detection or nuanced reasoning.
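To make the setup concrete, here is a minimal sketch of the kind of sparse autoencoder being discussed: a ReLU encoder mapping model activations into an overcomplete code, trained to minimize reconstruction error plus an L1 sparsity penalty. The dimensions, initialization, and `l1_coeff` value are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: SAEs are typically overcomplete (d_hidden >> d_model).
d_model, d_hidden = 16, 64
W_enc = rng.normal(0, 0.1, (d_model, d_hidden))
b_enc = np.zeros(d_hidden)
W_dec = rng.normal(0, 0.1, (d_hidden, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    # Encode activations into a sparse nonnegative code, then reconstruct.
    z = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU encoder
    x_hat = z @ W_dec + b_dec               # linear decoder
    return z, x_hat

def sae_loss(x, l1_coeff=1e-3):
    # Per-example reconstruction error plus an L1 penalty on the code,
    # which pushes most code entries to zero (low L_0 sparsity).
    z, x_hat = sae_forward(x)
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=-1))
    sparsity = np.mean(np.sum(np.abs(z), axis=-1))
    return recon + l1_coeff * sparsity

x = rng.normal(size=(8, d_model))  # stand-in for a batch of FM activations
print(sae_loss(x))
```

Because the plain objective averages over all examples, features that appear in only a tiny fraction of the data can be dropped with almost no penalty — which is the failure mode SSAEs target.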
What's the solution?
To address this issue, the authors developed SSAEs, which are trained on specific subdomains so that rare concepts appear often enough to be learned. They provide a practical recipe for training SSAEs: dense retrieval methods select relevant subdomain data, and a training objective called Tilted Empirical Risk Minimization (TERM) improves recall of tail concepts. Experiments on standard metrics, together with a case study on the Bias in Bios dataset, show that SSAEs capture these rare concepts better than general-purpose SAEs.
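The TERM objective mentioned above replaces the plain average loss with a "tilted" log-sum-exp average; for a positive tilt parameter it upweights high-loss examples, which is what helps rare concepts survive training. A minimal sketch of the tilted aggregation (the per-example losses here are made-up numbers, and the tilt values are illustrative):

```python
import numpy as np

def tilted_loss(losses, t):
    # Tilted Empirical Risk Minimization aggregation:
    #   L_tilt(t) = (1/t) * log( mean( exp(t * l_i) ) )
    # As t -> 0 this recovers the ordinary mean of the losses;
    # for t > 0 it shifts weight toward high-loss (tail) examples,
    # approaching max(l_i) as t grows.
    losses = np.asarray(losses, dtype=float)
    m = t * losses.max()  # log-sum-exp shift for numerical stability
    return (m + np.log(np.mean(np.exp(t * losses - m)))) / t

losses = [0.1, 0.2, 5.0]  # hypothetical batch with one rare, hard example
print(tilted_loss(losses, t=0.01))  # close to the plain mean
print(tilted_loss(losses, t=2.0))   # pulled toward the worst-case loss
```

In SSAE training, replacing the mean reconstruction loss with this tilted aggregate means the optimizer cannot ignore the few examples where rare subdomain features are present.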
Why it matters?
This research is important because it enhances the interpretability of large language models, making them more reliable and effective in understanding complex data. By improving how these models capture and interpret rare concepts, SSAEs can lead to better AI systems that are more aware of subtle biases and can provide more accurate responses in critical applications like healthcare, finance, and social justice.
Abstract
Understanding and mitigating the potential risks associated with foundation models (FMs) hinges on developing effective interpretability methods. Sparse Autoencoders (SAEs) have emerged as a promising tool for disentangling FM representations, but they struggle to capture rare, yet crucial concepts in the data. We introduce Specialized Sparse Autoencoders (SSAEs), designed to illuminate these elusive dark matter features by focusing on specific subdomains. We present a practical recipe for training SSAEs, demonstrating the efficacy of dense retrieval for data selection and the benefits of Tilted Empirical Risk Minimization as a training objective to improve concept recall. Our evaluation of SSAEs on standard metrics, such as downstream perplexity and L_0 sparsity, shows that they effectively capture subdomain tail concepts, exceeding the capabilities of general-purpose SAEs. We showcase the practical utility of SSAEs in a case study on the Bias in Bios dataset, where SSAEs achieve a 12.5% increase in worst-group classification accuracy when applied to remove spurious gender information. SSAEs provide a powerful new lens for peering into the inner workings of FMs in subdomains.