Fidelity-Aware Recommendation Explanations via Stochastic Path Integration
Oren Barkan, Yahlly Schein, Yehonatan Elisha, Veronika Bogina, Mikhail Baklanov, Noam Koenigstein
2025-11-25
Summary
This paper introduces a new method, SPINRec, for understanding *why* recommendation systems suggest certain items to users. It focuses on making sure the explanations given actually reflect how the system is making its decisions, a concept called 'explanation fidelity'.
What's the problem?
Currently, it's hard to know whether the explanations produced by recommendation systems are truthful. Existing attribution methods often integrate from a single fixed or unrealistic baseline and don't fully account for all the information the system uses, so their explanations may not reflect the system's actual reasoning. They also struggle with the fact that recommendation data is implicit and incomplete – we only observe what users *did* interact with, not what they *didn't*.
What's the solution?
SPINRec tackles this with 'path integration', a technique from gradient-based attribution (and ultimately inspired by physics). Instead of explaining a recommendation from a single, potentially unrealistic starting point, it samples many plausible user profiles from real user data, integrates along the path from each sampled profile to the actual user, and keeps the attribution path that explains the recommendation most faithfully. It essentially asks 'starting from what kind of user does this recommendation make the most sense?' and uses that comparison to explain the suggestion.
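A minimal sketch of this idea, assuming a finite-difference approximation of integrated gradients and a toy counterfactual check to pick the most faithful baseline. All names here (`integrated_gradients`, `spin_style_attribution`) and the 3-item masking heuristic are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def integrated_gradients(score_fn, x, baseline, steps=32):
    # Approximate the path integral of gradients from baseline to x with a
    # Riemann sum; gradients are estimated by forward finite differences.
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.zeros_like(x)
    eps = 1e-4
    for a in alphas:
        point = baseline + a * (x - baseline)
        for i in range(len(x)):
            bumped = point.copy()
            bumped[i] += eps
            grads[i] += (score_fn(bumped) - score_fn(point)) / eps
    grads /= steps
    return (x - baseline) * grads  # per-feature attribution

def spin_style_attribution(score_fn, x, empirical_profiles, n_samples=5, rng=None):
    # Sample candidate baselines from observed user profiles and keep the
    # attribution whose top-ranked features, when removed, change the score
    # most (a simple counterfactual-fidelity proxy, not the paper's metric).
    if rng is None:
        rng = np.random.default_rng(0)
    idx = rng.choice(len(empirical_profiles), size=n_samples, replace=False)
    best_attr, best_drop = None, -np.inf
    for baseline in empirical_profiles[idx]:
        attr = integrated_gradients(score_fn, x, baseline)
        masked = x.copy()
        masked[np.argsort(attr)[-3:]] = 0.0  # remove the 3 most influential items
        drop = score_fn(x) - score_fn(masked)
        if drop > best_drop:
            best_attr, best_drop = attr, drop
    return best_attr
```

On a linear scorer this reduces to the exact integrated-gradients attribution `(x - baseline) * w`, which makes the behavior easy to sanity-check.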
Why does it matter?
This work is important because trustworthy explanations are crucial for building user confidence in recommendation systems. If users understand *why* something is recommended, they're more likely to trust and use the system. SPINRec sets a new standard for how accurately we can explain recommendations, and the tools released with the paper allow other researchers to build on this work and improve explainability in the field.
Abstract
Explanation fidelity, which measures how accurately an explanation reflects a model's true reasoning, remains critically underexplored in recommender systems. We introduce SPINRec (Stochastic Path Integration for Neural Recommender Explanations), a model-agnostic approach that adapts path-integration techniques to the sparse and implicit nature of recommendation data. To overcome the limitations of prior methods, SPINRec employs stochastic baseline sampling: instead of integrating from a fixed or unrealistic baseline, it samples multiple plausible user profiles from the empirical data distribution and selects the most faithful attribution path. This design captures the influence of both observed and unobserved interactions, yielding more stable and personalized explanations. We conduct the most comprehensive fidelity evaluation to date across three models (MF, VAE, NCF), three datasets (ML1M, Yahoo! Music, Pinterest), and a suite of counterfactual metrics, including AUC-based perturbation curves and fixed-length diagnostics. SPINRec consistently outperforms all baselines, establishing a new benchmark for faithful explainability in recommendation. Code and evaluation tools are publicly available at https://github.com/DeltaLabTLV/SPINRec.