Stairway to Fairness: Connecting Group and Individual Fairness
Theresia Veronika Rampisela, Maria Maistro, Tuukka Ruotsalo, Falk Scholer, Christina Lioma
2025-09-03
Summary
This paper investigates the connection between two different ideas of fairness in recommendation systems: making sure different groups of people are treated equally, and making sure each individual person receives fair recommendations. It finds that improving fairness for groups doesn't automatically mean improving fairness for individuals, and can sometimes even make things worse for individuals.
What's the problem?
Recommendation systems, like those used by Netflix or Amazon, can unintentionally be unfair. There are two main ways to think about fairness: group fairness, which asks whether different groups of users (for example, men and women) receive recommendations of similar quality, and individual fairness, which asks whether each individual user receives fair recommendations. However, how these two types of fairness relate to each other has not been studied, because researchers have used different ways of measuring and achieving each one, making a direct comparison impossible. This means we don't know whether making recommendations fairer for groups also makes them fairer for individuals, or whether there is a trade-off.
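As a rough illustration (a minimal sketch using made-up proxy measures and made-up numbers, not the specific evaluation measures studied in the paper), the same set of recommendations can be scored from both perspectives:

```python
import numpy as np

def group_gap(quality, groups):
    """Toy group-fairness proxy: absolute difference in mean recommendation
    quality between two user groups (0 = perfectly fair to the groups)."""
    labels = np.unique(groups)
    return abs(quality[groups == labels[0]].mean() - quality[groups == labels[1]].mean())

def individual_spread(quality):
    """Toy individual-fairness proxy: standard deviation of per-user
    recommendation quality (0 = every individual is served equally well)."""
    return quality.std()

# Hypothetical per-user recommendation quality scores (e.g. NDCG@10) and group labels.
quality = np.array([0.80, 0.75, 0.78, 0.30, 0.35, 0.32])
groups = np.array(["A", "A", "A", "B", "B", "B"])

print(f"group gap: {group_gap(quality, groups):.2f}")          # ~0.45 -> unfair to group B
print(f"individual spread: {individual_spread(quality):.2f}")  # ~0.23 -> uneven across users
```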
What's the solution?
The researchers carried out a thorough comparison of evaluation measures for both group and individual fairness. They evaluated eight recommendation runs across three datasets to see how the different fairness measures relate to one another, specifically checking whether recommendations that score well on group fairness measures also score well on individual fairness measures. Their analysis revealed a surprising result: recommendations that were very fair to groups of people were often quite unfair to individuals.
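To see how the two views can disagree, here is a hypothetical, made-up example (not a result from the paper's datasets) in which a group-level measure reports perfect fairness while an individual-level measure flags large inequality:

```python
import numpy as np

# Made-up per-user quality scores for two groups: the group averages are identical,
# so a group-level measure reports perfect fairness, yet half the individuals
# in each group receive very poor recommendations.
quality = np.array([0.95, 0.05, 0.95, 0.05,   # group A users
                    0.95, 0.05, 0.95, 0.05])  # group B users
groups = np.array(["A"] * 4 + ["B"] * 4)

gap = abs(quality[groups == "A"].mean() - quality[groups == "B"].mean())
spread = quality.std()

print(f"group gap: {gap:.2f}")             # 0.00 -> looks perfectly group-fair
print(f"individual spread: {spread:.2f}")  # 0.45 -> very uneven across individuals
```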
Why does it matter?
This research is important because it shows that simply focusing on group fairness in recommendation systems isn't enough. If you want a truly fair system, you need to consider both group and individual fairness, and understand that improving one doesn't guarantee improving the other. This is useful information for people who build and manage recommendation systems, helping them design systems that are fairer for everyone.
Abstract
Fairness in recommender systems (RSs) is commonly categorised into group fairness and individual fairness. However, there is no established scientific understanding of the relationship between the two fairness types, as prior work on both types has used different evaluation measures or evaluation objectives for each fairness type, thereby not allowing for a proper comparison of the two. As a result, it is currently not known how increasing one type of fairness may affect the other. To fill this gap, we study the relationship of group and individual fairness through a comprehensive comparison of evaluation measures that can be used for both fairness types. Our experiments with 8 runs across 3 datasets show that recommendations that are highly fair for groups can be very unfair for individuals. Our finding is novel and useful for RS practitioners aiming to improve the fairness of their systems. Our code is available at: https://github.com/theresiavr/stairway-to-fairness.