
As AI systems increasingly influence big life decisions — like who gets a loan, a job offer, or medical treatment — a key question keeps popping up: are these models actually fair?
When we talk about “fairness” in AI, we can look at it in two ways. On the one hand, there’s individual fairness, which asks: “Are people who are alike being treated alike by the system?” On the other hand, there’s group fairness, which checks how performance changes across subpopulations defined by gender, race, age, sexual orientation, and so on.
In our project, we consider the popular group-fairness metric called worst-group accuracy. As the name suggests, it measures the model’s accuracy on the group with the lowest accuracy, to make sure no demographic group gets left behind.
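To make the metric concrete, here is a minimal sketch of how worst-group accuracy can be computed from labels, predictions, and group memberships. The arrays and group names below are purely illustrative and are not code from the SPECTRE project:

```python
import numpy as np

def worst_group_accuracy(y_true, y_pred, groups):
    """Return the lowest per-group accuracy.

    y_true, y_pred: arrays of true and predicted labels.
    groups: array of group identifiers (e.g. race), one per sample.
    """
    accuracies = []
    for g in np.unique(groups):
        mask = groups == g
        accuracies.append(np.mean(y_true[mask] == y_pred[mask]))
    return min(accuracies)

# Toy example: overall accuracy is 0.75, but the worst group only reaches 0.5.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(worst_group_accuracy(y_true, y_pred, groups))  # 0.5
```

A model can therefore look good on average while still performing poorly on one group; worst-group accuracy surfaces exactly that gap.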
Building on these notions of fairness, a variety of methods have been developed to make AI systems fairer. However, many fairness-enhancing methods rely on a rather unrealistic assumption: that we always know each person’s group identity (such as race, gender, or other sensitive attributes). In real life, that information is often missing, and sometimes we shouldn’t even be collecting it.
That’s where SPECTRE (SPECTral uncertainty set for Robust Estimation) comes in — a new method designed to boost fairness without needing access to demographic data.
SPECTRE: A Fresh Take on Fairness
SPECTRE is based on a concept called robust optimization, which focuses on improving performance in worst-case scenarios. But here’s the twist: instead of obsessing over extreme outliers (which often makes models overly pessimistic and less useful), SPECTRE takes an alternative route. It uses a technique involving spectral analysis and Fourier features to tweak just what’s necessary. The result? A model that doesn’t need group information but still manages to be fair and effective.
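To give a flavour of the ingredients involved (this is not the published SPECTRE algorithm), the sketch below combines random Fourier features, a spectral approximation of a kernel, with a robust training loop that averages the loss over the hardest fraction of instances rather than over all of them. The toy data, the CVaR-style reweighting, and all hyperparameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two features, binary label (illustrative only).
n = 1000
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(float)

# Random Fourier features: a spectral approximation of an RBF kernel.
d_rff, gamma = 100, 1.0
W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], d_rff))
b = rng.uniform(0, 2 * np.pi, size=d_rff)
Phi = np.sqrt(2.0 / d_rff) * np.cos(X @ W + b)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Robust training: at each step, average the logistic loss over the
# hardest alpha-fraction of samples instead of over the full dataset.
alpha, lr, epochs = 0.2, 0.5, 300
theta = np.zeros(d_rff)
for _ in range(epochs):
    p = sigmoid(Phi @ theta)
    losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    k = int(alpha * n)
    hard = np.argsort(losses)[-k:]          # worst-off instances this step
    grad = Phi[hard].T @ (p[hard] - y[hard]) / k
    theta -= lr * grad

print(f"overall accuracy: {np.mean((sigmoid(Phi @ theta) > 0.5) == y):.3f}")
```

The design choice this sketch is meant to illustrate is that the “worst case” is defined over instances in feature space, not over demographic groups, so no sensitive attribute is ever touched during training.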
But… Does It Actually Work?
Yes, it does. SPECTRE was tested on real-world data from the American Community Survey (ACS), covering 20 U.S. states — and it delivered. The following image shows the probability densities of the overall accuracy and the worst-group accuracy (with sensitive groups defined by race) across those states. These results show that SPECTRE provides the safest fairness guarantees across the different states compared to state-of-the-art approaches.

SPECTRE did not only work well for sensitive groups defined by race; it also provided safe fairness guarantees for intersectional groups. The following figure shows the performance of SPECTRE when the sensitive groups are defined by the intersection of race and gender, resulting in 18 different groups.

Intersectional fairness considers how AI systems perform across groups defined by combinations of multiple attributes, since individuals at these intersections may experience compounded disadvantages due to the overlapping effects of different systems of oppression. Explicitly addressing intersectionality can be cumbersome, because we need to consider all possible combinations of protected characteristics, which increases the number of groups and decreases the number of samples in each of them. SPECTRE overcomes that limitation because it does not require defining the groups explicitly; it is oblivious to the sensitive information. But because it optimizes the classification rule with respect to the most vulnerable instances, it indirectly improves the performance of the classifier on these intersectional groups.
In other words, since we do not focus on a particular sensitive attribute and define privileged/unprivileged groups based on it, SPECTRE boosts its performance on the most vulnerable instances across different dimensions of the sensitive information simultaneously.
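A small, hypothetical example makes the combinatorial issue concrete: crossing just two attributes already multiplies the number of groups and shrinks the sample count available in each one. The data below are made up purely for illustration:

```python
import pandas as pd

# Hypothetical demographic columns; values and counts are made up.
df = pd.DataFrame({
    "race":   ["White", "Black", "Asian"] * 600,
    "gender": ["F", "M"] * 900,
})

print(df["race"].nunique(), "race groups,",
      df["gender"].nunique(), "gender groups")

# Crossing the two attributes multiplies the number of groups...
intersectional = df.groupby(["race", "gender"]).size()
print(len(intersectional), "intersectional groups")

# ...and each group now holds fewer samples than either attribute alone.
print(intersectional.min(), "samples in the smallest intersectional group")
```

With more attributes or finer categories (such as the 18 race-by-gender groups in the ACS experiments), the smallest groups quickly become too sparse to estimate reliably, which is exactly the situation a group-oblivious method sidesteps.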
To sum up, compared to state-of-the-art methods (even ones that do use demographic information), SPECTRE offered better worst-group fairness guarantees, and it did so without sacrificing overall accuracy, which is no small feat.
A Step Toward More Ethical AI
In a world where protecting privacy is just as important as ensuring fairness, tools like SPECTRE offer a hopeful way forward. They show us that it is possible to design responsible and equitable algorithms without collecting protected characteristic information, which constitutes a “special category” of personal data under Article 9 of the GDPR. Processing such data requires a specific legal basis and additional safeguards.
Because in the end, artificial intelligence shouldn’t just be smart — it should also be fair.

Written by: Ainhize Barrainkua and Novi Quadrianto, BCAM.