
TANGO Blog Post
Machine learning models are increasingly used in high-stakes decision-making, such as loan approvals, job hiring, and judicial rulings. However, these models often make opaque decisions, leaving users without explanations or means to overturn unfavourable outcomes (e.g., having their loan application rejected). Algorithmic Recourse (AR) [1] aims to address this issue by identifying a sequence of actions a user can take to overturn a negative decision. Existing AR methods assume that the cost of actions is the same for all users, which leads to unfair or impractical recommendations. For example, an AR method might suggest reducing expenses to qualify for a loan, ignoring that a user may face high medical costs due to illness. There is therefore a critical need for personalized algorithmic recourse that accounts for individual preferences and constraints.
Personalized Algorithmic Recourse
To address this problem, we introduce PEAR (Preference Elicitation for Algorithmic Recourse) [2], a human-in-the-loop approach that tailors AR to individual users. PEAR is unique in that it does not assume pre-defined costs for actions but instead learns them from user feedback via Bayesian Preference Elicitation, a technique that refines estimates of action costs by presenting the user with a small set of possible interventions and asking them to choose the most suitable one. Over multiple iterations, PEAR updates its model of the user’s preferences and generates increasingly better recourse plans. The system also integrates Reinforcement Learning and Monte Carlo Tree Search, which allow it to quickly identify promising sequences of actions while minimizing unnecessary effort for the user. This process ensures that the final recommendation is not only effective in overturning the original decision but also practical for the user to implement.
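To make the elicitation loop concrete, here is a minimal, self-contained sketch. It is not PEAR’s actual implementation: we represent candidate cost-weight vectors as particles, show the (simulated) user pairs of interventions, and update the posterior with a logistic (Bradley-Terry) choice model. All names, shapes, and numbers are illustrative assumptions.

```python
import numpy as np

def choice_likelihood(particles, chosen, rejected, noise=1.0):
    """P(user picks `chosen` over `rejected`) under each candidate
    cost-weight vector, via a logistic (Bradley-Terry) choice model."""
    # Higher weight on a feature means changing it is more costly for the user.
    cost_diff = particles @ (rejected - chosen)  # cost(rejected) - cost(chosen)
    return 1.0 / (1.0 + np.exp(-noise * cost_diff))

def update_posterior(posterior, particles, chosen, rejected):
    """Bayes rule: reweight candidate cost vectors by how well they
    explain the observed choice, then renormalize."""
    weighted = posterior * choice_likelihood(particles, chosen, rejected)
    return weighted / weighted.sum()

rng = np.random.default_rng(0)
n_particles, n_features = 500, 4
particles = rng.random((n_particles, n_features))    # candidate cost weights
posterior = np.full(n_particles, 1.0 / n_particles)  # uniform prior
true_w = np.array([0.9, 0.1, 0.5, 0.2])              # simulated user's costs

for _ in range(30):
    # Query: two candidate interventions (effort required on each feature).
    a, b = rng.random((2, n_features))
    # The simulated user picks whichever is cheaper under their true costs.
    chosen, rejected = (a, b) if true_w @ a < true_w @ b else (b, a)
    posterior = update_posterior(posterior, particles, chosen, rejected)

estimate = posterior @ particles  # posterior-mean cost weights
print("estimated cost weights:", np.round(estimate, 2))
```

After a handful of queries the posterior mean drifts toward the simulated user’s true cost weights; PEAR’s contribution is, among other things, choosing *which* queries to ask so that this happens with as few questions as possible.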
One of PEAR’s key contributions is its ability to produce personalized recourse strategies that require significantly less effort than traditional approaches. By dynamically learning individual preferences, PEAR avoids suggesting costly or unrealistic interventions. Additionally, the use of the Expected Utility of Selection (EUS) criterion ensures that user feedback is optimally leveraged to refine cost estimates, making the system both efficient and robust to noisy responses. Our research also highlights the importance of considering dependencies between actions, recognizing that performing one intervention may change the cost or feasibility of another. For example, obtaining a higher degree can reduce the effort required to switch to a higher-paying job. PEAR accounts for these relationships when generating recommendations, making it more accurate and user-friendly than existing methods.
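The effect of action dependencies on plan cost can be shown with a toy sketch. The actions, base costs, and discount factors below are invented for illustration and are not taken from PEAR:

```python
# Hypothetical base cost (effort) of each action, in arbitrary units.
BASE_COST = {"get_degree": 8.0, "switch_job": 6.0, "reduce_debt": 3.0}

# DISCOUNT[(a, b)]: multiplier applied to b's cost if a was done earlier,
# e.g. a degree halves the effort of switching to a higher-paying job.
DISCOUNT = {("get_degree", "switch_job"): 0.5}

def plan_cost(actions):
    """Total cost of an ordered plan, honoring pairwise discounts."""
    done, total = set(), 0.0
    for action in actions:
        cost = BASE_COST[action]
        for prior in done:
            cost *= DISCOUNT.get((prior, action), 1.0)
        total += cost
        done.add(action)
    return total

print(plan_cost(["switch_job", "get_degree"]))  # 14.0: no discount applies
print(plan_cost(["get_degree", "switch_job"]))  # 11.0: degree halves job-switch cost
```

Note that the same two actions cost less in one order than the other; this is why a recourse method that scores actions independently can recommend needlessly expensive plans, while one that reasons over sequences can exploit such dependencies.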
Empirical validation of PEAR was conducted using real-world datasets related to financial decisions, such as loan approvals and income predictions. The results demonstrate that PEAR consistently produces lower-cost interventions than non-personalized competitors, reducing the financial burden on users by up to 50%. The system was also tested in different scenarios, including users who require minimal changes to achieve recourse and those for whom overturning the decision is significantly more complex. Even in the latter case, PEAR was able to adapt to individual constraints and suggest effective interventions with high success rates.
Towards Realistic Applications of Recourse
The real-world implications of PEAR are substantial. In the financial sector, it can help individuals understand why they were denied a loan and provide realistic, tailored recommendations to improve their eligibility. In employment and hiring processes, PEAR can guide job applicants on practical steps to enhance their qualifications based on their specific constraints. In healthcare and insurance, it can be used to recommend lifestyle or financial adjustments that allow individuals to qualify for better policies. Furthermore, in judicial applications, the approach can support fairer decision-making by taking into account individual circumstances in risk assessments.
Our research underscores the necessity of making algorithmic recourse more user-centred and adaptable. Many current methods treat users as passive recipients of recommendations, whereas PEAR actively involves them in the process, ensuring that the final intervention aligns with their real-world constraints. This human-in-the-loop approach not only enhances fairness but also increases the likelihood that users can successfully implement the suggested changes.
PEAR represents a major step forward in algorithmic recourse by making it personalized, interactive, and computationally efficient. It allows users to take control of machine-generated decisions with realistic, cost-effective interventions. Lastly, the study highlights the need for fairer, user-centred AI systems, paving the way for broader adoption in real-world applications.
[1] Wachter, Sandra, Brent Mittelstadt, and Chris Russell. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” Harvard Journal of Law & Technology 31 (2017): 841.
[2] De Toni, Giovanni, Paolo Viappiani, Stefano Teso, Bruno Lepri, and Andrea Passerini. “Personalized Algorithmic Recourse with Preference Elicitation.” Transactions on Machine Learning Research (2024).