Trustworthy AI starts with a simple question: would you trust an algorithm to support a surgeon during an operation, advise a new mother after childbirth, help decide whether a family receives a loan, or guide public investment decisions?
Across Europe, artificial intelligence is becoming part of decisions that affect people’s health, finances and everyday lives. Yet many AI systems face a common challenge: people do not trust what they cannot understand.
This is where TANGO takes a different approach.
Rather than replacing human judgement, TANGO develops AI systems that work with people, not instead of them. At the heart of the project is the idea of “human-in-the-loop” AI: systems that support decision-making while keeping humans firmly in control.
But human oversight alone is not enough. To be truly trustworthy, AI also needs to explain itself in ways that make sense to different users.
A surgeon, for example, may want to know which medical indicators led the system to recommend one course of action rather than another. A woman receiving postpartum support may need a simpler explanation, focused on why a certain recommendation is relevant to her situation. A loan officer may need to understand which factors influenced a credit assessment, while a policymaker may want to compare the likely impact of different funding scenarios.
TANGO addresses this challenge through cognition-aware explainable AI. In practice, this means that the system adapts its explanations to the person using it. The same AI model can provide different levels of detail, different visualisations and different forms of reasoning, depending on whether the user is a clinician, citizen, financial expert or public authority.
This approach is being tested in four real-world settings.
In healthcare, TANGO supports clinical decision-making during surgery and provides guidance in pregnancy and postpartum care. In finance, it helps make credit assessments more transparent and easier to understand. In the public sector, it supports evidence-based policymaking and the allocation of funding.
These examples may seem very different, but they all point to the same conclusion: people are more likely to trust AI when they can understand how it works, question its recommendations and remain part of the final decision.
This is also increasingly important in the European policy context. The EU AI Act places strong emphasis on transparency, accountability and human oversight, particularly in high-risk sectors such as health, finance and public administration. TANGO contributes directly to these priorities by designing AI systems that are not only technically effective, but also aligned with European values.
The future of AI in Europe will not be about machines making decisions alone. It will be about creating systems that help people make better decisions, with greater confidence, responsibility and trust.
Written by Maria Carolan, Carr Communications