
TANGO BLOGPOST
The rise of everyday AI
Artificial intelligence (AI) is seamlessly integrated into our daily lives—from the algorithms that find the best routes on our GPS to the recommendations we get on Netflix. With the emergence of ChatGPT and other large language models (LLMs), the presence of AI has become even more noticeable. However, this increasing prominence of AI raises important issues regarding privacy and ethics, making the TANGO project highly relevant.
From a human-centered perspective, it’s crucial to address several questions as we move towards a future where humans and AI collaborate more closely. For example, how effective do we believe AI is in performing specific tasks? Can we trust the explanations provided by AI? One approach to answering these questions is to compare our expectations and behaviors when interacting with AI to those we have when dealing with humans.
Fast thinking, slow thinking
Let’s talk about human thinking. One popular framework views human reasoning as involving two thinking systems: intuition, also known as System 1 or Type 1 thinking, and deliberation, also known as System 2 or Type 2 thinking. It is commonly understood that intuition is fast and effortless while deliberation is slow and effortful.
We have been investigating how people judge other humans who rely on either intuition or deliberation to solve reasoning problems.
Across several studies, our findings consistently point to a preference for deliberation. Even when the efficiency of intuition is emphasized, or when intuition leads to the same level of performance as deliberation, people still seem to value taking time to think.
This suggests that in order to build a trustworthy AI system, a human-centered design should make salient the thoughtfulness underlying its decisions. However, what works for humans may not work for machines.
Fast for me, slow for thee
We often assume that thinking takes time. When it comes to humans, we prefer people to engage in the right amount of thinking: not lingering on an easy problem (e.g., “How much is 2 + 2?”) and, conversely, engaging in deep, slow thinking when dealing with a hard one (e.g., “Find the prime factors of 1887”).
However, what counts as fast or slow for humans might not be perceived the same way for machines. For instance, if a gifted mathematics student took 30 seconds to factorize 1887 (3 × 17 × 37, by the way) while a typical high-schooler took 5 minutes, we would consider the former fast. Yet we expect software to answer instantly: we would consider 30 seconds slow, and might even take it as a sign of faulty software, which would in turn decrease our trust in its answer.
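To make the timing intuition concrete, here is a minimal sketch (purely illustrative, not part of our studies): simple trial division in Python recovers the factors of 1887 essentially instantaneously on ordinary hardware, which is exactly why a 30-second answer from software would feel suspicious.

def prime_factors(n):
    """Factorize n by trial division; more than fast enough for numbers like 1887."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(1887))  # [3, 17, 37], returned effectively instantly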
That’s why our future investigations will focus on whether we react differently to AI systems than to humans, as our preference for slow thinking may not translate directly to machines. Still, that’s not the only way knowledge about human cognition can inform the design of AI systems.
The power of explanations
Even as AI becomes increasingly integrated into decision-making across domains, from medical diagnosis and financial markets to entertainment recommendations, the inner workings of AI systems often remain opaque. Ensuring the explainability of AI is essential for fostering human understanding of, and trust in, its decisions or advice.
To improve the transparency and explainability of AI, it is crucial to first understand the mechanisms underlying how humans produce and evaluate explanations. Interestingly, intuition and deliberation appear to play distinct roles in these processes. Our latest findings suggest that people are better at justifying deliberate, well-thought-out decisions than intuitive ones, which could relate to our preference for deliberative decision-makers. Furthermore, while we tend to favor arguments that align with our pre-existing opinions, engaging in deliberative thinking enables us to more objectively assess the true quality of supporting explanations.
Incorporating this knowledge into the design of AI systems can enhance the way they generate explanations, ensuring these systems are better understood and trusted by humans.
Conclusion
By bridging insights from human reasoning with AI design, we can create more transparent, effective, and user-centered technologies.
Written by: Nicolas Beauvais, Wim De Neys, Matthieu Raoelison, Université Paris Cité (UPC)