Why ‘more transparency’ is a trap, and why the future of explainable AI is adaptive.
Imagine you’re a teacher explaining a math problem to two students. One is meticulous, double-checking every step. The other is a skimmer who gets impatient with long passages and just wants the key example.
If you give them the exact same explanation, you’ll lose one of them. Guaranteed.
That’s the paradox of explainable AI (XAI) today. We build these incredibly sophisticated systems, yet we design their explanations as if every single user thinks and learns in the exact same way. But decades of learning science tell us the opposite is true. People differ in their skills, their personalities, and even their tolerance for detail.
For AI tutors and educational tools to be genuinely helpful, their explanations can’t be one-size-fits-all. They need to adapt.
For years, the goal in XAI has been “more transparency.” The thinking was simple: show more of the model’s inner workings — the feature weights, the attention scores — and users will understand.
But a landmark 2021 study involving nearly 3,800 people found that this isn’t true. Making a model more “transparent” didn’t actually help people make better decisions with it. In fact, the flood of extra detail sometimes distracted them, making them worse at spotting the AI’s mistakes.
Cognitive psychology has a term for this: the illusion of explanatory depth. It’s the feeling that you understand something complex (like a bicycle), right up until someone asks you to explain precisely how it works. A superficial, “transparent” explanation can actually reinforce this illusion, giving us a false sense of confidence.
Research from the University of British Columbia’s Cristina Conati has shown this in practice. In one study, her team found that while explanations in an AI tutor increased trust, their effect on learning was messy. Students with lower conscientiousness, for instance, benefited more from the explanations than their meticulous peers. In a 2024 follow-up, they found that learners with higher reading proficiency could handle verbose explanations, while others learned better from concise ones.
The message is clear: more transparency isn’t the answer. Right-sized transparency is.
So, what if we treated explanations like a dimmer switch instead of an on/off button?
An adaptive AI could adjust the “density” of its explanations based on who is asking and what they need in the moment.
A system could learn to adjust this density automatically based on signals of cognitive load, like a user’s error rate, the time they spend on a task, or even a simple feedback slider.
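To make that concrete, here is a minimal sketch of what a density dial driven by those signals might look like. The thresholds, the three density levels, and the names `error_rate`, `time_on_task`, and `feedback` are all assumptions invented for the example, not a tested design; a real system would learn these boundaries rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class LoadSignals:
    """Hypothetical cognitive-load signals the system might observe."""
    error_rate: float    # fraction of recent answers that were wrong (0..1)
    time_on_task: float  # seconds spent on the current step
    feedback: int        # optional slider: -1 = "too much detail", 0 = fine, +1 = "tell me more"

def choose_density(signals: LoadSignals) -> str:
    """Pick an explanation density ("concise", "standard", "detailed")
    from simple, assumed thresholds."""
    # Explicit user feedback wins over inferred signals.
    if signals.feedback < 0:
        return "concise"
    if signals.feedback > 0:
        return "detailed"
    # A high error rate or long dwell time suggests overload: shorten.
    if signals.error_rate > 0.5 or signals.time_on_task > 120:
        return "concise"
    # Fast, accurate work suggests room for more depth.
    if signals.error_rate < 0.1 and signals.time_on_task < 30:
        return "detailed"
    return "standard"

# Example: a struggling learner gets the short version.
print(choose_density(LoadSignals(error_rate=0.6, time_on_task=150, feedback=0)))  # -> "concise"
```

The point isn’t the particular thresholds; it’s that the dimmer switch can be driven by signals the system already sees, instead of a setting the user has to manage.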
This is what great teachers do instinctively. They sense when a student is overwhelmed and simplify; they know when to challenge another with more depth. An adaptive explanation system could finally do the same.
Even with the right amount of detail, the design of an explanation matters. A chart or a diagram doesn’t just display information; it nudges you to think in a certain way. A bar chart invites you to compare rankings. A flow chart invites you to follow a sequence. This idea of cognitive affordances is crucial for design.
We can apply the same idea to XAI design, choosing explanation formats that encourage specific kinds of thinking.
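As a small sketch of that choice, a system could map the kind of thinking it wants to invite onto a format that affords it. The mapping below is illustrative only: the first two entries come from the bar-chart and flow-chart examples above, and the third is an assumption added for the example.

```python
# Illustrative mapping from the kind of thinking we want to invite
# to an explanation format that affords it (assumed, not a validated taxonomy).
AFFORDANCES = {
    "compare":  "bar chart of feature contributions",   # invites comparing rankings
    "sequence": "flow chart of the model's steps",      # invites following a procedure
    "verify":   "worked example with every step shown", # invites checking the reasoning
}

def pick_format(target_thinking: str) -> str:
    """Return an explanation format that nudges the learner toward
    the desired kind of thinking; fall back to plain text."""
    return AFFORDANCES.get(target_thinking, "short text explanation")

print(pick_format("compare"))   # -> "bar chart of feature contributions"
print(pick_format("sequence"))  # -> "flow chart of the model's steps"
```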
Furthermore, explanations should push learners from being passive observers to active participants. The ICAP framework on student engagement shows that learning skyrockets when students move from simply Passive reading to being Active (highlighting), Constructive (summarizing in their own words), or Interactive (debating the point). An explanation shouldn’t be a dead end; it should be a prompt. Instead of a statement, it could be a question: “Can you justify why this step is correct?”
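One way to sketch this is to make the explanation object carry its own follow-up prompt, so the system always ends on a question rather than a statement. The class, field names, and the sample statement below are hypothetical; only the closing question comes from the idea above.

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    """A hypothetical explanation that ends in a prompt, not a dead end."""
    statement: str            # the declarative explanation
    constructive_prompt: str  # asks the learner to produce something themselves

    def render(self) -> str:
        # Show the explanation, then hand control back to the learner.
        return f"{self.statement}\n\nYour turn: {self.constructive_prompt}"

step = Explanation(
    statement="The model flagged step 3 because the units on the two sides no longer match.",  # made-up example content
    constructive_prompt="Can you justify why this step is correct, or fix it in your own words?",
)
print(step.render())
```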
A world where every person gets the same static explanation from an AI is like a classroom where every student is handed the same textbook page, regardless of their needs. We can, and must, do better.
By blending principles from human-AI interaction, educational psychology, and user-centered design, we can build systems that finally get this right.
The future of explainability isn’t just about creating systems that are transparent. It’s about creating systems that know how to explain themselves differently to every single one of us.