Revealing the journey is the true future of explainable AI.

Imagine a GPS that only gives you your final destination but none of the turns to get there. You’d end up lost, constantly second-guessing your route, or overshooting it entirely.

Many of today’s “explainable” AI systems work this way. They highlight an “important feature” or provide a confidence score, effectively skipping the crucial reasoning path that led to the conclusion. For a student trying to grasp a math solution or a doctor deciding on a treatment, that missing path is the difference between genuine understanding and blind faith.

This isn’t a new problem. Education research has been telling us for decades that knowledge doesn’t live in isolated facts — it lives in the connections between them. How we see those connections determines how well we learn.

We’ve Known This for Decades: Learning is a Journey

In 2006, Nesbit & Adesope published a large meta-analysis of research on concept mapping. Their conclusion was crystal clear: learners who built the maps themselves consistently outperformed those who only studied a pre-made one. The act of creating the path fosters a deeper understanding than just seeing the final picture.

More than a decade later, another meta-analysis confirmed the same pattern, finding that novices, in particular, benefit the most from this process. Building a map forces you to think about how concepts connect, emphasizing the routes over memorizing landmarks.

Cognitive psychologists call this the “representational effect.” The way we externalize information, whether as plain text or as a visual graph, fundamentally changes how we think. When learners see a problem as a graph, they start noticing constraints, dependencies, and pathways they would have missed in a simple paragraph. Humans simply need to see the path.

AI Is Finally Catching Up

AI research is beginning to embrace what educators have known all along: showing your work is essential.

Models like CogQA, designed for multi-hop question answering, started building explicit “cognitive graphs” to find answers. Rather than searching a sea of text, the AI constructed a map with entities as nodes and relationships as edges, then navigated that map to connect the dots across multiple documents. The results were twofold: accuracy improved markedly, and the reasoning process became visible as a clear path.
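To make the idea concrete, here is a minimal sketch in plain Python (not the actual CogQA implementation, and all entities and relations are invented for illustration): treat entities as nodes, mined relations as labeled edges, and answer a question by walking the graph, so that the walk itself is the explanation.

```python
from collections import deque

# Toy "cognitive graph": entities are nodes, relations mined from
# different documents are labeled edges. All of the data is invented.
edges = {
    "Marie Curie": [("born in", "Warsaw"), ("worked at", "University of Paris")],
    "Warsaw": [("capital of", "Poland")],
    "University of Paris": [("located in", "Paris")],
}

def explain_answer(start, goal):
    """Breadth-first walk from a question entity to a candidate answer.

    Returns the hop-by-hop path, which doubles as the explanation.
    """
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for relation, neighbor in edges.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, path + [f"--{relation}-->", neighbor]))
    return None

# "In which country was Marie Curie born?"
print(" ".join(explain_answer("Marie Curie", "Poland")))
# Marie Curie --born in--> Warsaw --capital of--> Poland
```

The toy data is beside the point; what matters is the return value. Instead of a bare answer, the caller gets every hop that led to it.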

This same shift is now happening in Graph Neural Networks (GNNs), a powerful type of AI for understanding networks. Instead of scoring individual nodes or features, newer explainers look for the connected subgraphs, and increasingly the explicit paths, that actually drive a prediction.

The message is convergent: the path is the explanation.

From Confusing Dashboards to Clear Paths

If you’ve ever used an educational dashboard, you’ve seen the charts: bar graphs of “progress” or pie charts of “skills mastered.” They tell you what happened but rarely why. They’re destinations without turns.

What would a path-based dashboard look like?

A Typical Dashboard Might Say:

“You scored 65% on derivatives.”

A Path-Based Dashboard Could Show:

A path card: “Your path was Limits → Derivatives → Chain Rule. It looks like the confusion started at the Limits concept, which is a key prerequisite for the next step.”
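A path card like that doesn’t need anything exotic under the hood. Here is a minimal sketch, assuming we already have the learner’s concept path and per-concept mastery scores (the concepts, scores, and threshold below are invented for illustration): walk the path in order and flag the earliest concept that falls below a mastery threshold.

```python
# Hypothetical prerequisite path and per-concept mastery scores (0.0-1.0),
# e.g. estimated from quiz results. All numbers are invented.
path = ["Limits", "Derivatives", "Chain Rule"]
mastery = {"Limits": 0.45, "Derivatives": 0.65, "Chain Rule": 0.50}

def path_card(path, mastery, threshold=0.6):
    """Render a path card: the route taken plus the earliest weak prerequisite."""
    route = " → ".join(path)
    for concept in path:  # the earliest weak link explains everything after it
        if mastery.get(concept, 0.0) < threshold:
            return (f"Your path was {route}. Confusion seems to start at {concept} "
                    f"({mastery[concept]:.0%} mastery), a key prerequisite for the next step.")
    return f"Your path was {route}. Every step looks solid."

print(path_card(path, mastery))
```

The logic is deliberately simple: the earliest weak prerequisite, not the final score, is what explains the trouble downstream.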

This approach ties directly into self-regulated learning (SRL). It’s a cycle: you plan your path, monitor your progress along it, and reflect on whether you need to change course. A well-designed AI tool evolves from a simple tracker into a navigator that helps you see the road ahead.

Why This Matters: The Map Shapes the Mind

How information is presented changes what we take away from it. A graph triggers conversations about relationships, while a list encourages memorization. In explainable AI, this means we can intentionally design interfaces that highlight prerequisite paths, compare alternative routes, or show the shortest path to mastery.
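As a sketch of what “shortest path to mastery” could mean in practice (a toy illustration with invented concepts and scores, not a production recommender), one simple option is to score each candidate route by the total mastery gap the learner still has to close, and recommend the smallest:

```python
# Compare alternative routes to the same goal concept by the total mastery
# gap left to close along each route. Concepts and scores are invented.
mastery = {"Limits": 0.45, "Derivatives": 0.65, "Chain Rule": 0.50,
           "Implicit Differentiation": 0.20, "Related Rates": 0.00}

routes = [
    ["Limits", "Derivatives", "Chain Rule", "Related Rates"],
    ["Limits", "Derivatives", "Implicit Differentiation", "Related Rates"],
]

def remaining_effort(route):
    """Sum of mastery gaps along a route; smaller means a shorter path to mastery."""
    return sum(1.0 - mastery.get(concept, 0.0) for concept in route)

best = min(routes, key=remaining_effort)
print("Recommended route:", " → ".join(best))
print("Estimated gap to close:", round(remaining_effort(best), 2))
```

Swap the scoring function and the pedagogy changes with it: weight by difficulty, by time, or by prerequisite strength, and the interface can show why one route beats another.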

This isn’t just a theoretical vision. At the University of Victoria, Dr. Shengyao Lu and his colleagues are at the forefront of graph explainability, developing methods that move the field away from simplistic scores and toward rich, path-based reasoning. Bringing these advances into our educational and professional tools could be transformative.

Imagine an AI tutor that gives you the next problem while also showing you the logical path connecting it to your past work and future goals. That’s the bridge between cutting-edge AI and timeless human psychology.

Because in learning, as in life, we trust the path, not just the destination. Explainability should show us the route.

References

  1. Ding, M., Zhou, C., Chen, Q., Yang, H., & Tang, J. (2019). Cognitive Graph for Multi-Hop Reading Comprehension at Scale. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL).
  2. Nesbit, J. C., & Adesope, O. O. (2006). Learning with concept and knowledge maps: A meta-analysis. Review of Educational Research, 76(3), 413–448.
  3. Schroeder, N. L., Nesbit, J. C., et al. (2018). A meta-analysis of the effects of teaching and learning with concept maps on student science achievement. Journal of Research in Science Teaching, 55(6), 846–871.
  4. Yuan, H., Yu, H., et al. (2021). On Explainability of Graph Neural Networks via Subgraph Explorations. Proceedings of the 38th International Conference on Machine Learning (ICML).
  5. Zhang, J., & Norman, D. A. (1994). Representations in distributed cognitive tasks. Cognitive Science, 18(1), 87–122.
  6. Zhang, Y., Liu, X., et al. (2023). Path-based Explanations for Graph Neural Networks using Path Generation. arXiv:2305.10714.