White paper: Explainable AI does not provide the explanations end-users are asking for
23rd June 2023
Explainable Artificial Intelligence (XAI) techniques are frequently requested by users of AI systems who want to understand complex models and their predictions, and to build trust in them. While useful for specific tasks during development, adopting XAI to enhance trust in deployed machine learning systems has unintended consequences.
In this paper, our senior data scientists discuss the limitations of XAI in deployment and conclude that transparency, combined with rigorous validation, is better suited to building trust in AI systems.
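To make concrete what kind of output XAI techniques produce, here is a minimal sketch of permutation feature importance, one common post-hoc explanation method. The toy model, data, and function names are illustrative assumptions, not the method discussed in the paper: the idea is simply that a feature's importance is measured by how much the model's error grows when that feature's values are shuffled.

```python
import random

def model(x):
    # Toy "black box": prediction depends strongly on feature 0,
    # weakly on feature 1, and not at all on feature 2.
    return 3.0 * x[0] + 0.5 * x[1]

def mse(data, targets):
    # Mean squared error of the model over a dataset.
    return sum((model(x) - y) ** 2 for x, y in zip(data, targets)) / len(data)

def permutation_importance(data, targets, n_features, seed=0):
    # Importance of feature i = increase in error after shuffling
    # column i, which breaks its association with the target.
    rng = random.Random(seed)
    base = mse(data, targets)
    scores = []
    for i in range(n_features):
        col = [x[i] for x in data]
        rng.shuffle(col)
        shuffled = [x[:i] + [v] + x[i + 1:] for x, v in zip(data, col)]
        scores.append(mse(shuffled, targets) - base)
    return scores

rng = random.Random(1)
data = [[rng.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(200)]
targets = [model(x) for x in data]
scores = permutation_importance(data, targets, 3)
```

An end-user reading such scores sees only that feature 0 matters most and feature 2 not at all; as the paper argues, this kind of explanation describes the model's behaviour but does not by itself establish that the model should be trusted.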