Concept explained: XAI

23rd June 2023
Author: Nelson Peace

As adoption of Artificial Intelligence (AI) becomes widespread and mainstream, demands to understand how it actually works are increasing.

With relatively low-stakes applications (face detection on phone cameras, sentence completion in search engines), users have generally accepted the results without questioning the ‘how’ and ‘why’. But machines are now used to aid human decision-making in fields such as cancer detection, legal proceedings, and military operations.

These higher-stakes scenarios bring an accompanying complication: understandably, users want to understand how machines reach the decisions they do so that they can trust the result. It is this complication that has given rise to what is now a large body of research known as Explainable AI (XAI).

What is it?


Explainable AI (XAI) is a group of methods that give deeper insight into what a complicated data science model is doing. Most data science models are ‘black box’ systems, meaning we can only observe the inputs and outputs – what happens in between is opaque. This opacity arises from the vast number of interactions between the model’s many features.
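
As a rough illustration, the minimal Python sketch below trains a model and then uses it purely as a black box: a row of inputs goes in, a probability comes out, and the many internal decision rules that produced it stay hidden. The scikit-learn library, the breast-cancer dataset and the gradient-boosted model are illustrative assumptions, not part of this article.

    # A minimal sketch of a 'black box' model (illustrative choices only).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Hundreds of trees and feature interactions are fitted here...
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # ...but at prediction time we only observe input in, probability out.
    print(model.predict_proba(X_test[:1]))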

Why use it?


The broad aim of XAI is to find out which inputs influence the model’s outputs. Not all inputs are equally useful in forming a prediction. By identifying the most impactful features, we increase our understanding of how the model makes its predictions.
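
As a hedged sketch of what this can look like in practice, the snippet below applies permutation importance, one common XAI technique, using scikit-learn’s implementation; the dataset and model are again illustrative assumptions. Each feature is shuffled in turn, and the drop in the model’s test score indicates how heavily the predictions rely on that feature.

    # Permutation importance: one common way to identify impactful features.
    # Dataset and model are illustrative assumptions, as above.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0
    )
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature and measure how much the test score drops:
    # a large drop means the model leans heavily on that feature.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)

    # Print the five most influential features.
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")

A ranking like this says which features matter, not why they matter – the kind of missing context discussed under the pitfalls below.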

This is useful for gaining insight into the model, avoiding unintentional bias, increasing trust in the model, and decreasing risk.

What are the pitfalls?


XAI can’t replace the data science model – it’s a tool used to develop the model, not a standalone technique. The relationships between features that XAI reveals can be tantalisingly simple, but used without context they can easily lead to the wrong conclusion.

XAI can be used to ‘game the system’: knowing which features are important means inputs could be tweaked to achieve a certain result, or to justify a biased model.

A truly reliable machine is one that doesn’t need to explain its predictions because they’ve been tested repeatedly by developers through rigorous validation. A useful analogy is prescription drugs. Our faith in taking them stems rarely, if ever, from knowing how they act on our bodies; rather, it stems from knowing that scientists, government bodies and test subjects have taken the necessary steps to ensure the result.

Data science models should be validated and tested by developers, either with or without the help of XAI, until users have no desire for an explanation. The time spent building a second machine to explain the first machine is better invested in exhaustive validation. We don’t demand an explanation from a biology textbook every time we visit the doctor. AI developers can and should validate away the need for XAI.
