AI Explainability: This is why your company needs to reduce bias and increase transparency

The rush to embrace artificial intelligence (AI) means increasing numbers of companies are relying on mysterious systems that provide no explanation for the crucial decisions they make. This is why we need Explainable AI.

By Roberto Stelling, Data Science Advisor, Office of the CTO Research Office, Dell Technologies

Evidence suggests enterprise commitment to artificial intelligence (AI) will continue unabated. Global spending on AI is expected to more than double over four years, growing from $50.1 billion in 2020 to over $110 billion in 2024, according to market intelligence provider IDC.

Companies invest in AI systems because they generate quick and accurate answers to challenging questions, whether that's assessing consumer credit card applications or plotting routes for autonomous vehicles. Yet as you rush to embrace the technology, you must remember that the decisions AI systems make affect human beings.

As multi-layered deep neural networks (DNNs) have grown in complexity, they have become “black boxes” (that is, data-fed systems for automated decision-making whose operations are neither visible nor understandable to users). While we might understand the inputs and outputs of these AI systems, their internal workings are a mystery, warns IFTF’s “Future of Connected Living” report for Dell Technologies.

This lack of transparency is undermining trust in AI systems, and before you bet on AI, you need to trust it. That's why 52 percent of IT decision-makers surveyed by Dell Technologies said they would take steps to improve data traceability and expose bias in order to promote trust in AI algorithms and AI decision-making. Forty-nine percent are seeking to build in more fail-safes, and 44 percent would advocate for sensible regulation.

These are necessary measures. If AI systems can’t explain how they’ve reached their answers, how can consumers trust AI to make increasingly critical choices? Business leaders will want their sunk investments in AI to pay dividends—and that might mean taking a step back to consider how complex systems make potentially life-changing decisions.

Explainable AI, in which technologists attempt to make the black-box workings of machine learning systems intelligible, can help rebuild user confidence in AI. Such explainability is essential to the effective evolution of DNNs and their deployment by your business.

Explainability Is Important

Explainable AI creates a narrative between the input data and the AI outcome. While black box AI makes it difficult to say how inputs influence outputs, explainable AI makes it possible to understand how outcomes are produced.
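
To make that idea concrete, here is a minimal sketch of one common way to connect inputs to outputs: permutation importance, which measures how much a model's accuracy drops when a single input is scrambled. It assumes scikit-learn and NumPy are available; the three feature names and the toy approval rule are purely illustrative and not drawn from any real system.

```python
# A minimal sketch of connecting inputs to outputs via feature attribution.
# The "income", "debt_ratio" and "late_payments" features and the synthetic
# data are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                             # columns: income, debt_ratio, late_payments
y = (X[:, 0] - X[:, 1] - 0.5 * X[:, 2] > 0).astype(int)   # toy approval rule

model = LogisticRegression().fit(X, y)

# Permutation importance asks: how much does accuracy drop when one input is shuffled?
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in zip(["income", "debt_ratio", "late_payments"], result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```

The ranking this produces is the beginning of a narrative: it tells us which inputs the model actually leans on, rather than leaving the relationship between inputs and outputs opaque.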

First, explainability helps satisfy accountability and governance requirements. The General Data Protection Regulation (GDPR) provides individuals with a “right to an explanation” for decisions based solely on automated processing. Legislation like this is pushing explainability forward by enforcing the need for explainable outcomes: at some stage, an organization will have to explain how and why its systems make decisions.

Second, explainability helps to build trust. Think, for example, of automated cars: trust in the AI systems that power them is a social need. Passengers put their lives in the hands of an automated vehicle that is also a potential liability to other drivers and pedestrians. Without trust, automated cars will remain stalled. Explainability, through information such as how the car works and why it is making its decisions, will help society to trust driverless cars.

Finally, explainability helps ensure comparability. Imagine two models that produce similar results: one offers a degree of explainability for its processes, while the other works as an accurate albeit unreadable black box. For data scientists, the system that can explain its results is preferable to the one that cannot: explainability makes it easier to compare results and gives users confidence in the system's outputs.

By dealing with concerns around accountability, trust, and comparability, explainable AI can help to reduce fears of bias and a lack of transparency. But we remain a long way from the goal of explainable AI. Right now, there is an explanation gap between the mathematical functions that power models and the outputs that systems produce. We will only fill this gap by building explainable AI systems.

Training and Fixing Systems

Explainability is also an important tool to help ensure that we properly identify, correct and improve AI-based systems in mission-critical scenarios.

Neither humans nor AI systems are infallible, but they both err in different ways with different consequences. While human errors are usually incremental in nature and build over time, AI errors can be disruptive and unexpected.

Let’s take the example of image classification. It is possible to add noise to a picture of a panda, such as altering the background, in a way that leads an AI system to classify it as a llama. A human would never make a similar mistake.

Similarly, a realistic picture of a car or scenery on the back of a truck could confuse an autonomous vehicle’s image-recognition system. Researchers have also found it is easy to confuse vehicle systems by altering street signs.

Such errors can have serious consequences. If a system hasn’t been trained with enough images, an overturned truck could be “invisible” to an automated car and might not be recognized as a danger. A tree trunk in the middle of the road, meanwhile, might not be recognized as a barrier.

It is possible, therefore, to fool an automated system into making mistakes that a human would not. Without explainable AI, we might not understand exactly why each of these mistakes is being made, and we won't be able to add the right kinds of images to train and fix the system.
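
To illustrate the mechanics behind this kind of failure, here is a minimal, FGSM-style sketch in PyTorch (assumed available). The tiny untrained classifier and random "image" are stand-ins for a real model and a real photograph; on a trained image classifier, the same few lines, with a perturbation too small to see, are often enough to change the predicted class.

```python
# A minimal FGSM-style sketch: nudge an input in the direction that increases
# the model's loss. The toy model and random "image" are illustrative stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 2))  # toy 2-class classifier
image = torch.rand(1, 3, 8, 8, requires_grad=True)            # stand-in "photo"
label = torch.tensor([0])                                      # the correct class

loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# Step the image slightly in the direction that increases the loss.
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```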

Building Explainable AI

Explainable AI requires a strong narrative that allows humans to understand how AI systems work, including explanations of the model itself and the data used to train it. The output of these models should also come with clear explanations, something that is missing when outputs are produced by black-box DNNs.

An example: Forty years ago, a credit-application refusal might have come with an explanation that the customer's income was unsatisfactory. Today, a credit application processed by a black-box DNN produces a simple “yes” or “no” answer. Explainability, in short, is lacking.

An explainable AI system might not tell an individual exactly how to generate a successful application, but it could explain the reasons for a refusal. This kind of actionable explanation would help users to understand how the system came to its decision. It would build trust in AI.

Rather than today's black-box approach, explainable AI will give humans a narrative for the decisions that machines make. It would no doubt take the form of a human-computer interface that offers layers of explanation, from the original outputs through to the feedback the AI system provides.

So, returning to the credit application: an explainable AI interface, whether via data, text, or speech, would provide a description that explains the refusal and offer suggestions as to how an individual might improve their chances of being accepted for credit, such as increasing income or cutting expenses.
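
As a sketch of what such an interface might compute under the hood, the snippet below turns a toy linear credit score into a refusal reason and a suggestion. The features, weights, and threshold are invented for illustration and are not any lender's actual policy.

```python
# A minimal sketch of turning a model's decision into a reason code and a
# suggestion, assuming a simple linear scoring model with illustrative weights.
FEATURES = {"income": 0.8, "debt_ratio": -1.2, "late_payments": -0.6}  # toy model weights
THRESHOLD = 0.0

def explain(applicant: dict) -> str:
    # Each feature's contribution is its weight times the applicant's value.
    contributions = {name: weight * applicant[name] for name, weight in FEATURES.items()}
    score = sum(contributions.values())
    if score >= THRESHOLD:
        return "Approved."
    # Identify the factor that pushed the score down the most.
    worst = min(contributions, key=contributions.get)
    return (f"Refused (score {score:.2f}). Main factor: {worst}. "
            f"Improving {worst} would raise your score the most.")

print(explain({"income": 0.2, "debt_ratio": 0.9, "late_payments": 1.0}))
```

Even this crude reason-code approach gives the applicant something actionable, which is exactly the narrative a black-box “yes” or “no” cannot provide.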

The Future of Explainable AI

As investments in AI continue to climb and extend to mission-critical systems, the demand for explainability will rise. Mature regulation will be a key driver: more than two in five business leaders (44 percent) want greater regulation over the ways in which we use AI. Articles within GDPR and the California Consumer Privacy Act will mean the creators of AI systems have to treat explainability as a matter of course.

Forrester believes there will be more progress toward trusted data for AI in 2021. The analyst firm says companies face pressure from consumer interest groups and regulators to prove data's lineage for AI, including audit trails to ensure compliance and ethical use.

It is critical that technologists and business leaders recognize that AI systems, however effective they are at making decisions, cannot be allowed to operate in isolation. Because AI systems affect humans, mediation between humans and those systems is necessary.

Humans need explainability from systems in a form they can understand. If we don’t provide these clear narratives, then trust in AI systems will suffer and sunk investments in mission-critical AI could be wasted.