The Evolution of Machine Learning: Explainable AI

July 13, 2021 | Blog Posts

Once a scene in science fiction movies, artificial intelligence is a natural part of our daily lives today. We see it when Netflix recommends a film, when Instagram suggests profiles to follow, when Amazon adjusts its prices in real time, or when Google search grasps the meaning behind our queries.

Companies invest more in talent and technology every day to get the best performance, precision, and accuracy from their algorithms and to deliver the information users hope to find.

According to Ritu Jyoti, Vice President for AI Research at International Data Corporation (IDC), AI has become a top topic across industries thanks to its versatile application areas and resilience, and the pandemic has only magnified this effect. The same report projects that revenues for AI solutions will reach $500 billion by 2024.

AI is known for its “black-box” nature: it offers almost limitless possibilities for developers and scientists who train decision systems on a specific domain, yet it provides little visibility into the rationale behind its outputs. This may be negligible when we talk about getting movie suggestions from Netflix, but if the same kind of system is used to diagnose a disease or sentence a criminal defendant, the need to explain AI becomes critical. Moreover, today’s industries and governments require these technologies to ensure that users and customers trust AI-based systems when making decisions and can be confident they have made the right choice with the help of AI. Therefore, finding approaches and developing algorithms that transform black-box systems into glass-box systems is essential. People will be able to trust machines’ decisions once they understand how machines think and why they choose what they choose. This is where explainable AI comes into play.

What is Explainable AI?

Explainable AI (XAI) is a suite of machine learning techniques that produce more explainable models. The main goal of XAI is to explain how an algorithm arrives at a decision, which factors influenced its decision points, and why it ended up with that particular outcome. It is an emerging field that sits at the intersection of several concerns: transparency, causality, bias, fairness, and safety.
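
As a concrete illustration, the sketch below uses permutation feature importance, one common model-agnostic XAI technique, to surface which factors most influenced a classifier’s decisions. The dataset, model, and scikit-learn tooling are illustrative choices, not something prescribed in this article.

```python
# A minimal sketch of one common XAI technique: permutation feature importance.
# It measures how much a model's accuracy drops when each feature is shuffled,
# giving a rough answer to "which factors affected the decision?".
# Dataset and model choices here are illustrative assumptions.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature several times and record the drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the most influential features instead of only the final prediction.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```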

XAI’s progress has been driven by three accelerating factors:

  • Increasing ethical concerns. There is a growing need for transparency about how personal data is used, required by regulations such as the GDPR.
  • Growing need for trust. As the examples in the introduction show, humans need to be convinced before they put their trust in machines’ decisions, and that is only possible if AI systems can explain themselves.
  • Better human-machine synergy. Machines are part of our daily lives more than ever and have enhanced them with a wide range of functionality and increasing intelligence, so it is important to create an environment where humans and machines work together.

XAI was first introduced in 2004 by van Lent et al. to explain the behavior of a game-based simulation they developed to train the US Army. Although it is a game, it was designed specifically to train soldiers better without losing the effectiveness of the educational material. Full Spectrum Command consists of users (soldiers) and non-player characters controlled by AI. After a mission is completed, users can click on subordinates and ask them questions within the system, or review the key aspects presented by the case. This study also significantly influenced the gaming industry and its technologies.

Understanding Explainable AI

A simple illustration is shared below to help understand the main concept of XAI. There are a few highlights worth mentioning. First, the “today” portion of the figure represents how the classical machine learning mechanism works: the system takes training data, applies ML processes, and reaches a conclusion or makes a recommendation according to how the model was trained. It is also important to notice that the interaction with the system’s user is one-way, which means there is no explainability in this system. The user only sees the final decision made by the system.

In the XAI concept, on the other hand, the learned function is replaced by an explainable model and an explanation interface layer. The explainable model is responsible for taking the task and offering a recommendation or an action, while the interface is responsible for justifying why the system has made that decision. The user then makes the final decision based on the explanation, so there is a two-way interaction between the system and the user. Explanation interfaces can draw on Human-Computer Interaction (HCI) techniques to generate effective explanations.


Figure 1: Comparison of AI and XAI concepts proposed by DARPA
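
To make this pairing concrete, here is a minimal sketch of an explainable model plus a small explanation interface: a decision tree produces the recommendation, and a helper function walks its decision path to justify the outcome to the user. The iris dataset, the explain helper, and the wording of the explanation are illustrative assumptions, not part of DARPA’s proposal.

```python
# A minimal sketch of the "explainable model + explanation interface" pairing:
# the model offers a recommendation, and a small interface function turns the
# decision path into a human-readable justification. Dataset and phrasing are
# illustrative choices only.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

def explain(sample):
    """Explanation interface: trace which feature thresholds led to the decision."""
    tree = model.tree_
    node = 0
    reasons = []
    while tree.children_left[node] != -1:  # walk down until a leaf is reached
        idx = tree.feature[node]
        threshold = tree.threshold[node]
        if sample[idx] <= threshold:
            reasons.append(f"{data.feature_names[idx]} = {sample[idx]:.1f} <= {threshold:.1f}")
            node = tree.children_left[node]
        else:
            reasons.append(f"{data.feature_names[idx]} = {sample[idx]:.1f} > {threshold:.1f}")
            node = tree.children_right[node]
    prediction = data.target_names[tree.value[node].argmax()]
    return prediction, reasons

prediction, reasons = explain(data.data[0])
print("Recommendation:", prediction)
print("Because:", "; ".join(reasons))
```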

Some Application Areas

Explainable AI has a wide variety of application areas, including healthcare, finance, insurance, law, etc. Here healthcare and law case studies are briefly shared to give a holistic overview of how XAI contributes to industries.

Explaining Medical Diagnosis

While describing XAI, we mentioned that it is deployed to turn a black box into a glass box. No matter how advanced a decision model is, there is always a need for a human in the loop to confirm that the model makes the right decision and to handle unexpected scenarios. The medical domain is a suitable area for describing such cases. Convolutional neural networks have been used to interpret medical images within computer-aided diagnostic systems. Patient data collected through magnetic resonance imaging (MRI) and computed tomography (CT) scans, together with existing diagnostic archives, was processed, and a model was trained to make accurate diagnoses by looking at a patient’s scans. In this way, the research team trained the model to identify disease patterns by referring to the existing atlas.
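
As one hedged illustration of how such a diagnostic model could be explained, the sketch below applies occlusion sensitivity: it masks one region of a scan at a time and measures how much the predicted probability drops, highlighting the regions the model relied on. The tiny untrained PyTorch network and the random tensor standing in for a scan are assumptions for illustration only, not the actual models or data from the work described above.

```python
# A minimal sketch of occlusion sensitivity for a diagnostic CNN: mask patches
# of the scan one at a time and record how much the predicted probability drops.
# The untrained TinyCNN and the random "scan" are stand-ins for illustration.

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(8, 2)  # e.g. healthy vs. disease

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyCNN().eval()
scan = torch.rand(1, 1, 64, 64)  # placeholder for a preprocessed scan
target_class = 1
patch = 8                        # size of the occluding square

with torch.no_grad():
    baseline = model(scan).softmax(dim=1)[0, target_class]
    heatmap = torch.zeros(64 // patch, 64 // patch)
    for i in range(0, 64, patch):
        for j in range(0, 64, patch):
            occluded = scan.clone()
            occluded[:, :, i:i + patch, j:j + patch] = 0.0  # mask one region
            prob = model(occluded).softmax(dim=1)[0, target_class]
            heatmap[i // patch, j // patch] = baseline - prob  # drop = importance

print(heatmap)  # larger values mark regions the prediction depended on
```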

XAI in Legal

Explainable AI has great potential in legal applications. Courts can benefit from XAI’s pragmatic and transparent approach, which resembles judicial reasoning: a bottom-up way of reaching decisions, case-by-case consideration of delicate matters, and even streamlined handling of the paperwork required for legal settings and audiences. It is also believed that XAI can help the law become more transparent and more independent from private actors by becoming more public. The judge is the main consumer of an XAI algorithm’s recommendations and decisions: obtaining reasonable explanations for sentences, for the likelihood that a crime will recur, for due process, and so on requires explanations from XAI models. As data from real-life cases is collected and models are trained on it, a kind of common law of XAI will be created. This body of explanations can help satisfy the explanation requirements of criminal, civil, and administrative law settings, and can be used by judges, juries, defendants, and other parties.

Final Thoughts

Explainable AI emerged as an answer to the increasing need to understand machine learning and AI better. Being able to comprehend AI will help us build better human-machine collaboration and contribute to a transparent approach in which ethical concerns and trust issues can be resolved more easily.

Author: Gülşah Keskin, Product Analyst, Sestek