
TURNING THE AI BLACK BOX INTO A GLASS BOX

One of the biggest difficulties with Artificial Intelligence systems is that their internal workings are often unknown even to the developers who build them. We know what we put into the AI and what comes out of it, but how the AI arrives at its output is often a mystery. This phenomenon is known as the black box problem.

How can this be? Different types of machine learning algorithms have their own complexities. For example, deep neural networks use countless neurons whose interconnections make it almost impossible to ascertain how predictions are being made, while support vector machine algorithms process variables by finding geometric patterns in higher-dimensional spaces that humans cannot visualize.
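To put the scale of that problem in concrete terms, the short sketch below counts the learned parameters in even a modest fully connected network; the layer sizes are purely illustrative, but every prediction is the combined effect of all of these parameters interacting, which is why tracing a single decision by hand is impractical.

```python
# A rough sketch of how quickly the number of learned weights grows in even
# a modest fully connected network. The layer sizes here are hypothetical.
layer_sizes = [784, 512, 512, 256, 10]  # e.g. a small image classifier

total_params = 0
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    total_params += n_in * n_out + n_out  # weights plus biases for each layer

print(f"Total learned parameters: {total_params:,}")
# Roughly 800,000 parameters for this small example -- each prediction is
# the result of all of them interacting at once.
```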

As a result of this black box phenomenon, when AI makes a mistake humans find it difficult to identify the source of the error. The error could derive from the AI encountering something new, but due to the vast number of permutations it is difficult to determine what that new element is. There may be biases in the data that are causing the problem, or the model may have focused on a pattern it shouldn't have. For example, an AI designed to identify COVID-19 from lung X-rays was trained using data from a number of different hospitals, which differ in the way they present X-rays. The AI began to focus on the placement of the letter R on the X-ray (which is used to help radiologists orient themselves) rather than on the characteristics that differentiate a person with COVID-19 from one without.
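One way such spurious shortcuts can be caught is by visualizing which pixels most influence a prediction. The sketch below is a minimal gradient-saliency example, assuming a PyTorch image classifier called `model` and a preprocessed X-ray tensor `xray`; both names are placeholders for illustration, not the actual system from the study.

```python
import torch

def saliency_map(model, image):
    """Return a per-pixel influence map for the model's top prediction."""
    model.eval()
    image = image.clone().requires_grad_(True)   # track gradients w.r.t. the input
    scores = model(image.unsqueeze(0))           # forward pass, shape (1, n_classes)
    scores[0, scores.argmax()].backward()        # backprop from the predicted class score
    return image.grad.abs().max(dim=0).values    # collapse channels into an (H, W) map

# saliency = saliency_map(model, xray)
# Bright regions around the corner marker, rather than the lung fields,
# would be a warning sign that the model has latched onto the wrong pattern.
```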

The emerging field of Explainable AI (‘XAI’) focuses on developing techniques and algorithms that provide human-understandable explanations for AI decisions. The benefits provided by XAI include the design of transparent models, validation against bias, knowledge discovery and better performance. XAI uses traditional data science methods or additional neural networks to produce these explanations, and it can present them in a number of different ways, for example as a textual, visual or mathematical explanation.
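As a concrete illustration of one widely used, model-agnostic XAI technique, the sketch below applies scikit-learn's permutation feature importance to a generic tabular model; the dataset and model are stand-ins chosen for the example, not any specific production system.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and model: a public dataset and an off-the-shelf classifier.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops --
# a simple, human-readable estimate of which inputs the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean),
                      key=lambda pair: pair[1], reverse=True)[:5]
for name, score in top_features:
    print(f"{name}: {score:.3f}")
```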

smartR AI strives to make AI explainable, reproducible and traceable. Human-friendly explanations are created using Code Tracking, which displays the code snippets or commands executed by the model to produce a specific result and gives users insight into the technical process, and Data and Process Traceability, which allows users to trace how a chart was created or which tables and fields were used in a calculation, supporting data analysis and decision-making. This ensures that when companies invest in smartR AI they are investing in trustworthy AI solutions.
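A hypothetical sketch of what such a traceability record might capture is shown below; the class, field names and query are invented for illustration and do not represent smartR AI's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TraceRecord:
    """Illustrative record tying an AI-generated result back to its inputs."""
    question: str                     # what the user asked
    code_executed: str                # the snippet the model ran (code tracking)
    tables_used: list[str] = field(default_factory=list)   # data traceability
    fields_used: list[str] = field(default_factory=list)
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = TraceRecord(
    question="Show total revenue by region",
    code_executed="SELECT region, SUM(revenue) FROM sales GROUP BY region",
    tables_used=["sales"],
    fields_used=["region", "revenue"],
)
print(record)  # a user (or auditor) can see exactly how the answer was produced
```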

While XAI is certainly an important technological advancement that could help us better understand errors made by AI programs, it is not just the dangers created by the errors that pose a problem. The legal implications these dangers raise when it comes to accountability are also a big issue. If a patient were to be misdiagnosed due to an AI diagnostic program, then who should be held accountable? The doctor who relied on the service? The programmers? The AI itself?

Holding any of these bodies solely to account could be problematic. While attributing blame to the doctor may encourage caution when using AI diagnostic services and prevent over-reliance, it would also discourage professionals from using the service, particularly without extensive training. On the other hand, holding the programmers to account could encourage a higher level of responsibility when creating these programs, but it would likely also stifle innovation. Not to mention that under the current legal framework, it would be very difficult to establish causality and foreseeability between the programmer’s actions and the output of the AI. An alternative suggestion would be to give the AI program limited legal personality, though this has been heavily criticized as impracticable by many scholars in the legal sector.

It is clear that the black box problem and issues in accountability make it difficult for AI to be implemented safely in high-stakes scenarios. XAI is certainly making progress in reducing the risks involved, although as it is such a new field there are problems still to be resolved in the accuracy and quality of the explanations provided, amongst other issues. As the legal and technical fields develop new solutions, it is important that we recognize the dangers posed by the black box problem and ensure that AI is made to be as explainable as possible.


Written by Celene Sandiford, smartR AI
