An AI that explains the decision of an AI

By arvind | Sep 6, 2018

AI is the buzzword of this century, a technology expected to change human civilization as profoundly as electricity or the steam engine did. AI has moved into many industries and is now making some big decisions for companies. Yet companies often find it difficult to explain how an AI arrives at these decisions, and the explanations they do offer are not well understood by the outside world. The decision-making process needs to be understood, whether for compliance reasons or to eliminate bias.

Explainable AI, or transparent AI, plays a big role in helping us understand that decision-making process. AI is already making consequential decisions: whether a loan should be approved, when a self-driving car should brake, whether a tumour is cancerous, and who should be allowed through airport security. According to Mike Abramsky (CEO, RedTeam Global), AI is well suited to these complex matters because of its ability to process far more data than a human can. He added, "The decisions AI can make are also reflective of the technology's weakness, the so-called 'black box' problem. Deep learning is non-transparent; the system simply can't explain why it got to the decision. No matter how much you respect AI's advance, though, most of us would also like to know how AI came to the conclusions that it did, if only out of curiosity's sake. So do proponents of a movement called explainable AI, and their reasons for wanting to know go far beyond mere curiosity."

What is this Explainable AI (XAI)?

Abhijit Thatte (VP of AI at Aricent) defines it this way: "Explainable AI is an AI whose decision-making mechanism for a specific problem can be understood by humans who have expertise in making decisions for that specific problem." Almost everyone has a slightly different explanation of what XAI means, but on why it is necessary there is broad agreement. Experts want to understand when they can trust an AI, especially as researchers plan to give AI more responsibility and far more complex tasks. XAI is meant to make the limits of these machines clear.

Complications with XAI

Mike Abramsky notes that there are still drawbacks: "Sometimes these explanations are confusing or wrong themselves. These approaches oversimplify the explanation for a recommendation which has been reached in a far more complex manner." There are also cases in which we simply don't have the time to analyze or examine the reasoning behind a machine learning recommendation. Explainable AI has been used for years in systems based on transparent methods. These include Expert Systems, Production Rule Systems, and Symbolic Reasoning Systems, anything considered GOFAI (Good Old-Fashioned AI). By contrast, AI models created from machine learning algorithms such as Support Vector Machines, Random Forests, Gradient Boosted Trees, and k-Nearest Neighbours, and deep learning algorithms such as Artificial Neural Networks, Convolutional Neural Networks, and Recurrent Neural Networks, are challenging to interpret even for an AI researcher.
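To see why the transparent methods above count as explainable, consider a minimal sketch (my own illustration, not from the article, assuming scikit-learn and its bundled iris dataset): a shallow decision tree can print the rules it has learned as readable if/else statements, something a deep neural network with millions of weights cannot do.

```python
# Illustrative sketch of a "transparent" model: a shallow decision tree
# whose learned rules can be printed and read directly by a human expert.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# export_text renders the fitted tree as human-readable if/else rules,
# so the decision-making mechanism is visible rather than hidden.
print(export_text(tree, feature_names=list(iris.feature_names)))
```

A black-box model such as a deep neural network offers no comparable readout of its reasoning, which is exactly the gap XAI tries to close.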

Who is creating Explainable AI?

When it comes to building Explainable AI, many organizations are putting in the effort, but DARPA's XAI program and LIME are among the most notable examples. For now these systems have clear limits: they can report what is in a photo or explain why an autonomous system chose one option over another. Systems that are robust, natural, and flexible, and that are aware of their own capabilities, are still in the early stages of research.
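LIME works by perturbing a single input, querying the black-box model, and fitting a simple local surrogate whose weights serve as the explanation. The sketch below is a hedged illustration of that idea using the open-source lime package with a scikit-learn random forest; the dataset and parameter choices are assumptions made for illustration, not an example from the article.

```python
# Illustrative sketch: using LIME to explain one prediction of a
# black-box classifier (a random forest trained on the iris dataset).
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(iris.data, iris.target)

explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names,
    mode="classification",
)

# Explain a single prediction: LIME perturbs this sample, queries the
# model, and fits a local linear model whose weights act as the explanation.
explanation = explainer.explain_instance(
    iris.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # per-feature contributions for the explained label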

Source: HOB