Explainable AI Methods (XAI) & Why it takes years to create novel XAI IP fit for purpose

  • Henrique Nunes
  • May 15, 2020

XAI is today's holy grail of AI, and rightly so: the opportunity to unlock the black box and provide explanations, insights and next best actions will have an impact across every industry. This is just the beginning…

BTW – Artificial General Intelligence (AGI) is still in the dream phase and a long way off. So let’s deal with the here and now and outline how to derive value from all AI investments.

There are four camps for XAI within the AI community today, and while all of these approaches have merit, the question now being asked is which camp delivers immediate value to a business and truly scales.

  1. Internal research groups figuring out the inner workings of an AI model (say, a neural network) and understanding the intricacies of its layers.
  2. Open Source communities creating software for others to use. Think SHAP, LIME, Anchor and TCAV to name a few.
  3. New AI companies using variants of the SHAP, LIME, Anchor and TCAV approaches behind a shiny new UI, limited in the types of data they work with (mainly tabular).
  4. A few established AI companies focused on solving XAI with novel IP that delivers immediate ROI at scale in an enterprise or OEM arrangement.

Let’s look at each of these:

  1. Whilst interesting from a science perspective (novel and a genuine breakthrough), this work is narrow in its business application and to date has mainly been applied in social networks and credit decisioning, largely driven by regulation. It will take many person-years of working with bespoke AI methods to understand the inner workings of every type of AI model. This research is still in its infancy.
  2. SHAP, LIME, Anchor and TCAV have seen traction with internal data science groups applying the methods across different use cases. The reality of these approaches is that they are limited, cumbersome and don’t scale. Here are a few salient points on why, and what to look out for:
    • They need access to the training data – the biggest showstopper in AI today
    • All of these methods struggle with text, which accounts for a high proportion of AI use cases
    • They require a lot of compute time in high-dimensional feature spaces (e.g. NLP problems)
    • Explanations are often not human-interpretable or actionable
    • Instability of explanations: explanations for two very close points can vary greatly
    • Over- or under-attribution of interacting features
    • They ignore interactions between the input features
    • TCAV is difficult to apply to NLP problems
  3. As per point 2, adding a shiny new UI to methods that are limited does not cut it. In addition, tabular-only methods will address less than 15% of AI market use cases today.
  4. Work with Chatterbox Labs, an established AI company with model-, platform- and industry-agnostic XAI. We believe deploying real-world XAI, not just for data scientists, is critical to scale and mass adoption across an enterprise. Ease of use, quality of explanations, insights and next best actions will determine immediate ROI. Here are our differentiators:
    • XAI for Text (the only one in the world), plus Numerical, Categorical and Image data
    • XAI without the need for Training Data
    • Minimal Compute Time
    • Human Interpretable Explanations & Insights
    • Next Best Actions
    • Unlimited Scale
    • Enterprise XAI Software deployed in minutes
    • OEM ready Enterprise XAI software
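The instability point above (explanations for two very close points varying greatly) is easy to demonstrate. Here is a minimal, self-contained sketch in plain Python: a toy threshold classifier and a simple occlusion-style attribution, illustrative of perturbation-based methods in general rather than any specific library. The model, the baseline value and the two input points are all assumptions made for illustration.

```python
# Toy classifier: predicts 1 when the two features sum past a threshold.
def model(x):
    return 1 if x[0] + x[1] > 1.0 else 0

# Occlusion-style attribution (in the spirit of perturbation methods):
# each feature's score is the change in the model's output when that
# feature is removed, i.e. replaced with a baseline value of 0.
def occlusion_attribution(x):
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = 0.0
        scores.append(model(x) - model(perturbed))
    return scores

a = (0.55, 0.50)  # just above the decision boundary
b = (0.45, 0.50)  # a very close neighbour, just below it

print(occlusion_attribution(a))  # [1, 1] -> both features look critical
print(occlusion_attribution(b))  # [0, 0] -> neither feature looks important
```

Two nearly identical inputs receive completely different explanations because the attribution depends on which side of the decision boundary each point and its perturbations land on – the kind of instability that makes these explanations hard to trust operationally.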

The important question to ask is: How many of the AI investments we have made truly derive value?

We would love nothing more than to show how to derive value from any AI model regardless of where it sits and how it was built.

