Explainable AI (XAI) – Auditing & Measuring AI Investments

  • Danny Coleman
  • November 1, 2019

With the plethora of AI clouds, engines and platforms available to the enterprise, everyone has an opinion on which is best to strategically lock into.

AutoML still appears to be the flavour of the day; however, Chatterbox Labs sees a different dimension to AI. By 2023, Explainable AI will be pivotal to any enterprise seeking true and valid business outcomes from its AI endeavours (a stated 55% uptake).

The AI Cloud, engine or platform of choice is no longer about the number of algorithms it has (that only data scientists can truly experiment with) or the beautiful UI showing pretty graphs and charts for presentation purposes.

The AI landscape is changing in that enterprises are finally searching for validation, explainability and traceability.

Scepticism will never be far away unless those operating in regulated and governed environments can explain the decisions their machines make in a way that humans can interpret.

The question now being asked in the boardroom is: why are we investing so much in AI when we still cannot measure the business outcomes and true financial upside of AI across our business?

Chatterbox Labs has spent the last 3.5 years addressing XAI (amassing patents and extensive novel research) by building a unique offering that works in tandem with existing investments.

Today we have XAI for text, mixed data and images, and we are working to expand that offering by releasing XAI for video and speech during 2020.

Executives investing in any form of AI should be asking themselves:

  • How can we evaluate our existing IP assets to assess true business impact and viability?
  • For new AI initiatives, how can we set KPIs for what an AI model can explain in a business context?
  • Can we deliver business outcomes with AI that have meaning, purpose and longevity?

We would be delighted to share why we believe XAI is fundamental to any previous or future AI initiatives.
