Explainability in Enterprise AI is critical, whether to comply with regulation such as the GDPR, to audit your AI systems, to give feedback to customers, to win buy-in from internal teams and boardrooms, or to act on the decisions the AI makes. This message is becoming widely accepted, but how does this high-level goal translate into action within an enterprise environment?

Let’s focus here on one decision that needs to be made: should explainability be tied to the model build process? Let’s also frame this in a typical enterprise environment in which:

  • Many years and millions of dollars have already been invested in existing AI assets
  • There are many different systems used across the organization, some from cloud vendors and many custom-built stacks (think Python & TensorFlow, for example)
  • High accuracy has been achieved by continual learning over the years, particularly with deep learning models that are black box in nature
  • Moving forward, enterprise teams still want flexibility in the tools they use for model building (that is, no vendor lock-in)

There are approaches that tie model build and explainability together, essentially creating a new, explainable machine learning model. The problem with this approach outside of the lab, in the enterprise environment, is that it conflicts with every point above. Existing investments in AI would need to be discarded, data ported into the new system, and new, explainable models trained from scratch (which are unlikely to match the accuracy of a deep learning system that has been learning for years). All future model build (if explainability is to be achieved) would need to take place in this new, explainable model build system. This approach is very challenging for an enterprise to adopt; many have tried without success.

The other option is to work with the AI assets that are already in situ (and newly created AI assets that will be used in the future). This is technically more challenging, but much more rewarding for the enterprise. It is the approach that Chatterbox Labs’ Explainable AI product takes.

Because this approach works with AI assets in situ, it must have a robust mechanism for connecting to those assets. Our Explainable AI product connects to the major cloud vendors (Microsoft, IBM, AWS & Google Cloud) as well as to custom-built models via REST/JSON or gRPC.
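As an illustration, a REST/JSON integration typically amounts to posting the instance to be scored and reading a score back. The endpoint URL and payload shape below are hypothetical assumptions for the sketch, not Chatterbox Labs' actual API:

```python
import json
from urllib import request

SCORE_URL = "https://models.example.com/v1/score"  # hypothetical endpoint

def build_score_request(features):
    """Serialise one instance into a hypothetical JSON scoring payload."""
    return json.dumps({"instances": [features]}).encode("utf-8")

def parse_score_response(body):
    """Extract the score for the first instance from the JSON response."""
    return json.loads(body)["predictions"][0]["score"]

def score(features):
    """POST one instance to the scoring endpoint and return its score."""
    req = request.Request(
        SCORE_URL,
        data=build_score_request(features),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return parse_score_response(resp.read())
```

The same request/response shape could sit behind a gRPC stub instead; the point is that the explainability layer only ever sees a scoring call, never the model internals.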

It must also be flexible enough to deal with different underlying AI model implementations. Our Explainable AI product needs no knowledge of the underlying model that is powering your AI system. The only requirement is that the model's prediction function returns a score (a standard principle in machine learning).
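To make the score-only requirement concrete, here is a minimal sketch (an illustration, not Chatterbox Labs' actual method) of interrogating a black-box model through nothing but its prediction function: occlude one feature at a time and measure how much the score changes.

```python
def occlusion_importance(predict, instance, baseline=0.0):
    """Score-change attribution for a black-box model.

    `predict` is any callable mapping a feature dict to a score;
    no knowledge of the underlying model is needed.
    """
    base = predict(instance)
    importances = {}
    for name in instance:
        perturbed = dict(instance)
        perturbed[name] = baseline  # occlude one feature
        importances[name] = base - predict(perturbed)
    return importances

# Toy stand-in model; in practice this could be a REST call to a hosted deep net.
toy_predict = lambda x: 0.5 * x["income"] + 0.25 * x["tenure"]

print(occlusion_importance(toy_predict, {"income": 2.0, "tenure": 1.0}))
# {'income': 1.0, 'tenure': 0.25}
```

Because `predict` is just a callable, the same routine works whether the score comes from a local TensorFlow model or a cloud-hosted endpoint.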

It also has to handle the complex data types that an organization holds. This isn’t restricted to standard tabular data (numerical and categorical); it includes text and image data too. Text, in particular, makes up a significant chunk of the enterprise machine learning workload. Our Explainable AI works across all these enterprise data types, ensuring maximum coverage across enterprise use cases.
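The same score-only interface extends naturally to text: drop one token at a time and re-score. The toy keyword scorer below is an assumption for illustration; in practice `predict` would call the deployed text model.

```python
def token_importance(predict, tokens):
    """Per-token attribution using only the model's score."""
    base = predict(tokens)
    return {
        tokens[i]: base - predict(tokens[:i] + tokens[i + 1:])
        for i in range(len(tokens))
    }

# Toy sentiment scorer standing in for a deployed text model.
POSITIVE = {"great", "excellent"}
toy_text_predict = lambda tokens: sum(t in POSITIVE for t in tokens)

print(token_importance(toy_text_predict, ["great", "boring", "plot"]))
# {'great': 1, 'boring': 0, 'plot': 0}
```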

Because the system works from outside the underlying machine learning, it also opens the audience up to business users (those who know their domain and data but aren’t data scientists). Our Explainable AI offers a consistent, easy-to-use interface that lets business users interrogate and explain the AI models in their organization.

Explainability is not just about novel research; yes, this is an important part (and why we file patents on our novel research), but even more critical is building an enterprise software product that fits within the constraints and requirements of an enterprise organization.
