Tech Blog: Deploying Explainable AI APIs with existing AI assets & platforms

  • Stuart Battersby
  • September 27, 2019

Chatterbox Labs’ patented Explainable AI (XAI) product can explain any AI model. This is critical because it means that, rather than replacing years of AI investment with a new explainable model (which may not match the performance of the existing system), our XAI works hand in glove with your existing system, and deployment can take just hours.

We often get asked how our customers can do this so quickly. To explain, it’s worth discussing our technical architecture for a moment. All of our code runs on the Java Virtual Machine (JVM), meaning the only dependency is the JVM itself. There’s no need for complicated GPU hardware setups (everything is CPU-bound) and no need for complex software dependencies.

All AI systems that require explanation have a prediction endpoint, be it a function (or method) or a REST/HTTP endpoint. Our software architecture is designed to interrogate this endpoint. It takes as input the data to be explained, interrogates the prediction endpoint, and returns the explanations ready for visualization in a custom software stack or in standard, off-the-shelf visualization tools.
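To give a feel for this architecture, here is a minimal sketch in Java. The `PredictConnector` interface, the `explain` loop, and the toy model are all hypothetical illustrations invented for this post; the perturbation-style loop shown is one generic explanation technique, not Chatterbox Labs’ patented method.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical interface: the explanation engine needs only one call
// into the existing AI system -- a prediction on a single record.
interface PredictConnector {
    Map<String, Double> predict(Map<String, Object> input);
}

public class ExplainSketch {
    // A generic perturbation-style loop: drop each feature in turn,
    // re-query the prediction endpoint, and record how much the score
    // for the target class moves. (Illustrative only.)
    static Map<String, Double> explain(PredictConnector model,
                                       Map<String, Object> input,
                                       String targetClass) {
        double base = model.predict(input).get(targetClass);
        Map<String, Double> importance = new HashMap<>();
        for (String feature : input.keySet()) {
            Map<String, Object> perturbed = new HashMap<>(input);
            perturbed.remove(feature);
            double score = model.predict(perturbed).get(targetClass);
            importance.put(feature, base - score);
        }
        return importance;
    }

    public static void main(String[] args) {
        // Toy stand-in for an existing black-box model: it scores
        // "positive" higher when the feature "great" is present.
        PredictConnector toy = input -> {
            double p = input.containsKey("great") ? 0.9 : 0.3;
            return Map.of("positive", p, "negative", 1.0 - p);
        };
        Map<String, Object> doc = new HashMap<>();
        doc.put("great", 1);
        doc.put("service", 1);
        System.out.println(explain(toy, doc, "positive"));
    }
}
```

The key point is that the engine never needs the model’s internals: everything flows through the single `predict` call, which is why the same machinery works against any model.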

You have plenty of options for deployment. You can access our technology as a standard Java dependency or deploy it behind HTTP in a container. Both JVM and non-JVM AI systems can be explained by simply implementing a predict connector. Writing a predict connector is trivial for any entry-level programmer. For example, connecting to cloud ML services such as Google Cloud, Amazon SageMaker or Azure AutoML requires circa 10 lines of code.
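As a hedged sketch of how small such a connector can be, the snippet below POSTs an input record to an HTTP prediction endpoint and returns the response body. The endpoint path and JSON shapes are invented for illustration, and a real cloud service would also need authentication headers; a tiny in-process stub server stands in for the hosted model so the example runs on its own.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestConnectorDemo {
    // The connector itself: POST the input record as JSON and return
    // the raw prediction body -- roughly the "circa 10 lines" above.
    static String predict(String endpoint, String jsonInput) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonInput))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }

    public static void main(String[] args) throws Exception {
        // Stub server standing in for a remote model (hypothetical API shape).
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/predict", exchange -> {
            byte[] body = "{\"positive\":0.9,\"negative\":0.1}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        String url = "http://localhost:" + server.getAddress().getPort() + "/predict";
        System.out.println(predict(url, "{\"text\":\"great service\"}"));
        server.stop(0);
    }
}
```

Because the connector is just an HTTP round trip, swapping one cloud service for another mostly means changing the URL, the auth header, and the JSON field names.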

Using this approach, Enterprise AI systems that have taken years to perfect, but are still black boxes, can be opened up within just hours.
