Chatterbox Labs’ Explainable AI software works with any AI model in any AI system. Our enterprise customers use it for:
- Continually validating an AI model’s business relevance
- Auditing, tracing & explaining any AI model (text, image and mixed data)
- Exploiting & reinforcing existing AI assets
- Complying with global government AI regulation initiatives
- Conforming to a unified Enterprise AI strategy
Most scalable enterprise software platforms manage their autonomous components using Docker containers, and Chatterbox Labs’ Explainable AI fits seamlessly into this architecture.
As our Explainable AI is completely model agnostic, nothing is custom built for the underlying machine learning model being explained, and there is no need to access the model’s internals. The system does not even need to know what kind of model is in use. This means that high-accuracy black-box AI assets already in production in your enterprise environment can now be explained with our Explainable AI software without any need to modify or disrupt them.
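To make the model-agnostic idea concrete, here is a toy perturbation-based explainer. It treats the model purely as an opaque predict function: it only sends inputs in and reads predictions out, measuring how the prediction shifts when each feature is masked. The `predict` function and feature names below are hypothetical stand-ins for illustration, not Chatterbox Labs’ actual method.

```python
def predict(features):
    # Hypothetical black-box model: we can only see inputs and outputs.
    # Here, a toy scorer that happens to weight two of three features.
    return 0.7 * features["income"] + 0.3 * features["age"] + 0.0 * features["zip"]

def perturbation_importance(predict_fn, features, baseline=0.0):
    """Score each feature by how much masking it changes the prediction.

    Works for any predict_fn: no access to weights, gradients, or
    model type is needed -- only the ability to call it.
    """
    base = predict_fn(features)
    importances = {}
    for name in features:
        masked = dict(features)
        masked[name] = baseline  # mask one feature at a time
        importances[name] = abs(base - predict_fn(masked))
    return importances

scores = perturbation_importance(predict, {"income": 1.0, "age": 1.0, "zip": 1.0})
print(scores)
```

Because the explainer never looks inside `predict`, the same code works unchanged whether the model behind it is a decision tree, a neural network, or a remote service.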
Looking at this in a containerized workflow, the existing black-box system has your custom models deployed inside your own container, exposing a predict endpoint. This is your existing architecture, and nothing about it needs to change.
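A minimal sketch of such a predict endpoint, using only the Python standard library, might look as follows. The `/predict` route, port, and JSON payload shape are assumptions for illustration; your container's actual inference code would replace the `score` stub.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def score(payload):
    # Stand-in for your custom model's inference code (assumption:
    # a toy rule on the input text, purely for demonstration).
    text = payload.get("text", "")
    return {"label": "positive" if len(text) % 2 == 0 else "negative"}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(score(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def serve(host="0.0.0.0", port=8080):
    # Container entrypoint would call serve() to listen for requests.
    HTTPServer((host, port), PredictHandler).serve_forever()
```

In practice this endpoint would sit behind the container's exposed port, and any HTTP client that can POST JSON can consume it.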
Our Explainable AI container exposes explain endpoints (for text, image and mixed data). All that is required is to deploy our Docker container and point it at your container’s predict endpoint. That’s it.