At Chatterbox Labs, our patented AI Model Insights platform (AIMI for short) enables organizations to ensure that their AI is operating in a responsible, ethical and trustworthy manner.
Today, we’re going to take a look at how one of our customers is making use of the flexible deployment and integration options that AIMI provides. Remember, AIMI is always deployed on your infrastructure, giving you full control over your data and AI models. You can access it via the browser-based UI or integrate it into your custom applications and pipelines using our API & SDK.
The customer in question makes use of all these options without disrupting their existing infrastructure or existing AI assets. Their cloud vendor of choice is Microsoft Azure – as a company they run everything there – and their MLOps tool of choice is MLflow (with versioning in GitHub).
In the following flow chart, you can see how AIMI is tightly coupled with their MLOps lifecycle: new versions of their models are pushed to GitHub and deployed to MLflow, Responsible AI checks are executed with AIMI insights across all eight pillars, and all results, assets and reports are written back to MLflow:
In this scenario there is a full, end-to-end deployment of AIMI within the MLOps lifecycle.
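To make the shape of such an integration concrete, here is a minimal Python sketch of the write-back step described above. Everything AIMI-specific in it is an assumption for illustration only: the endpoint URL, the request fields and the report format are hypothetical, not the real AIMI API, and the MLflow artifact logger is passed in as a plain callable so the sketch stays self-contained.

```python
import json

# Hypothetical on-prem AIMI endpoint -- an assumption for this sketch,
# not a real product URL.
AIMI_URL = "https://aimi.internal.example/api/v1/insights"


def build_insights_request(model_uri: str, run_id: str) -> dict:
    """Assemble a (hypothetical) request asking AIMI to run Responsible AI
    checks across all pillars for one registered model version."""
    return {
        "model_uri": model_uri,  # e.g. an MLflow model URI such as "models:/demo/1"
        "run_id": run_id,        # the MLflow run the results are written back to
        "pillars": "all",        # request every pillar rather than a subset
    }


def log_report_to_mlflow(report: dict, log_artifact) -> str:
    """Serialise an AIMI report and hand it to an MLflow-style artifact
    logger (e.g. mlflow.log_artifact), so results live alongside the model."""
    path = "aimi_report.json"
    with open(path, "w") as f:
        json.dump(report, f, indent=2)
    log_artifact(path)
    return path
```

In a real deployment the request payload would be posted to the on-prem AIMI instance and `mlflow.log_artifact` would be passed in as the logger; keeping the logger injectable also makes the step easy to test in isolation.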
Of course, you don’t have to use Microsoft Azure, GitHub & MLflow – each organization’s cloud, on-prem or hybrid environment is different, and AIMI is flexible enough to fit into any of them.
If you’d like to find out more, please get in touch.