
Each of the main cloud vendors now has an AI offering that allows organizations to easily build, train and deploy AI models. This may be AWS SageMaker, Azure ML, Google Cloud Vertex AI or one of the many other excellent offerings. These tools sometimes include a base level of additional tooling, such as explainability (often a SHAP or counterfactual based implementation), marketed as Responsible AI (aka Ethical AI, Trustworthy AI, etc.). The aim of these tools is to bring customers into the cloud to run more workloads, meaning more underlying compute power is sold.
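
As a rough illustration of what a SHAP based explainability check looks like in practice, here is a minimal Python sketch; the model and dataset are placeholders chosen for the example, not part of any vendor's tooling:

```python
# Minimal sketch of SHAP-based explainability on a custom model.
# The model and dataset below are illustrative placeholders.
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Explain the model's positive-class probability with Shapley values
explainer = shap.Explainer(lambda data: model.predict_proba(data)[:, 1], X)
shap_values = explainer(X.iloc[:100])

# Per-feature attribution for the first prediction
print(shap_values[0].values)
```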

Senior executives at enterprise organizations may say things like “we are all in on AWS” (or Google Cloud, Azure, etc.), which can lead them to believe that they have their Responsible AI bases covered.  However, there is often a misunderstanding of how AI is actually used in the cloud.

When a data science team builds AI models, they often experiment with the latest methods, write custom code and optimize their models.  They do this using their own software stack – this could range from a large in-house implementation through to simple Python notebooks.  They thrive on the freedom to experiment and build models.

Whilst they may be running their model build and deployment in AWS (or equivalent), it is likely that they are not using the cloud’s AI-specific tooling, but instead simply compute instances running their custom software.  When the models are moved to deployment, they may be deployed on compute instances, Lambda functions, Kubernetes, etc.  Whilst these are all “in AWS”, they are not using any AWS AI-specific tooling.

The result is that those Responsible AI tools likely don’t apply to the custom models the data science team has trained – even though it’s all “in AWS”.

This means that two problems have arisen:

  • Responsible AI tools are not the core focus of cloud vendors – selling compute power is
  • The Responsible AI tools from the cloud vendors are often simply not used, given the custom nature of model builds

The bases for Responsible AI are not covered.

The knock-on effect is that one-off, custom metrics are coded up for each model, without a coherent, repeatable strategy.
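
To make that concrete, this is the kind of hypothetical throwaway snippet that often gets written: a single fairness metric hard-coded to one model’s output, which then has to be re-invented for the next project (the column names and data here are illustrative placeholders):

```python
# Hypothetical one-off fairness check, hard-coded for a single model's output.
# The column names ("gender", "prediction") are illustrative placeholders.
import pandas as pd

def demographic_parity_difference(scored: pd.DataFrame,
                                  group_col: str = "gender",
                                  pred_col: str = "prediction") -> float:
    """Gap between the highest and lowest positive-prediction rate across groups."""
    rates = scored.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Toy scoring table standing in for one model's predictions
scored = pd.DataFrame({
    "gender": ["f", "f", "m", "m"],
    "prediction": [1, 0, 1, 1],
})
print(demographic_parity_difference(scored))  # 0.5
```

Each model ends up with its own slightly different version of this, which is exactly the repeatability gap described above.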

At Chatterbox Labs, we have taken a different approach.  We focus on empowering enterprise organizations to make critical, real-world business decisions.

We recognize that data science teams should have the freedom to use whichever tooling for model build and deployment fits their needs, their corporate policies and their corporate guidelines.  Do you want to use a fully automated solution such as SageMaker? Great.  Do you want to fully code everything yourself in your own stack? Great.  Do you want to use a hybrid of two cloud vendors, such as AWS and Google Cloud? Great.

The AI Model Insights platform sits as a layer on top of your AI models, no matter whether they’re custom built or auto-built using cloud AI tooling.  The application is deployed by you into your cloud environment, so you can still be “in AWS” (or equivalent).

It gives a comprehensive range of Responsible AI insights.  These are not an add-on at the end, but our core focus.  The insights are generated across eight pillars:

  • Explain
  • Actions
  • Fairness
  • Vulnerabilities
  • Trace
  • Testing
  • Imitation
  • Privacy

Importantly, the insights are the same no matter where the models sit.  As Responsible AI grows out of just data science into stakeholders such as legal, governance, compliance and even up to leadership, this repeatability is key for scale and success.
