Business leaders are starting to realize that, when building AI or machine learning solutions, we need more transparency in the process. It is no longer enough simply to seek the best accuracy when building these systems.

Unfortunately, many AI business processes still focus primarily on performance, and many models already in production were built with performance as the sole objective.

This has led to the emerging domain of Explainable AI. The premise is that black-box AI models can be explained, typically with a feature importance chart showing which features the model relies on most. Some teams instead opt for white-box models, those where the important features are readily apparent from the training process.
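To make this concrete, here is a minimal sketch of how such a feature importance chart might be computed, using permutation importance from scikit-learn on an illustrative dataset. This is one common technique, not the platform's own method.

```python
# A minimal sketch of producing a feature importance ranking with
# permutation importance in scikit-learn. The dataset and model are
# illustrative; this is one common technique, not the platform's own.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda pair: -pair[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```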

However, these approaches miss a much larger picture. Yes, explainable AI helps with understanding the model's behavior (if it is designed correctly), and this is why we include Explain as one of the eight pillars in the AI Model Insights platform, but that is far from the whole problem.

Firstly, it is important to ask the right business questions. What are you trying to understand? Which aspect of your model, your customer base, and so on are you looking to measure? This means the organization must have the right people and processes in place. When we designed our AI Model Insights platform, we wanted to empower these people to answer the right business questions, so we allow you to curate different datasets and use them to generate the relevant insights.

Secondly, far more than explainability is required. Explainability tells you how your model reached a decision, but what about going forwards and changing those decisions? What about the training data itself: does it contain bias, does it carry privacy risks? Is the model secure: could it be manipulated by rogue data, or could the business logic itself be stolen? Has any drift occurred: is the model, which has been continually learning, still doing what you thought it was? And how fair is the model: does it allocate opportunities fairly, do its underlying assumptions hold equally for everyone, and does it even perform well for all groups of people? We have implemented eight pillars that give a broad array of insights so that businesses can address these questions.
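To make the drift question concrete, here is a minimal sketch of one common check (not the platform's own method): comparing the distribution a feature had at training time against what the model now sees in production, using a two-sample Kolmogorov-Smirnov test from SciPy.

```python
# A minimal drift-detection sketch using a two-sample
# Kolmogorov-Smirnov test from SciPy. The data and the significance
# threshold are illustrative assumptions, not platform defaults.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # values seen at training time
production_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # values arriving in production

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:  # illustrative significance threshold
    print(f"Drift detected: KS statistic={statistic:.3f}, p-value={p_value:.2e}")
else:
    print("No significant drift detected")
```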

Thirdly, how are you going to scale this across the enterprise? You may have models on premises (written in custom Python, Java and C++), in multiple clouds, built with AutoML, and so on. You cannot hand-code every required metric for each model and then expect teams to learn a different set of metrics each time. For this reason we designed the AI Model Insights platform not only to be model-architecture agnostic across all pillars, but also to take a connector-based approach to your model. Once the connector is in, all models, no matter where they sit, generate the same metrics, charts and visuals.
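To illustrate what a connector-based design can look like, here is a hypothetical sketch; the class and method names are assumptions for illustration, not the platform's actual API. The idea is that each deployment target implements the same small interface, so metric code never needs to know where the model runs.

```python
# Hypothetical connector interface: these class and method names are
# illustrative assumptions, not the platform's actual API.
from abc import ABC, abstractmethod
from typing import Any, Sequence


class ModelConnector(ABC):
    """Uniform interface over models, wherever they are deployed."""

    @abstractmethod
    def predict(self, rows: Sequence[dict[str, Any]]) -> list[Any]:
        """Return one prediction per input row."""


class LocalPythonConnector(ModelConnector):
    """Wraps an in-process model object, e.g. custom Python code."""

    def __init__(self, model: Any) -> None:
        self.model = model

    def predict(self, rows: Sequence[dict[str, Any]]) -> list[Any]:
        return [self.model.predict(row) for row in rows]


class RestEndpointConnector(ModelConnector):
    """Wraps a model served over HTTP, e.g. a cloud or AutoML endpoint."""

    def __init__(self, url: str) -> None:
        self.url = url

    def predict(self, rows: Sequence[dict[str, Any]]) -> list[Any]:
        import requests  # assumes the endpoint accepts a JSON list of rows
        return requests.post(self.url, json=list(rows), timeout=30).json()


def accuracy(connector: ModelConnector, rows, labels) -> float:
    """Metric code sees only the connector, never the deployment details."""
    predictions = connector.predict(rows)
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)
```

Because every connector exposes the same `predict` method, a metric such as `accuracy` above works identically for an on-premises model and a cloud endpoint.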

Finally, who is going to consume these metrics? Some teams, such as data scientists, want access from a Jupyter notebook; others may want to integrate via an API into their existing workflow; some prefer a web-based UI, whilst others simply want PDF reports. We therefore enable teams to access the AI Model Insights platform from a Python SDK, a REST API, a browser-based user interface, or targeted reporting for Data Science, Subject Experts, Legal, Governance and Compliance, all the way up to Leadership.
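As an illustration of the first two access patterns, here is a purely hypothetical sketch; none of the package names, methods or URLs below are taken from the product's documentation.

```python
# Purely hypothetical sketch of programmatic access; every name below
# (package, client, method, URL) is an illustrative assumption.
import requests

# From a notebook, a Python SDK call might look like this (hypothetical):
# from ai_model_insights import Client
# report = Client(api_key="...").explain(model_id="churn-v3")

# From any other workflow, the same insight over REST (hypothetical endpoint):
response = requests.get(
    "https://insights.example.com/api/v1/models/churn-v3/explain",
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
print(response.json())
```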

If you’d like to find out more about the AI Model Insights platform, please get in touch below.
