
With the explosion of interest in generative AI, particularly large language models (LLMs) in an enterprise context, one of the most interesting changes is that the AI discussion now includes businesspeople who didn't typically talk about AI, or had never even considered an enterprise use case for it.

Now they are.

As a technologist who has worked in AI for almost 20 years, I'm having many of these conversations with clients.

However, whilst there is much excitement about what is possible with generative AI, scratch beneath the surface and you often find that the use cases people discuss are better suited to what we might call 'traditional' AI, typically powered by supervised machine learning. Because of this, alongside the growth in generative AI, I am seeing significant (possibly even greater) growth in the adoption of traditional AI.

There are clear benefits to this approach: the enterprise typically controls the whole process, can apply its own governance controls, can easily make use of its own data, avoids the huge infrastructure investments that LLMs require, and more.

Of course, generative AI will settle down in the enterprise, and important and powerful use cases will emerge alongside those using a more traditional AI approach.

Importantly, with any type of AI, an enterprise must ensure that its AI is operating in an ethical, robust, secure and trustworthy manner. This is not just about creating policies (though that's an important task) but about quantitatively measuring AI models and validating them in a way that is scalable across the enterprise and comparable across models.

With this in mind, there can't be multiple platforms and metrics; there needs to be a unified enterprise approach, no matter where the models sit, whether they're generative or traditional AI, or whether they were purchased externally or developed internally.

Chatterbox Labs is over 13 years old, and we've been addressing Responsible AI with our patented AI Model Insights platform (AIMI for short) for a long time. It's deployed in some of the largest enterprise and government organizations in the world. Because it's independent of the data, model build and MLOps technologies that you use, it can produce independent validation of your portfolio of AI models and AI data.

Over the coming months we're introducing technologies to support the quantitative assessment of generative AI alongside our existing quantitative assessment of traditional AI, all within one platform. This way, no matter which type of AI you choose for your enterprise use cases, AIMI can validate it in one environment. For generative AI, this validation focusses on privacy, toxicity, accuracy, fairness and linguistic biases.

As always, our enterprise customers remain in control of their AI assets with AIMI deployed on their infrastructure.

If you’d like to find out more about AIMI and our AIMI 4 Gen AI offerings, please see our website (https://chatterbox.co) or get in touch.
