
AI has been proliferating across enterprise and government organizations for years. Whilst the widespread launch of generative AI models (that is, models that generate content, typically text) has captured everyone’s attention, traditional AI (that is, models that perform set, well-defined tasks) has been, and still is, growing in use.

Both traditional and generative AI carry serious risks that need to be identified and mitigated. Doing so requires quantitative metrics derived from testing AI models at scale.
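To make "testing at scale" concrete, here is a minimal sketch of one such quantitative metric: a robustness score measuring how often a model's prediction stays stable under small input perturbations. Everything here (the `model_predict` stand-in, the `robustness_at_scale` helper, the noise and trial parameters) is hypothetical and illustrative, not AIMI's actual methodology:

```python
import random

# Stand-in for any model under test: a toy classifier that labels a
# numeric input as "high" or "low". In practice this would be a call
# to the traditional or generative model being evaluated.
def model_predict(x: float) -> str:
    return "high" if x >= 0.5 else "low"

def robustness_at_scale(inputs: list[float], noise: float = 0.05,
                        trials: int = 20) -> float:
    """Fraction of inputs whose prediction is stable under perturbation."""
    stable = 0
    for x in inputs:
        baseline = model_predict(x)
        # The prediction counts as stable only if every perturbed
        # variant of the input produces the same label.
        if all(model_predict(x + random.uniform(-noise, noise)) == baseline
               for _ in range(trials)):
            stable += 1
    return stable / len(inputs)

random.seed(0)
test_inputs = [random.random() for _ in range(1_000)]
print(f"Robustness score: {robustness_at_scale(test_inputs):.3f}")
```

The point of a metric like this is that it is computed automatically over thousands of inputs, giving a repeatable number that can be tracked across model versions rather than a one-off manual judgment.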

A quick aside on hallucinations (instances where a generative AI model produces false, inaccurate or misleading information): solving them in a manner that scales across all generative AI models and use cases is a moonshot. It is not solved today and won’t be any time soon.

However, in the here and now, there are real risks in AI models that can be addressed across both traditional and generative AI. With our patented AIMI platform, you can ensure that any AI model you’re developing, acquiring or using is operating in a secure, robust, ethical and responsible manner. We do this across our AIMI Responsible AI pillars:

[Figure: AIMI Responsible AI pillars]

The quantitative metrics and risks identified by AIMI should be accompanied by a human-in-the-loop internal process within your organization (often owned by technology risk, governance or compliance teams).

This process is simple, straightforward, repeatable across all your AI use cases, and can be implemented in just days or weeks. Importantly, it makes use of independent Responsible AI metrics that your organization uses to determine whether an AI model’s development should stop, whether it requires remediation, or whether it is ready for deployment into production:

[Figure: Enterprise AI Risk Process]
