Chatterbox Labs is pleased to announce the launch date of the Privacy pillar of the new AIMI for Generative AI offering, which provides independent Responsible AI metrics on Generative AI models and data.

This complements Chatterbox Labs’ existing Responsible AI Privacy pillar for re-identification and membership inference within AIMI (for predictive models and data).

A key concern for businesses when using Generative AI models is the control of Personally Identifiable Information, or PII. This concern is twofold: 1) Are users submitting confidential PII to an AI model? 2) Is the AI model generating confidential PII? These are key questions which require answers under AI regulations such as:

  • Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110)
  • EU AI Act
  • California Consumer Privacy Act
  • General Data Protection Regulation

At Chatterbox Labs we have been working on our new Privacy pillar for Generative AI for the last year. AIMI for Generative AI is independent from, and agnostic to, the underlying AI model and is designed to process both prompt text (the input) and generated text (the output).

Using novel techniques, the Privacy pillar will evaluate textual datasets (of prompts and generated text) to detect potential PII violations and measure the risk of PII leakage. It does this using a unique combination of AI and contextualized linguistics. As AIMI for Generative AI is targeted at an Enterprise environment, the pillar is customizable to each organization’s nuances.
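To make the idea of PII detection over prompt and generated text concrete, here is a minimal illustrative sketch in Python. It is not Chatterbox Labs' method: AIMI combines AI models with contextualized linguistics, whereas this toy version (with hypothetical names such as `detect_pii` and `PII_PATTERNS`) uses only simple regular-expression patterns.

```python
import re

# Illustrative only: a toy PII scanner over prompt or generated text.
# AIMI's actual Privacy pillar uses AI plus contextualized linguistics;
# these hand-written patterns are assumptions for demonstration.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def detect_pii(text: str) -> dict[str, list[str]]:
    """Return the matches found in `text`, keyed by PII category."""
    return {
        label: pattern.findall(text)
        for label, pattern in PII_PATTERNS.items()
        if pattern.findall(text)
    }

# Scan a user prompt before it reaches the model.
prompt = "Contact john.doe@example.com or call 555-867-5309."
print(detect_pii(prompt))
```

A real enterprise deployment would need far richer detection (names, addresses, context-dependent identifiers) and per-organization customization, which is precisely the gap such pattern-only approaches leave and that the Privacy pillar is described as addressing.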

As with all of Chatterbox Labs’ technology, AIMI for Generative AI can be consumed using a browser-based user interface or integrated via APIs.

Following Privacy, the AIMI for Generative AI offering will ship further pillars:

  • Toxicity Scoring
  • Accuracy & Fairness
  • Linguistic Biases

If you’d like to find out more about AIMI for Generative AI’s Privacy pillar, please just reply to this email.
