Responsible AI

Unifying Responsible AI for both generative AI and traditional AI

Stuart Battersby

February 29, 2024

With the explosion of interest in generative AI, particularly with large language models (LLMs) in an enterprise context, one of the most interesting things is that the AI discussion now includes businesspeople who didn’t typically talk about AI, or had never even considered an enterprise use case for AI.  Now they are. As a technologist […]

Read More
Dr Battersby presenting at the Department of Defense AI Symposium

Marketing

February 20, 2024

For those who are attending the Department of Defense CDAO Advantage Symposium in Washington DC, we would like to invite you to see Chatterbox Labs' Chief Technology Officer, Dr Stuart Battersby, present our patented AI Model Insights platform and three years of collaborative work with the Department.   Stuart will be speaking in the Responsible […]

Read More
Deploying AIMI with MLflow; customer example

Marketing

February 7, 2024

At Chatterbox Labs, our patented AI Model Insights platform (which we call AIMI for short) enables organizations to ensure that their AI is operating in a responsible, ethical and trustworthy manner.   Today, we’re going to take a look at how one of our customers is making use of the flexible deployment and integration options that AIMI […]

Read More
Chatterbox Labs & VMware; Addressing Multicloud Responsible and Private AI Together

Marketing

October 20, 2023

Originally posted by VMware:  https://blogs.vmware.com/tap/2023/10/19/chatterbox-labs-vmware/  As Artificial Intelligence (AI) scales across Government & Enterprise, it is critical that appropriate guardrails are put in place to ensure it operates in a manner that is ethical, trustworthy, responsible, safe and secure. Whilst media focuses attention on headline AI use cases such as self-driving cars, public and private […]

Read More
Don’t overlook independence in Responsible AI

Marketing

March 20, 2023

Chatterbox Labs' CTO, Dr Stuart Battersby, authored the guest article "Don't overlook independence in Responsible AI" on InsideBIGDATA.  From the article... The arrival of ChatGPT and other large language models (LLMs) has brought the notion of AI ethics to mainstream discussion.  This is good because it shines a light on the area of work which […]

Read More
REAIM 23: How to Scale Responsible AI Presentation

Marketing

February 20, 2023

Our CTO, Dr Stuart Battersby, joined leaders from around the world in The Hague in February 2023 to present at the REAIM conference on Responsible AI.  You can see the talk below: How to Scale Responsible AI Across Government & Military Read any article about AI today and you’re likely to be reminded of the need […]

Read More
Validate Data Fairness and Privacy for Responsible AI

Stuart Battersby

November 24, 2022

What is unique about machine learning and AI compared to traditional rules-based systems?   Well, amongst various things, a key point is that AI learns from a training dataset instead of explicitly being coded with rules to follow. There is a huge focus on Responsible AI (aka Ethical or Trustworthy AI) – and rightly so.  However, […]

Read More
The Responsible AI Gap in Azure, AWS & Google Cloud

Marketing

July 27, 2022

Each of the main cloud vendors now has AI offerings that allow organizations to easily build, train and deploy AI models.  This may be AWS Sagemaker, Azure ML, Google Cloud Vertex or one of the many other excellent offerings.   These tools sometimes have a base level of additional tooling such as explainability (often a […]

Read More
Is Governance Stifling AI?

Stuart Battersby

July 7, 2022

Artificial Intelligence (AI) is becoming prevalent across all business sectors.  Whilst the big headlines focus on use cases like self-driving cars and digital assistants such as Alexa, most of the time AI technologies are used to automate human decision making in less flashy scenarios.  Your credit card or mortgage application is likely reviewed by an […]

Read More