Unifying Responsible AI for both generative AI and traditional AI

Stuart Battersby

February 29, 2024

With the explosion of interest in generative AI, particularly in large language models (LLMs) in an enterprise context, one of the most interesting developments is that the AI discussion now includes businesspeople who didn’t typically talk about AI, or who had never even considered an enterprise use case for it. Now they are. As a technologist […]

Validate Data Fairness and Privacy for Responsible AI

Stuart Battersby

November 24, 2022

What is unique about machine learning and AI compared to traditional rules-based systems? Well, amongst other things, a key point is that AI learns from a training dataset instead of being explicitly coded with rules to follow. There is a huge focus on Responsible AI (aka Ethical or Trustworthy AI) – and rightly so. However, […]

Explainable AI is important – but alone it is not the solution

Stuart Battersby

September 16, 2022

Business leaders are starting to realize that, when building AI or machine learning solutions, we need more transparency in the process. It’s no longer OK to just seek the best accuracy when building these systems. Unfortunately, many AI business processes still focus primarily on performance – and there are many models already in production […]

Is Governance Stifling AI?

Stuart Battersby

July 7, 2022

Artificial Intelligence (AI) is becoming prevalent across all business sectors.  Whilst the big headlines focus on use cases like self-driving cars and digital assistants such as Alexa, most of the time AI technologies are used to automate human decision making in less flashy scenarios.  Your credit card or mortgage application is likely reviewed by an […]

Chatterbox Labs collaborate with the World Economic Forum on the Responsible Use of Technology

Stuart Battersby

December 11, 2020

Chatterbox Labs are proud to have collaborated with the World Economic Forum (WEF) team on the Responsible Use of Technology project, resulting in the WEF’s release of the Ethics by Design report. The Responsible Use of Technology initiative of the World Economic Forum brings together experts and practitioners from the private sector, government, civil society […]

Sensitive attributes & hidden bias; assessing the fairness of your AI model needs transparent data

Stuart Battersby

October 22, 2020

Assessing the fairness of an AI model is a critical yet challenging task that should involve business process, human judgement, and transparent data on the behaviour of the AI model. When any AI model is built using personal data, it is important (and legally required in most jurisdictions) that it does not unfairly discriminate. This […]
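
As one illustration of the kind of transparent data on model behaviour the post refers to (a hedged sketch, not Chatterbox Labs’ method): comparing the rate of favourable model outcomes across groups of a sensitive attribute is one simple, narrow view that can feed human judgement. The column names and data below are hypothetical.

import pandas as pd

# Hypothetical model outputs alongside a sensitive attribute (illustrative data only).
results = pd.DataFrame({
    "sensitive_attribute": ["A", "A", "B", "B", "B", "A"],
    "model_prediction":    [1, 0, 1, 1, 0, 1],   # 1 = favourable outcome
})

# Favourable-outcome rate per group; a large gap is a prompt for human review,
# not an automatic verdict of unfairness.
rates = results.groupby("sensitive_attribute")["model_prediction"].mean()
print(rates)
print("Demographic parity difference:", rates.max() - rates.min())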

Managing AI Risks. Multiple Stakeholders Need Access to the Right Data and Insights

Stuart Battersby

September 30, 2020

In a guest post on insideBIGDATA I wrote: “There is no doubt that AI is exploding across businesses, and it is not just with the moon shots that make news headlines. Due to the speed and scale at which AI can operate, it is being used across the critical operations and decision making in everyday […]

Assessing the Fairness of an AI model

Stuart Battersby

July 9, 2020

AI models can achieve very high accuracy (this is particularly true with contemporary deep learning methods) and can churn through data at a much higher rate than is possible with humans. This has led to their deployment in decision making systems across various industries. In general, these systems are trained (that is, taught how to […]

AI regulation is coming: Are you ready?

Stuart Battersby

May 22, 2020

Are you ready? Governments and regulators now have well-established regulations in place for managing data, including the associated security and privacy implications. However, as AI is now being regularly deployed within organisations, countries around the world are looking at how it should be regulated. Some regulation builds upon (or is inherent within) […]

Addressing bias in AI needs Explainability; you can’t fix what you can’t see

Stuart Battersby

April 14, 2020

Contemporary machine learning systems are different from traditional rules-based systems. With those traditional systems, a series of rules is written that matches the desired operation of the system. In machine learning, the system learns how to make decisions from the data that is presented to it. Whilst this has many advantages, a very […]
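
To make the contrast concrete (a minimal sketch, not the post’s code; the loan-style feature names, thresholds, and toy data are invented for illustration): in a rules-based system the decision logic is written out by hand, whereas a machine learning model infers its decision logic from training examples and that logic then has to be surfaced from the model itself.

from sklearn.tree import DecisionTreeClassifier, export_text

# Rules-based approach: the decision logic is coded explicitly.
def rule_based_decision(income, existing_debt):
    return "approve" if income > 50_000 and existing_debt < 10_000 else "decline"

# Machine learning approach: the decision logic is learned from data.
X_train = [[60_000, 5_000], [20_000, 15_000], [80_000, 2_000], [30_000, 12_000]]
y_train = ["approve", "decline", "approve", "decline"]
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)

# No human wrote the learned logic down; it must be extracted from the model,
# which is where explainability tooling comes in.
print(export_text(model, feature_names=["income", "existing_debt"]))
print(rule_based_decision(45_000, 8_000), model.predict([[45_000, 8_000]])[0])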
