
AI models can achieve very high accuracy, particularly with contemporary deep learning methods, and can churn through data at a far higher rate than humans can. This has led to their deployment in decision-making systems across many industries.
In general, these systems are trained (that is, taught how to make these decisions) using historical data from the organisation’s existing processes. This means that any unfair practices, or historical biases, present in those processes may well be captured in the data and carried over into the automated decision-making system.

A critical (and recurring) task is therefore to evaluate an organisation’s AI models beyond their standard performance metrics. Here we are going to look at assessing the fairness of an AI model.

To do this we will use the example of an automated loan approval system. This system takes financial applications as input and makes a recommendation as to whether the loan should be approved or not. It has been trained using data from the existing, manual process of loan approvals which took place over many preceding years.
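For illustration only, such a system might be trained along the following lines. The file name, column names and model choice are assumptions made for this sketch, not details of any real lending system, and the sensitive field (gender) is included as a model input purely to keep the later fairness checks simple.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical historical data: one row per past application, with the
# final human decision recorded in `approved` (1 = approved, 0 = rejected).
history = pd.read_csv("historical_loan_decisions.csv")

numeric_cols = ["income", "loan_amount", "credit_score", "employment_years"]
categorical_cols = ["gender"]
feature_cols = numeric_cols + categorical_cols

# One-hot encode the categorical field, pass the numeric fields through.
preprocess = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols)],
    remainder="passthrough",
)
model = Pipeline([("prep", preprocess), ("clf", GradientBoostingClassifier())])
model.fit(history[feature_cols], history["approved"])

# Whatever bias is present in the historical human decisions is now part
# of what the model has learned to reproduce.
```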

Whilst any field used by an AI model can be assessed for fairness, the fields chosen are typically drawn from a set of commonly identified sensitive attributes (such as gender, race or religion). When assessing the fairness of an AI model, business users will have valid questions about how these sensitive variables affect the model’s decisions, for example:

“Would these rejected loan applications still have been rejected if the applicants had been male rather than female?”

Questions like this are highly salient (and fundamental to the fairness of an AI model), yet tricky to answer without robust data that a business user can interpret.
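One way to make such a question concrete is a simple counterfactual “flip test”: take the applications the model rejected, flip the sensitive field, and re-score them with the same model. The sketch below assumes a fitted scikit-learn style classifier (such as the hypothetical pipeline above) whose inputs include the gender column; it is an illustrative check, not Chatterbox Labs’ methodology.

```python
import pandas as pd

def gender_flip_test(model, rejected: pd.DataFrame, feature_cols, gender_col="gender"):
    """Re-score rejected applications with the gender field flipped and
    report how many decisions change from reject to approve."""
    flipped = rejected.copy()
    flipped[gender_col] = flipped[gender_col].replace({"female": "male", "male": "female"})

    original = model.predict(rejected[feature_cols])
    counterfactual = model.predict(flipped[feature_cols])

    # A rejection that turns into an approval when only gender changes is
    # direct evidence that gender is influencing the model's decision.
    flips = int(((original == 0) & (counterfactual == 1)).sum())
    return flips, flips / len(rejected)
```

A non-trivial flip rate suggests that, for at least some applicants, the answer to the question above would be “no”.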

At Chatterbox Labs we enable business users to derive robust yet interpretable fairness statistics for their AI models. We empower those users to model wide-ranging scenarios whilst the complex statistics are computed under the hood. These scenarios are completely configurable to the use case at hand: in addition to the typical sensitive variables, users are able to assess indirect biases that are unique to their use case.

These statistics are then presented to the user in a meaningful way; it is no good presenting another black box score (this time for fairness). With Chatterbox Labs, business users can identify not just where bias exists in the automated system, but also how much of an impact it is having on the automated decisioning process. This gives users complete transparency into the fairness of their AI model.
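To give a flavour of the kind of interpretable statistic that can sit behind such a report, the sketch below computes per-group approval rates and the disparate impact ratio over a model’s decisions. This is a generic group-fairness measure with assumed column names and toy data, not a description of Chatterbox Labs’ own statistics.

```python
import pandas as pd

def approval_rate_report(decisions: pd.DataFrame,
                         sensitive_col: str = "gender",
                         outcome_col: str = "approved"):
    """Per-group approval rates plus the disparate impact ratio
    (lowest group rate divided by highest group rate)."""
    rates = decisions.groupby(sensitive_col)[outcome_col].mean()
    disparate_impact = rates.min() / rates.max()
    return rates, disparate_impact

# Hypothetical decisions: a ratio well below 1.0 (a common rule of thumb
# is 0.8) suggests one group is approved far less often than another.
df = pd.DataFrame({
    "gender":   ["female", "female", "male", "male", "male", "female"],
    "approved": [0, 1, 1, 1, 1, 0],
})
rates, ratio = approval_rate_report(df)
print(rates)
print(f"disparate impact ratio: {ratio:.2f}")
```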

If you’d like to find out more, please get in touch.
