
Assessing the fairness of an AI model is a critical yet challenging task, one that should involve business process, human judgement, and transparent data on the behaviour of the model.

When any AI model is built using personal data, it is important (and legally required in most jurisdictions) that it does not unfairly discriminate. This protection from discrimination is usually indexed against sensitive or protected attributes (for example, gender, race, religion, etc.).

As a first pass, whilst an organization may hold these attributes in its dataset (for example, when building models in human resources the employer will know the gender of its employees), these fields are not typically included in the AI model. However, this alone is not enough to ensure the model is free of bias. Because the AI model is typically trained on historical data from a manual process, which may itself have been biased, those biases may be captured in other areas of the data through so-called proxy variables, as the sketch below illustrates.
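As a rough illustration of what a proxy check might look like, the following sketch measures how strongly a candidate feature is associated with a sensitive attribute that has been dropped from the model. The dataset and column names are entirely hypothetical, and this is just one simple association test, not a complete proxy analysis.

```python
# Minimal sketch: checking whether a feature may act as a proxy for a
# sensitive attribute. All data and column names are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

# "gender" is the sensitive attribute and is NOT a model feature,
# but "hobby_club" may correlate with it and act as a proxy.
df = pd.DataFrame({
    "gender":     ["F", "F", "F", "F", "M", "M", "M", "M"],
    "hobby_club": ["netball", "netball", "chess", "netball",
                   "rugby", "rugby", "chess", "rugby"],
})

def cramers_v(a: pd.Series, b: pd.Series) -> float:
    """Cramér's V association between two categorical columns (0 to 1)."""
    table = pd.crosstab(a, b)
    chi2 = chi2_contingency(table, correction=False)[0]
    n = table.to_numpy().sum()
    k = min(table.shape) - 1
    return (chi2 / (n * k)) ** 0.5

# Values close to 1 suggest the feature could leak the sensitive
# attribute into the model even though that attribute was excluded.
print("hobby_club vs gender:", round(cramers_v(df["hobby_club"], df["gender"]), 2))
```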

This means that the business assumptions you hold about your AI model may well not apply fairly across those sensitive attributes, even though the attributes themselves are not included in the model. Take the example of an HR model that automatically screens job applicants. It is intuitive to expect that, when the model assesses education level, applicants with higher grades are more likely to progress to the next round. But does this hold to the same extent across the various sensitive attributes? A simple manual check of this kind is sketched below.
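One way to probe this by hand is to compare acceptance rates per grade band, split by the sensitive attribute. The dataset and column names below are illustrative only, assuming the model's decisions have already been recorded alongside the sensitive attribute held outside the model.

```python
# Minimal sketch: does "higher grades -> more likely accepted" hold
# equally across a sensitive attribute? Data is hypothetical.
import pandas as pd

df = pd.DataFrame({
    "grade_band": ["high", "high", "high", "high", "low", "low", "low", "low"],
    "gender":     ["F",    "M",    "F",    "M",    "F",   "M",   "F",   "M"],
    "accepted":   [1,      1,      0,      1,      0,     1,     0,     0],
})

# Acceptance rate for each grade band, split by gender. If the grade
# assumption applies fairly, the columns within each grade band should
# look roughly similar; large gaps indicate a disparity to investigate.
rates = (
    df.groupby(["grade_band", "gender"])["accepted"]
      .mean()
      .unstack("gender")
)
print(rates)
```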

The Fairness pillar of the AI Model Insights platform carries out this assessment of your AI model automatically and produces disparity metrics. The results are presented in an easy-to-consume manner for business users and data scientists alike, starting at a high aggregate level with the ability to progressively drill down to find where, within both the model features and sensitive attributes, the disparity occurs and how large it is.
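To give a feel for the kind of aggregate disparity metric involved, here is a small sketch computing selection rates per group and the disparate impact ratio between them. This is a generic illustration of one common fairness metric, not the platform's actual implementation; the data is hypothetical.

```python
# Minimal sketch of an aggregate disparity metric: selection rate per
# group and the disparate impact ratio. Data is hypothetical.
import pandas as pd

preds = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "selected": [1,   0,   0,   1,   1,   1,   0,   1],
})

selection_rates = preds.groupby("gender")["selected"].mean()
disparate_impact = selection_rates.min() / selection_rates.max()

print(selection_rates)
print("Disparate impact ratio:", round(disparate_impact, 2))
# A ratio well below 1.0 (the "four-fifths rule" commonly uses 0.8 as a
# threshold) flags a potential disparity worth drilling into further.
```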

If you would like to see a brief demonstration of this in action, please see the following video:
