
Contemporary machine learning systems are different from traditional rules-based systems. In a traditional system, a series of rules is written to match the desired operation of the system. In machine learning, the system instead learns how to make decisions from the data that is presented to it. Whilst this has many advantages, a significant concern is that biases inherent in the provided data are learned and carried through into the deployed system that makes decisions on new, live data.

People are often unaware of bias

An article on ZDNet discusses the issue of bias. It notes that the issue rarely stems from ill intentions on the part of the algorithm's designer; normally, bias occurs simply because the algorithms learn from data. Very importantly, the article also notes that the people who design and deploy these systems are often themselves completely unaware of the bias that is inherent in them.

The article argues that assessing bias should be part of the machine learning process from the very beginning (and then continually as the process evolves), rather than waiting for problems to occur later.
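To make that concrete, here is a minimal, illustrative sketch of one recurring check (the demographic parity difference between two groups) that could be run at every training iteration and after each retraining in production. Everything in it, including the data, the group labels and the alert threshold, is invented for illustration; it is not a prescribed methodology.

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy model decisions and an illustrative binary group attribute
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]

gap = demographic_parity_difference(y_pred, group)
if gap > 0.2:  # the alert threshold is an arbitrary example value
    print(f"Potential bias: positive-rate gap of {gap:.2f} between groups")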

This is where explainability comes in. As we have seen, the people designing and deploying these systems are often unaware of the bias; they therefore need to be made aware of it and, because the system learns from the data, this explanation should be made in terms of the data.

Bias in text data

As the ZDNet article discusses, bias is often found in text. Take gender bias as an example. It is not as simple as saying that, because the gender field was removed as a feature from the model, there isn't going to be bias by gender. Bias is often much more subtle and hidden. The text data holds information (such as gender pronouns) that will be learned and used for prediction. If the output variable of the machine learning system is biased with respect to these pronouns, then this bias will be unintentionally preserved in the predictions it makes.
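As a small, hedged illustration (the toy data, labels and model below are all invented), consider a tiny text classifier trained with no gender column at all. The outcome happens to track the pronoun, so the model learns the pronouns as proxy features, and inspecting its weights exposes them:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "she led the project and delivered on time",
    "he led the project and delivered on time",
    "she managed the team through the deadline",
    "he managed the team through the deadline",
]
labels = [0, 1, 0, 1]  # a biased outcome that happens to track the pronoun

vec = TfidfVectorizer()
model = LogisticRegression().fit(vec.fit_transform(texts), labels)

# Inspect the learned weights: the pronouns dominate, exposing the proxy
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
for token in sorted(weights, key=lambda t: abs(weights[t]), reverse=True)[:5]:
    print(f"{token:10s} {weights[token]:+.3f}")

No explicit gender feature was ever provided, yet the model's strongest signals are the pronouns themselves.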

Explainability of text-based machine learning predictions is extremely hard (which is why we've patented our explainable AI methods): these black-box models often use very abstract feature representations, and because text represents language, words don't operate in isolation.

Explainability is critical to detect bias

Robust processes and software for explaining machine learning models, in particular those based on text, are critical components in addressing bias. Without a clear, transparent picture all the way through the process it is impossible to address bias: how can you address something that you can't see?
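As a grounding example, here is a hedged, self-contained sketch of one generic baseline, leave-one-word-out ablation, which can surface influential words even for a black-box model whose internals are inaccessible. The toy data and model are invented, and this simple baseline is emphatically not the patented method referenced above.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["she led the team", "he led the team",
         "she shipped the release", "he shipped the release"]
labels = [0, 1, 0, 1]  # an outcome that tracks the pronoun
clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

# Drop each word in turn and measure how the prediction moves;
# a large shift marks an influential word
sentence = "she shipped the project early"
base = clf.predict_proba([sentence])[0, 1]
words = sentence.split()
for i, w in enumerate(words):
    ablated = " ".join(words[:i] + words[i + 1:])
    print(f"{w:8s} {base - clf.predict_proba([ablated])[0, 1]:+.3f}")

Because a check like this only needs the model's prediction function, it can be repeated at any point in the life cycle, which is exactly what the next point calls for.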

This isn't a one-time exercise either: explainability should be applied throughout the life cycle of the machine learning system so that there is complete transparency, and so that appropriate action and iteration can be taken.

Explainable AI software

Chatterbox Labs' Explainable AI software can explain AI models built across text, image and tabular datasets, irrespective of the underlying AI method used. It works with your existing models (and the models you'll build in the future), whether they're built in the cloud or custom built in house. If you'd like to find out more, please watch this video, or get in touch.


Keep in touch by joining our mailing list here.
