
Businesses both large and small were rocked last year when the General Data Protection Regulation came into force. This major update to the rules governing how personal data, and the decisions made on that data, can be used was felt not just by corporations based in the European Union, but across the globe. Corporations operating in the finance domain in the United States may have seen similarities when this legislation arrived, as for many years they have been subject to comparable regulation, in particular around the credit application process.

The GDPR got many companies operating in the Artificial Intelligence space very concerned, and rightly so. The GDPR not only contains provisions for the security, storage and accessibility of data but, as Wired reported, goes further:

"The ICO says individuals "have the right not to be subject to a decision" if it is automatic and it produces a significant effect on a person."

Wired go on:

"There are certain exceptions but generally people must be provided with an explanation of a decision made about them."

A similar right exists in the United States under the Equal Credit Opportunity Act, under which creditors are legally required to notify applicants of actions taken and to provide specific reasons.

Nor is the expectation of an explanation restricted to domains where there is a legal requirement for one. For example, Twitter is now expected to justify why posts have been removed or user accounts banned altogether.

Impact on Artificial Intelligence technologies

The problem, then, is that most contemporary Artificial Intelligence and Machine Learning solutions used commercially are unable to satisfy these requirements. They learn from data and make automated decisions based on that learning, but they cannot explain those decisions, which puts them in direct conflict with the GDPR.

There is a lot of work on Explainable AI in academia and the deep learning community; however, the most common approach in active research isn't immediately going to solve the right-to-explanation problem. This is because much of the work in the deep learning community tries to explain the method rather than the outcome. It is focused on deep neural networks and on how to explain the mathematical representations learned across a vast training dataset.

At Chatterbox Labs we're doing something different. We aim to provide human-understandable decisions at the individual datapoint level (the phase called prediction) rather than across the whole cohort (the phase called training). When we explain the individual datapoint we can provide information on the process in line with the GDPR and right-to-explanation requirements. These explanations will be different for each individual and each datapoint, and this is a critical factor: the GDPR requires that individuals are given an explanation of decisions made about them.
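
To make the distinction between explaining the training phase and explaining a single prediction more concrete, here is a minimal, hypothetical sketch of a per-datapoint explanation. It assumes a trained scikit-learn classifier, invented credit-style feature names and a simple perturbation scheme; it is an illustration of the general idea of a local explanation, not Chatterbox Labs' actual method.

```python
# Minimal sketch of a per-datapoint (local) explanation.
# Model, feature names and data are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

feature_names = ["income", "existing_debt", "years_at_address", "num_defaults"]
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] - X_train[:, 1] - X_train[:, 3] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def explain_single_prediction(model, x, baseline):
    """Score each feature by how much replacing it with a baseline value
    changes the model's predicted probability for this one datapoint."""
    p_original = model.predict_proba(x.reshape(1, -1))[0, 1]
    contributions = {}
    for i, name in enumerate(feature_names):
        x_perturbed = x.copy()
        x_perturbed[i] = baseline[i]  # remove this feature's information
        p_perturbed = model.predict_proba(x_perturbed.reshape(1, -1))[0, 1]
        contributions[name] = p_original - p_perturbed
    return p_original, contributions

# Explain one applicant's decision, not the whole training cohort
applicant = X_train[0]
baseline = X_train.mean(axis=0)  # population average as a reference point
prob, contribs = explain_single_prediction(model, applicant, baseline)
print(f"Predicted probability of approval: {prob:.3f}")
for name, score in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {score:+.3f}")
```

The output ranks the features by their influence on this one applicant's outcome, which is the kind of individual-level, decision-specific reasoning the right to explanation calls for; a different applicant would receive a different ranking.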

The methods that underpin this approach are complex and are the result of years of research and development across multiple fields. You can find out more about our Explainable AI here. If you’ve got more questions, just get in touch and we’ll fill you in.
