
Are you ready?

Governments and regulators now have well-established regulations in place for managing data, including the associated security and privacy implications. However, as AI is now being regularly deployed within organisations, countries around the world are looking at how it should be regulated. Some regulation builds upon (or is inherent within) existing data regulations (such as the EU’s GDPR and the US Equal Credit Opportunity Act), whilst other approaches propose entirely new legislation (such as the proposed US Algorithmic Accountability Act). Either way, there is no doubt that AI regulation is here to stay. Are you ready for it?

What is the current state of play?

In some ways the EU has led on this in terms of general legislation across industries. Under the GDPR, which was adopted in 2016 and became enforceable in 2018, people have the right to an explanation when an automated decision is made with their data. Importantly, this explanation must apply to each individual decision made by a machine, not just aggregate explanations across whole groups. Whilst this is easy to produce with older rules-based methods, black-box deep learning models are unable to provide it on their own. For more details on this see our post here.

In the United Kingdom, the Information Commissioner’s Office (the ICO) has been investigating how AI should be explained. Most recently this has produced a substantial publication called ‘Explaining decisions made with AI’ which, we can assume, is a pathway towards more formal regulation. Critically, it makes clear that AI systems need to be explainable and that the target audience for those explanations must be considered: not only data science teams, but the wider organisation. Compliance Week offers a nice summary of the document here.

"One of the most major shake ups from a regulatory perspective in the US is the proposed Algorithmic Accountability Act"

In the United States, the Equal Credit Opportunity Act is already in place and regulating industries concerned with consumer credit; however, one of the biggest shake-ups from a regulatory perspective in the US is the proposed Algorithmic Accountability Act. For a good summary see this article. This Act is seen as a first step towards regulation of AI in the United States and focuses mainly on the potential for bias in AI systems. As the article notes:

“The Act would affect AI systems used not only by technology companies, but also by banks, insurance companies, retailers, and many other consumer businesses. Entities that develop, acquire, and/or utilize AI must be cognizant of the potential for biased decision-making and outcomes resulting from its use. Such entities should make efforts now to mitigate such potential biases and take corrective action when it is found.”

Similar paths are being taken in other areas of the world. For example, the Privacy Commissioner of Canada launched a consultation on AI regulation earlier this year.

An opinion piece by the Wharton School of the University of Pennsylvania sums everything up quite nicely:

“Just as we saw with information security, it is a matter of time before boards and CEOs will be held accountable for failures of machine decisions.”

How does Chatterbox Labs help?

Common to all these regulatory and guideline developments is that the time to act is now. Chatterbox Labs’ Explainable AI software platform can explain your existing AI assets and help ready your organisation for regulatory compliance today. There are many reasons why organisations adopt our technology, but the top five from a regulatory and compliance perspective are:

  1. Gain transparency into your AI. All regulation will require you to provide various explanations of your AI. However, most AI systems are already built and are black box. The solution is not to throw away your models and invest years in rebuilding them with older, transparent methods, but instead to apply our layer of explainability on top. This enables you to explain and audit your existing AI systems (a generic sketch of this idea appears after this list).
  2. Start identifying bias. The Algorithmic Accountability Act places the burden on organisations to regularly address bias in their AI systems. However, the first step is understanding how your systems are behaving. As we have written before, it is impossible to fix bias if you are unable to see it happening.

    "It is impossible to fix bias if you are unable to see it happening"

  3. Empower business users. As the Wharton article states, “AI audits should be performed by internal or external teams that are independent of the team that built the model. This is important to ensure that models are not audited in the same way that the data scientists who developed the model originally validated them”. With Chatterbox Labs’ software platform, business users (either internal or external) with no knowledge of the underlying machine learning can interrogate the model in a simple-to-use UI. Explanations are made in terms of the domain data that they are familiar with and can communicate to the wider organisation.
  4. Build trust. Again, looking at Wharton’s article: “However effective a model, the inability to understand the factors driving a model’s recommendation can be a major deterrent to managerial and consumer trust in machine decisions.” Chatterbox Labs’ XAI platform addresses this challenge by explaining the factors of any AI model across text, numerical, categorical and image data.
  5. Trace AI decisions. In the context of the Algorithmic Accountability Act, Jones Day recommend that organisations “Develop AI tools that improve the traceability of AI decisions to provide real-time insights into how decisions are made”. Our Explainable AI software platform is ready to do this now.
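
To make item 1 concrete, below is a minimal, generic sketch of what a post-hoc, model-agnostic explanation layered on top of an existing black-box model can look like: for a single record, it perturbs one feature at a time and measures how the model’s predicted probability changes. This is purely illustrative of the general technique; it is not Chatterbox Labs’ software, and the dataset, model and scoring choices are assumptions made for the example.

```python
# Illustrative sketch only: a simple, model-agnostic, per-decision explanation
# layered on top of an existing "black box" model. Not Chatterbox Labs' software.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Example data and a stand-in for a black-box model that is already deployed.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

def explain_decision(model, background, instance):
    """Score each feature by how much replacing it with a typical (median)
    value changes the model's predicted probability for this one decision."""
    baseline = model.predict_proba(instance.to_frame().T)[0, 1]
    contributions = {}
    for feature in instance.index:
        perturbed = instance.copy()
        perturbed[feature] = background[feature].median()
        contributions[feature] = baseline - model.predict_proba(perturbed.to_frame().T)[0, 1]
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

# Explain a single automated decision, in line with per-decision requirements.
for feature, impact in explain_decision(black_box, X_train, X_test.iloc[0])[:5]:
    print(f"{feature}: {impact:+.3f}")
```

The point of the sketch is the architecture rather than the particular scoring method: the explanation sits outside the model, so the existing black-box asset does not need to be rebuilt in order to be explained and audited.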

Find out more

If you would like to find out more about our XAI software, please see this video or get in touch to schedule a 30-minute demonstration.
