
Artificial Intelligence (AI) is becoming prevalent across all business sectors. Whilst the big headlines focus on use cases like self-driving cars and digital assistants such as Alexa, most of the time AI technologies are used to automate human decision-making in less flashy scenarios. Your credit card or mortgage application is likely reviewed by an AI model, as is your job application. Your healthcare provider is likely assessing how risky you are by analyzing your health records with an AI system, and retailers are processing your loyalty card data with AI to determine the best offer for you.

Whilst using AI in these scenarios brings speed, scale and opportunities that are otherwise unreachable, there is scope for these systems to encode (often unknown) biases and to raise privacy and security concerns. This is because, unlike traditional rules-based systems – in which subject-matter experts write down a list of rules – AI learns from historical data, which may have been collected from a historically biased human process. The software has a high level of autonomy in this learning process, as nobody explicitly writes out the rules.

A lot of discussion in this area concerns so-called black-box AI models (typically neural networks), which are very hard to understand. However, this discussion tends to focus on explaining an individual decision made by an AI model, which, on its own, misses the bigger picture of wider concerns; even transparent AI models can be biased and vulnerable, for example.

Regulators are picking up on these problems. There is AI-specific legislation such as the EU’s AI Act and the USA’s Algorithmic Accountability Act (and California’s CCPA), and existing legislation in certain domains, such as the USA’s Equal Credit Opportunity Act, also applies to AI. This means that, whilst ethical and responsible AI concerns should be addressed because it is the right thing to do, they should also be addressed because there is an impending regulatory requirement to do so.

All of this necessarily means that there must be more oversight and control, with both organizational and technical checks on AI development. But does this mean that, with all of these extra checks, AI development and progress will be stifled? To answer this, we must look at how things have been done in the past.

There has been a knowledge gap in organizations when it comes to AI. Many people in typical business functions simply have not understood what AI is or how it works. It has been pushed to the data science teams, who have been solely responsible for it. These teams (who are excellent at what they do) train very accurate AI models in the lab based on historical data; they iterate with the cutting edge of new methods, using vast arrays of servers and GPUs to produce the best model. Without other areas of the business interfering, they have had the freedom to experiment and innovate.

But things are changing. As AI becomes mainstream, more stakeholders across the organization are getting involved: Subject Matter Experts, Legal, Governance, Compliance and Privacy are all now part of the process. They are bringing new perspectives on AI development, introducing guardrails and checkpoints, validating the appropriateness of the use case and ensuring the ethical use of data and AI. These checks require independence from the teams and technologies used in the development process.
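To make the idea of such a checkpoint concrete, here is a minimal, purely illustrative sketch in Python of one kind of automated guardrail: comparing a model's approval rates across groups before it is allowed into production. The group names, the example decisions and the 0.1 threshold are hypothetical and for illustration only; they are not taken from any particular framework or regulation.

def approval_rate(decisions):
    # Fraction of positive (1) decisions in a list of 0/1 outcomes.
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    # Largest difference in approval rate between any two groups.
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = approved) split by a protected attribute.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],
}

gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.2f}")

# An (arbitrary) governance threshold: flag the model for independent review
# if approval rates differ too much across groups before it goes to production.
if gap > 0.1:
    print("Flagged: approval rates differ across groups; send back for review.")

The important point is not the specific metric but that the check is run independently of the team that built the model, as part of a defined approval gate rather than an optional extra.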

A new business process for AI

And so, to answer our question: is all of this governance stifling AI? No, it is not. Yes, it is changing the process – but it is ensuring that more AI is adopted across the organization.

You see, in the old model there was a lot of AI and innovation in the lab. However, when it was put forward to the business to move out of the lab and into production, things fell down. It was simply too risky, especially as stakeholders from the wider business had not been involved and did not understand it. This meant that the AI sat on the shelf or remained an interesting experiment in the lab. Those organizations that did push models straight into production without fully understanding their bounds of operation are now having to retrospectively inspect those models for ethical and regulatory concerns.

With full guardrails in place, and with buy-in from stakeholders across the organization, more AI will be adopted and put into production. The benefits will be realized as real business value, which in turn will lead to more investment in AI. The process will be different, and there will be more controls and checkpoints, but this should be seen as a sign of AI development maturing, not of innovation being stifled.

Under the new model, the future is good for AI.
