Blog

Latest News

Managing AI Risks: Multiple Stakeholders Need Access to the Right Data and Insights

Stuart Battersby

September 30, 2020

In a guest post on insideBIGDATA, I wrote: “There is no doubt that AI is exploding across businesses, and it is not just the moon shots that make news headlines. Due to the speed and scale at which AI can operate, it is being used across the critical operations and decision making in everyday […]

Read More
Trustworthy AI & Validation of Enterprise AI Model Insights (AIMI)

Danny Coleman

September 2, 2020

Trustworthy AI is the number one priority for business leaders, and for very good reason. Despite all the advances made with AI in academic breakthroughs, experimentation, methods, model builds, accuracy, precision, recall and feature importance, enterprises are still seeking greater insights into their AI models, whether still in development or live in a production environment. Recalling AI […]

Read More
Assessing the Fairness of an AI model

Stuart Battersby

July 9, 2020

AI models can achieve very high accuracy (particularly with contemporary deep learning methods) and can churn through data at a much higher rate than humans can. This has led to their deployment in decision-making systems across various industries. In general, these systems are trained (that is, taught how to […]

Read More
Explainable AI & the boardroom conversation

Danny Coleman

June 19, 2020

AI is a somewhat underspecified term and can mean many things to different audiences. However, boardroom execs today are still seeking answers as to what AI will truly deliver to their business. Many have spent millions of dollars, yet few have seen significant real-world impact and ROI. One exec recently remarked "In the current […]

Read More
AI regulation is coming: Are you ready?

Stuart Battersby

May 22, 2020

Are you ready? Governments and regulators now have well-established regulations in place for managing data, including the security and privacy implications associated with it. However, as AI is now being regularly deployed within organisations, countries around the world are looking at how it should be regulated. Some regulation builds upon (or is inherent within) […]

Read More
Explainable AI Methods (XAI) & Why it takes years to create novel XAI IP fit for purpose

Henrique Nunes

May 15, 2020

XAI today is the holy grail of AI, and rightly so: the opportunity to unlock the black box and provide explanations, insights and next best actions will have an impact across every industry. This is just the beginning… BTW, Artificial General Intelligence, known as AGI, is still in the dream phase and a long […]

Read More
Implementing Explainable AI (XAI) & Proof of Value in 4 hours via Zoom

Danny Coleman

May 11, 2020

Today, more than ever, enterprises are seeking immediate value from AI investments. Most have made significant AI investments yet continue to struggle with the real-world implementation of “why did the machine make its decision” and next best action. In order to deliver real-world AI, the fundamental and critical phases of ideation, experimentation, engineering and […]

Read More
Explainable AI & adherence to impending global government AI regulations

Danny Coleman

May 5, 2020

Forecasts for the worldwide AI market show revenue growing from $10 billion in 2018 to $126 billion in 2025. The business upside is significant; however, with potential growth of this magnitude, risk and government regulation were inevitable. Governments around the world today see AI as a great enabler to […]

Read More
Addressing bias in AI needs Explainability; you can’t fix what you can’t see

Stuart Battersby

April 14, 2020

Contemporary machine learning systems are different from traditional rules-based systems. With those traditional systems, a series of rules was written to match the desired operation of the system. In machine learning, the system learns how to make decisions from the data presented to it. Whilst this has many advantages, a very […]

Read More
Enterprise XAI: Ensuring success within an Enterprise environment

Stuart Battersby

March 31, 2020

Explainability within Enterprise AI is critical, whether this is to comply with regulation such as the GDPR, for auditing your AI systems, for feeding back to customers, for getting buy-in from internal teams and boardrooms, or for acting on the decisions made by the AI. This message is becoming very apparent, but how does this high-level […]

Read More