Stuart Battersby
July 7, 2022
Artificial Intelligence (AI) is becoming prevalent across all business sectors. Whilst the big headlines focus on use cases like self-driving cars and digital assistants such as Alexa, most of the time AI technologies are used to automate human decision making in less flashy scenarios. Your credit card or mortgage application is likely reviewed by an […]
Stuart Battersby
December 11, 2020
Chatterbox Labs are proud to have collaborated with the World Economic Forum (WEF) team on the Responsible Use of Technology project, resulting in the WEF’s release of the Ethics by Design report. The Responsible Use of Technology initiative of the World Economic Forum brings together experts and practitioners from the private sector, government, civil society […]
Stuart Battersby
October 22, 2020
Assessing the fairness of an AI model is a critical, yet challenging task that should involve business process, human judgement, and transparent data on the behaviour of the AI model. When any AI model is built using personal data, it is important (and legally required in most jurisdictions) that it does not unfairly discriminate. This […]
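As a rough illustration of the kind of transparent, quantitative evidence such an assessment can draw on, the sketch below computes a simple demographic parity gap between two groups of applicants. The data, group labels and choice of metric are hypothetical and illustrative only, not a description of any particular product or process.

```python
# Minimal sketch (illustrative only): a simple demographic parity check.
# The predictions, group labels and loan scenario below are hypothetical.

def selection_rate(predictions):
    """Proportion of positive (e.g. 'approve') decisions."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(predictions, groups, group_a, group_b):
    """Absolute difference in selection rate between two protected groups."""
    preds_a = [p for p, g in zip(predictions, groups) if g == group_a]
    preds_b = [p for p, g in zip(predictions, groups) if g == group_b]
    return abs(selection_rate(preds_a) - selection_rate(preds_b))

# Hypothetical model outputs (1 = approve) and the protected attribute of each applicant
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups, "A", "B")
print(f"Demographic parity gap: {gap:.2f}")  # |0.75 - 0.25| = 0.50
```

A metric like this is only one input; as the post notes, it needs to sit alongside business process and human judgement rather than replace them.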
Stuart Battersby
September 30, 2020
In a guest post on insideBIGDATA I wrote: “There is no doubt that AI is exploding across businesses, and it is not just with the moon shots that make news headlines. Due to the speed and scale at which AI can operate, it is being used across the critical operations and decision making in everyday […]
Stuart Battersby
July 9, 2020
AI models can achieve very high accuracy (this is particularly true with contemporary deep learning methods) and can churn through data at a much higher rate than is possible with humans. This has led to their deployment in decision making systems across various industries. In general, these systems are trained (that is, taught how to […]
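As a minimal sketch of what "trained from data" means in practice (the toy lending features, labels and use of scikit-learn's LogisticRegression are assumptions for the example, not taken from the post), a model is fitted to historical decisions and then applied to new cases:

```python
# Minimal, illustrative sketch of training a decision-making model from data.
from sklearn.linear_model import LogisticRegression

# Toy historical decisions: [income (k), existing debt (k)] -> 1 = approved, 0 = declined
X = [[55, 5], [22, 18], [70, 2], [30, 25], [90, 10], [18, 12]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)                   # the system learns a decision boundary from examples
print(model.predict([[60, 8]]))   # and applies that learned behaviour to a new applicant
```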
Stuart Battersby
May 22, 2020
Are you ready? Governments and regulators now have well-established regulations in place for managing data, including the associated security and privacy implications. However, as AI is now being regularly deployed within organisations, countries around the world are looking at how it should be regulated. Some regulation builds upon (or is inherent within) […]
Stuart Battersby
April 14, 2020
Contemporary machine learning systems are different from traditional rules-based systems. With these traditional systems, a series of rules is written to match the desired operation of the system. In machine learning, the system learns how to make decisions from the data that is presented to it. Whilst this has many advantages, a very […]
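A short, illustrative contrast between the two approaches (the loan scenario, feature names and use of scikit-learn's DecisionTreeClassifier are assumptions made for this sketch, not drawn from the post):

```python
# Illustrative contrast between hand-written rules and learned decision logic.

# Traditional rules-based approach: the decision logic is written by a person.
def rules_based_decision(income, debt):
    return "approve" if income > 40 and debt < 15 else "decline"

# Machine learning approach: the decision logic is learned from historical examples.
from sklearn.tree import DecisionTreeClassifier

X = [[55, 5], [22, 18], [70, 2], [30, 25]]   # [income, debt] for past applicants
y = ["approve", "decline", "approve", "decline"]

model = DecisionTreeClassifier().fit(X, y)

print(rules_based_decision(60, 8))       # rule authored explicitly
print(model.predict([[60, 8]])[0])       # rule inferred from the data
```

The learned model's behaviour is not written down anywhere as explicit rules, which is precisely why transparency and explainability become important.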
Stuart Battersby
March 31, 2020
Explainability within Enterprise AI is critical, whether this is to comply with regulation such as the GDPR, for auditing your AI systems, for feeding back to customers, for getting buy-in from internal teams and boardrooms, or for acting on the decisions made by the AI. This message is becoming very apparent, but how does this high-level […]
Stuart Battersby
January 27, 2020
Chatterbox Labs’ Explainable AI software works with any AI model in any AI system, which our enterprise customers use for:
- Validating continual AI model business relevance
- Auditing, tracing & explaining any AI model (text, image and mixed data)
- Exploiting & reinforcing existing AI assets
- Complying with global government AI regulation initiatives
- Conforming to a unified […]
Stuart Battersby
September 27, 2019
Chatterbox Labs' patented Explainable AI (XAI) product can explain any AI model. This is critical because it means that, rather than replacing all of your years of AI investments with a new explainable model (that may not achieve the performance that the existing system can), our XAI works hand in glove with your existing system […]