Artificial Intelligence

Don’t overlook independence in Responsible AI

Marketing | March 20, 2023

Chatterbox Labs’ CTO, Dr Stuart Battersby, authored the guest article “Don’t overlook independence in Responsible AI” on InsideBIGDATA.  From the article… The arrival of ChatGPT and other large language models (LLMs) has brought the notion of AI ethics to mainstream discussion.  This is good because it shines a light on the area of work which […]


Responsible & Independent Validation of AI Models

Marketing | November 8, 2021

Ethical, Fair and Trustworthy AI is a hot subject for very good reason. The unintended consequences of AI for individuals, societies and organizations are huge:

Individuals: Physical Safety, Privacy & Reputation, Digital Safety, Financial Health, Equity
Organizations: Financial Performance, Nonfinancial Performance, Legal & Compliance, Reputational Integrity
Society: National Security, Economic Stability, Political Stability, Infrastructure Integrity

[…]


Chatterbox Labs collaborate with the World Economic Forum on the Responsible Use of Technology

Stuart Battersby | December 11, 2020

Chatterbox Labs are proud to have collaborated with the World Economic Forum (WEF) team on the Responsible Use of Technology project, resulting in the WEF’s release of the Ethics by Design report. The WEF’s Responsible Use of Technology initiative brings together experts and practitioners from the private sector, government, civil society […]


Implementing Explainable AI (XAI) & Proof of Value in 4 hours via Zoom

Danny Coleman | May 11, 2020

Today, more than ever, enterprises are seeking immediate value from their AI investments. Most enterprises have made significant AI investments yet continue to struggle with the real-world implementation of “why did the machine make its decision” and the next best action. In order to deliver real-world AI, the fundamental and critical phases of ideation, experimentation, engineering and […]


Addressing bias in AI needs Explainability; you can’t fix what you can’t see

Stuart Battersby | April 14, 2020

Contemporary machine learning systems are different from traditional rules-based systems. In these traditional systems, a series of rules is written to match the desired operation of the system. In machine learning, the system learns how to make decisions from the data presented to it. Whilst this has many advantages, a very […]
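As a hypothetical illustration of this contrast (not taken from the article, and assuming scikit-learn is available), a hand-written rule is transparent by construction, whereas a trained model learns its decision boundary from data, so the “rule” it applies has to be surfaced with explainability techniques:

```python
from sklearn.linear_model import LogisticRegression

# Traditional rules-based system: the decision logic is written down explicitly,
# so anyone can read exactly why an applicant was approved or declined.
def rule_based_approval(income_k: float, debt_k: float) -> bool:
    # income and debt in thousands
    return income_k > 30 and debt_k / income_k < 0.4

# Machine learning system: the decision logic is learned from historical data.
# Hypothetical training set: [income, debt] pairs (in thousands) with labels.
X = [[45, 5], [28, 15], [60, 10], [20, 12]]
y = [1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Both systems produce a decision, but the learned model's "rule" is implicit
# in its weights -- this is the gap that Explainable AI techniques address.
print(rule_based_approval(35, 8))        # transparent by construction
print(model.predict([[35, 8]])[0])       # requires explanation tooling
```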


AI Validation & Risk Assessment

Danny Coleman | March 12, 2020

AI industry analysts state that fewer than 8% of AI projects succeed in the real world. This is a staggeringly low number given the billions of investment and the big bets being made. Unlike traditional software implementations, AI has always been vague, and it has been accepted that machines are correct in making their decisions. […]


Tech Blog: Deploying Explainable AI APIs with existing AI assets & platforms

Stuart Battersby | September 27, 2019

Chatterbox Labs’ patented Explainable AI (XAI) product can explain any AI model. This is critical because it means that, rather than replacing years of AI investment with a new explainable model (which may not match the performance of your existing system), our XAI works hand in glove with your existing system […]
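To illustrate the general idea of explaining an existing model without replacing it (this sketch is not Chatterbox Labs’ patented method or API, just a generic model-agnostic pattern assuming NumPy), an explainer can be written as a thin layer that needs only the existing model’s scoring function:

```python
import numpy as np

def perturbation_attributions(predict_fn, x, baseline):
    """Model-agnostic sketch: score each feature by how much the prediction
    changes when that feature is replaced with a baseline value.
    predict_fn is the existing model's scoring function (e.g. predict_proba);
    the existing model is never modified or retrained."""
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    original = predict_fn(x.reshape(1, -1))[0]
    scores = []
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] = baseline[i]           # knock out feature i
        changed = predict_fn(perturbed.reshape(1, -1))[0]
        scores.append(original - changed)    # contribution of feature i
    return np.array(scores)

# Hypothetical usage with any existing model, e.g. a scikit-learn classifier:
# attributions = perturbation_attributions(
#     lambda X: clf.predict_proba(X)[:, 1], x=[35, 8], baseline=[0, 0])
```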
