Trust, Ethics and Unintended Consequences of AI

Danny Coleman

February 15, 2021

While AI offers many benefits, it can also lead to significant unintended consequences. AI is at a tipping point. Without ethical guidelines, regulatory standards and data-driven insights, how will individuals, organizations and society learn to trust AI systems? In particular, fairness and bias are "grey areas" within AI that have significant consequences. Bias […]

Read More
All Inclusive AI Governance

Danny Coleman

November 24, 2020

Enterprise AI is grappling with moving siloed data science projects into a centralised system of human-centric processes and frameworks. Artificial Intelligence (AI) should be termed "All Inclusive": without a multi-stakeholder management approach with leadership buy-in, AI stays siloed, sitting within labs, or quite simply fails, hence an industry-wide success rate of […]

Read More
Trustworthy AI & Validation of Enterprise AI Model Insights (AIMI)

Danny Coleman

September 2, 2020

Trustworthy AI is the number one priority for business leaders, for very good reason. Despite all the advances made with AI in academic breakthroughs, experimentation, methods, model build, accuracy, precision, recall and feature importance, enterprises are still seeking greater insight into their AI models, whether still in development or live in a production environment. Recalling AI […]

Read More
Explainable AI & the boardroom conversation

Danny Coleman

June 19, 2020

AI is a somewhat underspecified term that can mean many things to different audiences. In business, however, boardroom execs today are still seeking answers to what AI will truly deliver for their organisations. Many have spent millions of dollars, yet few have seen significant real-world impact and ROI. One exec recently remarked "In the current […]

Read More
Implementing Explainable AI (XAI) & Proof of Value in 4 hours via Zoom

Danny Coleman

May 11, 2020

Today, more than ever, enterprises are seeking immediate value from their AI investments. Most have made significant investments yet continue to struggle with the real-world questions of "why did the machine make its decision" and next best action. In order to deliver real-world AI, the fundamental and critical phases of ideation, experimentation, engineering and […]

Read More
Explainable AI & adherence to impending global government AI regulations

Danny Coleman

May 5, 2020

Forecasts of worldwide AI market revenues show growth from $10 billion in 2018 to $126 billion in 2025. The business upside is significant; however, with potential growth of this magnitude, risk and government regulation were inevitable. Governments around the world today see AI as a great enabler to […]

Read More
AI Validation & Risk Assessment

Danny Coleman

March 12, 2020

AI industry analysts state that fewer than 8% of AI projects succeed in the real world. This is a staggeringly low number given the billions in investment and the big bets being made. Unlike traditional software implementations, AI has always been opaque, and it has simply been accepted that machines are correct in making their decisions. […]

Read More
Explainable AI (XAI) - Auditing & Measuring AI Investments

Danny Coleman

November 1, 2019

With the plethora of AI clouds, engines and platforms available to the enterprise, everyone has an opinion on which is best to strategically lock into. AutoML still appears to be the flavour of the day; however, Chatterbox Labs see a different dimension to AI. By 2023, Explainable AI will be pivotal to any enterprise […]

Read More
Explainable AI Product APIs

Danny Coleman

September 16, 2019

Chatterbox Labs has been working towards solving XAI for many years. Many companies claim to be making breakthroughs in XAI; however, our view is that unless you have excellent academic credentials, world-class research scientists and world-class product engineers, teams resort to infusing a single academic algorithm into a product offering […]

Read More