Chatterbox Labs' CTO, Dr Stuart Battersby, authored the guest article "Don't overlook independence in Responsible AI" on InsideBIGDATA.  From the article...

The arrival of ChatGPT and other large language models (LLMs) has brought the notion of AI ethics into mainstream discussion.  This is good because it shines a light on a field that has been tackling these issues for some time: Responsible AI.  Responsible AI doesn't just apply to ChatGPT and LLMs; it applies to any application of AI or machine learning that can affect people in the real world.  For example, AI models may be deciding whether to approve your loan application, progress you to the next round of job interviews, put you forward as a candidate for preventative healthcare or determine whether you're likely to reoffend while on parole.

Whilst the field of Responsible AI is gaining traction in the enterprise (in part driven by imminent regulation such as the EU's AI Act), there are issues with current approaches to implementing it.  Possibly due to limited AI and data literacy across large organizations, the task of Responsible AI is often thrown to the data science teams.  These teams are usually made up of scientists who are tasked with designing and building effective and accurate AI models (most often using machine learning techniques).

The key point here is that it is not the right approach to task the teams that build the models (and, by association, the technologies they use) with the job of objectively evaluating those same models.

Fields outside of AI have a long and effective history of requiring independence in audits.  As required by the Securities and Exchange Commission (SEC) in the United States, the auditor of a company’s finances must be fully independent from the company in question.  From the SEC: “Ensuring auditor independence is as important as ensuring that revenues and expenses are properly reported and classified.”

Read the full article here:
