We're pleased to share a recent podcast on TechTarget’s Targeting AI series in which Chatterbox Labs’ CEO (Danny Coleman) and CTO (Dr Stuart Battersby) offer insight into why AI safety and security testing across the entire AI lifecycle is critical to scaling AI across the Enterprise.
The discussion clarifies the misconceptions of responsible AI, AI governance and AI safety & security.
The full podcast can be found here: The AI market does not understand AI safety | TechTarget
To clarify the differences in terminology in the market TechTarget report:
“According to Stuart Battersby, CTO of AI safety vendor Chatterbox Labs, responsible AI often refers to AI governance. When discussing responsible AI, vendors are looking at making sure that AI systems benefit users and do not cause harm that might lead to ethical or legal problems. "It might include policies and principles about how you treat AI," Battersby said on the Targeting AI podcast from Informa TechTarget. "You've got some solutions for AI governance, which typically are workflow things. It may decide who in the organization has a sign-off in the AI project or whether we have the right permissions to go forward with this project, with this AI use case." This is different from AI safety, which looks at whether the AI system produces harmful content, whether the controls and safety layers are adequate, or whether there is bias, Battersby continued. AI safety is assessing how the systems respond to inquiries, and sometimes involves the AI creator preventing the AI system from responding to certain inquiries.”
|
![]() |
Danny Coleman highlights:
"Unless these systems are proven to be safe, secure, robust and tested, how will we ever move more into production?" he said. "It's important that all stakeholders understand the role that they have to play in making sure AI systems are safe."
Key themes within the podcast are:
- Chatterbox Labs’ background in machine learning & NLP since 2011, and developing AutoML / synthetic data products lead us to the field of AI security and safety testing
- Battersby highlights the importance of safety & security testing AI software. AI poses significant enterprise risks
- Addressing AI risks throughout the AI lifecycle means more AI is put into production
- Responsible AI & AI governance frameworks, policies, model cards and workflow systems alone do not address the plethora of unseen AI risks
- Speed, scale and model improvements matter, however without safety & security testing there is an impasse for enterprise adoption
- Educating all stakeholders is pivotal to understanding wide ranging AI risks
- Cloud, hyperscalers & inference companies are perfectly positioned to remove the bottlenecks of AI risks by automating this process
- Guardrails are part of the solution but not a panacea on their own
- Constrained foundation models are not the answer to enterprise adoption
- Whilst US foundation model builders are leading the AI safety race, there’s still more work to be done to stay ahead
- Red teaming and guardrails help negate AI risks but do not overcome stringent compliance, legal, governance enterprise stress testing
- Releasing new foundation models every week is not the answer. Releasing models that are fit for purpose and risk free for the enterprise is the holy grail
- Regulation has been repealed in the US, yet overregulated in the EU; a happy medium is required to address enterprise AI risks
- Addressing enterprise harms is fundamentally different to foundation model builders, content safety filters, red teaming and guardrails
- Jailbreaks will always be found in any foundation model, without removing enterprise harms AI will struggle to be a multi-trillion dollar industry as stated by Gartner, IDC, McKinsey, etc
- Vendor market dominance is in play – having standards and real-world benchmarks (rather than just academic papers funded by the foundation model builders) would overcome enterprise bottlenecks
As the discussion shows, we are pro AI. If you would like to find out how our customers are getting more AI into production, quickly, please get in touch.