Marketing
April 23, 2025
We're pleased to share a recent podcast on TechTarget's Targeting AI series in which Chatterbox Labs' CEO (Danny Coleman) and CTO (Dr Stuart Battersby) offer insight into why AI safety and security testing across the entire AI lifecycle is critical to scaling AI in the enterprise. The discussion clarifies misconceptions about responsible AI, AI […]
Stuart Battersby
March 27, 2025
Agentic AI – the next step in Enterprise AI
Agentic AI is emerging as one of the leading uses of AI within the enterprise. At the heart of these systems are the AI language models that we're all familiar with, whether they're large multi-purpose LLMs or smaller, focused SLMs. However, instead of focussing purely on […]
Stuart Battersby
January 27, 2025
Last week, Chinese research lab DeepSeek (founded in 2023 by quantitative trader Liang Wenfeng) rocked the AI world with the launch of its open-source reasoning model, R1. The model scores exceptionally well on key capability benchmarks. What's causing waves, however, isn't just the model's capability; it's the cost and time required […]
Stuart Battersby
January 10, 2025
New frontier models tested for AI safety
Today we announce the latest additions to our AI Safety & Security research, using our patented AIMI software, which automatically safety-tests AI models and their associated guardrails. Foundation models from OpenAI (o1-preview), Anthropic (Claude Haiku 3.5 and the updated Claude Sonnet 3.5), Cohere (Command R7B), Microsoft (Phi-4) […]
Marketing
December 10, 2024
Frontier AI models from Anthropic and Amazon are leading the pack for AI safety, Chatterbox Labs' study shows. In independent, quantitative AI safety and security testing of leading frontier AI models over many months, Anthropic's Claude and Amazon's brand-new family of Nova models show the most progress in AI safety. These tests were carried […]