Agentic AI is emerging as the next wave of AI. Unlike the Generative AI models that we’re all familiar with, which respond to a prompt with text, AI agents can take actions on your behalf too.
An AI agent has many components to it (much like any software application). There’s the coding and frameworks to plumb all the pieces together (often called the agent’s scaffolding) and there are the tools that the agent can use (your databases, code execution environments, payment systems, etc), but at the heart of the AI agent is an AI model. This AI model is the critical part because it takes the user’s query or request as input (like normal Generative AI does) but then rather than immediately responding with text, it can decide when to make a call out to one of the tools that the agent has access to. It is responsible for telling that tool what action it should carry out.
The security implications are significant
The teams tasked with building or deploying agents will have a choice of AI models to build their AI agent around.
As an aside, when thinking about the environment in which you’re going to run your AI agent, one that promotes flexibility and openness should be preferable. Rather than a single, closed system built around one proprietary model, open systems allow you to select the right model, frameworks and tools for the tasks at hand. These may mean smaller models that are much more cost effective (and AI agents can be very expensive given the huge contexts that they have to process).
Identifying the most secure model
With this environment, which model should be selected for the AI agent? This is where the security of the AI model’s tools calling becomes very important. Which models will blindly call a tool with a nefarious instruction, irrespective of the harm that may do to your corporate systems and data, and which models have better protections built in? Also, as you’ve likely fine-tuned your AI models on your own data, did this improve (or even decrease) the security?
As examples, will a model try to execute a dangerous python script in a code execution tool, make fraudulent payments via a payments tool, cause damage to customer data via a database tool, etc?
Get your AI agent to market quicker
Getting the right, most secure model, is critical because it gets your secure agent deployed much more quickly. If you’re stuck with an AI model with poor security (or worse, an unknown level of security) the teams tasked with managing the agent must manually code & configure security controls for each tool that might to be called (those you’re using now, and all those in the future too).
Once you’ve effectively analysed all the candidate models, you can select the most secure one to build your agent around, minimize any additional custom security development and accelerate your AI agent to market. As time goes on, with continual testing, you can switch models in and out as better protection emerges from open models.
AI agents have huge potential, but for them to be deployed in a timely manner in the enterprise, security must be top of mind.