The security industry loves its buzzwords, and that's always on full display at the annual RSA Conference in San Francisco. Don’t believe us? Take a lap around the expo floor, and you’ll be bombarded with enough acronyms and over-the-top claims to send you straight to the nearest bar, which will likely serve specialty cocktails with names like The Great CASB and Firewall Fizz.
While the last couple of years have been all about AI — promising fully autonomous security operations centers (SOCs) and predicting that large language models (LLMs) could exploit zero-days with no human intervention — we expect the 2025 conference at the end of this month to be all about agentic AI.
As with all emerging tech, AI agents mean different things to different people. But broadly speaking, an AI agent is task-oriented software that sits on top of an LLM and can analyze data and take action, at least somewhat independently of a human.
These agents are designed to learn and improve over time, serving as intelligent assistants to security analysts rather than mere script executors.
While some claims about agentic AI at RSAC will be grounded in reality, plenty will be pure hype. Either way, you won’t be able to avoid it.
From the Monday, April 28 keynote, “Security in the Age of Agentic AI,” to vendors pitching how their AI agents are tackling tasks like malware analysis or defending against the latest agent-spawned threats, it’ll be everywhere.
We’ve already seen early glimpses at recent vendor conferences, including Microsoft Ignite and Google Cloud Next this year, where those two tech giants, along with other software vendors, have eagerly jumped aboard the agentic AI bandwagon.
“Last year, it was about building GenAI assistants into software to help the administrator perform an individual task. For example, write a policy or explain a finding,” Gartner Research VP Neil MacDonald told The Register.
“2025 is all about AI agents – or agentic AI – where an orchestrated set of actions are completed autonomously by the agent,” he said. “It’s the difference between AI being a smart helper and responding to you versus agentic AI doing process- and outcome-specific things for you proactively and without the need for step-by-step guidance.”
While there are use cases for AI agents across nearly every industry, from payment processing to fast-food ordering, security examples include SOC-related tasks such as analyzing event logs, detecting unusual traffic patterns, and triaging alerts.
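To make the SOC examples concrete, here's a minimal sketch of the kind of traffic check an agent might automate: flagging unusual request volumes without an analyst hand-writing the detection logic each time. The function name, data shape, and z-score threshold are all illustrative assumptions, not any vendor's actual implementation.

```python
# Hypothetical sketch: flag hours whose request counts deviate sharply
# from the baseline, using a simple z-score test. Thresholds and data
# are illustrative assumptions only.
from statistics import mean, stdev

def flag_unusual(counts, threshold=2.0):
    """Return indices of counts more than `threshold` standard
    deviations from the mean of the series."""
    if len(counts) < 2:
        return []
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hourly request counts for one host; the spike at index 5 stands out.
hourly = [102, 98, 110, 95, 105, 4200, 99, 101]
print(flag_unusual(hourly))  # → [5]
```

A real agent would layer far richer signals on top, but the point stands: the statistical legwork happens without anyone sitting behind a keyboard writing per-case rules.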
AI in the SOC
“From an analytical process within security, an AI agent can analyze vast amounts of data very quickly, so this takes away the need for an analyst sitting behind the keyboard to constantly be writing the logic of what they’re looking for,” Jason Lord, chief technology officer at Salesforce-focused security firm AutoRABIT, told The Register.
Agents are also effective at spotting phishing attacks, which are increasingly AI-generated themselves. For example, an agent can analyze an email and determine “this is a bad return address,” Lord said. “This is a fake logo. This is a URL that’s hosted in a .parks domain and has only been up for the last 12 hours. This is legitimately bad. Filter it to the security team.”
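The checks Lord lists can be sketched as a simple heuristic scorer. Everything here — the TLD list, the 24-hour age cutoff, the function and field names — is a hypothetical illustration of the pattern, not anyone's production filter.

```python
# Hypothetical phishing heuristics in the spirit of Lord's example:
# mismatched return address, suspicious TLD, freshly registered domain.
SUSPICIOUS_TLDS = {".zip", ".top", ".click"}  # illustrative list, not exhaustive

def score_email(sender, return_path, url_domain, domain_age_hours):
    """Return the list of red flags found; an empty list means clean."""
    reasons = []
    if sender.split("@")[-1] != return_path.split("@")[-1]:
        reasons.append("return address does not match sender domain")
    if any(url_domain.endswith(tld) for tld in SUSPICIOUS_TLDS):
        reasons.append("URL uses a suspicious TLD")
    if domain_age_hours < 24:
        reasons.append("domain registered within the last 24 hours")
    return reasons

flags = score_email("billing@paypal.com", "x9@evil.click",
                    "evil.click", domain_age_hours=12)
# All three heuristics fire here — escalate to the security team.
```

The agent's value is in running checks like these (and hundreds more) on every message, then handing only the suspicious ones to a human.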
Additionally, agents – like earlier AI tools used in network analysis – can help detect threats and flag potentially malicious traffic across large enterprise environments.
“So when you’re looking for a security breach, and you’re looking at alerts that are firing all day long, what it can do is piece those together and say, ‘Hey, based on previous detections, these three functions just happened. And here’s all of the data that I’ve just pulled and provided to you to go make that human intelligence decision,'” Lord said.
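The correlation step Lord describes — piecing separate detections together for a human to judge — can be sketched as grouping alerts on the same host into time-bounded bursts. The alert shape, field names, and five-minute window are assumptions for illustration.

```python
# Hypothetical alert correlation: bundle alerts on the same host that
# fire within `window` seconds of each other into candidate incidents.
from collections import defaultdict

def correlate(alerts, window=300):
    """Return (host, [alerts]) incidents from a flat list of alerts,
    where each alert is a dict with 'host', 'ts', and 'rule' keys."""
    by_host = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        by_host[alert["host"]].append(alert)
    incidents = []
    for host, items in by_host.items():
        burst = [items[0]]
        for alert in items[1:]:
            if alert["ts"] - burst[-1]["ts"] <= window:
                burst.append(alert)  # same burst: close in time
            else:
                incidents.append((host, burst))
                burst = [alert]
        incidents.append((host, burst))
    return incidents

alerts = [
    {"host": "db01", "ts": 100, "rule": "new admin account"},
    {"host": "db01", "ts": 160, "rule": "outbound transfer spike"},
    {"host": "db01", "ts": 220, "rule": "log cleared"},
    {"host": "web03", "ts": 9000, "rule": "failed login burst"},
]
# The three db01 alerts collapse into one incident for an analyst to
# review; the lone web03 alert stays separate.
```

That bundled incident, with “all of the data that I’ve just pulled,” is what lands in front of the analyst for the final human call.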
But there are tasks that Lord says he wouldn’t want an agent to do. “I wouldn’t want it going and changing signature detection or logic,” he said. “I wouldn’t want it opening or closing firewall ports. I wouldn’t want it disabling or overriding security functions.”
Did we learn nothing from Skynet?
There’s also a lot that needs to happen to ensure that the agents are secured. Like any AI system, there’s risk: bad actors could poison the data used to train the model, or trick an agent into doing something it shouldn’t.
“I’m not going to go full Skynet,” Lord said, referring to the rogue AI from The Terminator films that wipes out humanity, “but an AI agent could perform functions that close sections of your network, or the entire network.
“It could create a denial of service, internally or externally, by not having resources available. It could unintentionally leak data, PII. It could override data protection classes and allow information to be sent outside the organization. It all goes back to: who’s looking at this, who’s going to be able to identify that there’s an agent doing something incorrectly?”
We’re too focused on the excitement, not the paranoia
The problem with all of the hype around agentic AI is that “we’re too focused on the excitement, not the paranoia,” Paul Davis, Field CISO at supply chain security shop JFrog, told The Register. “Just like GenAI for code, we need to have people who actually check it before they let it loose.”
Davis pointed to manufacturing, an industry he has first-hand experience with, as an example: computers there have to be shut down and brought back online in a specific sequence, or the systems won’t run properly.
“Unless you’ve taught the AI about that sequence — and most people don’t think about it — it’s going to break it,” he said.
Plus, there are privacy and security concerns tied to the data used to train these models: Where is the data stored, and for how long? Who has access to it? Can it be deleted, and if so, by whom?
“An AI agent without data is dead in the water,” Davis said. “Data regulations are going to go nuts.”
“So we’ve got the aspects of a thing that generates code, a thing that makes decisions, and runs on an infrastructure that deals with data,” he added. “Agentic AI is on the verge of being sort of mass population. It’s gonna get a bit scary, and I think we’re going to have a few bumps on the journey.”
One thing seems clear: it’s gonna be the talk of RSAC, so mark agentic AI on your Bingo cards, stat.
“I want a penny every time I hear AI or read AI at RSA so I can retire,” Lord said. Yes, you read that correctly: just a penny. And how many pennies does he expect that to net? “A house in Puerto Rico. Beach front.” ®