AI engineers should take a lesson from the early days of cybersecurity and bake safety and security into their models during development, rather than trying to bolt it on after the fact, according to former NSA boss Mike Rogers.
“So when we created this hyper-connected, highly networked world, which we all live in, which data is so critical, we did not consider defensibility, redundancy, and resilience as core design characteristics,” said Rogers, a retired US Navy admiral who helmed both the National Security Agency and US Cyber Command between 2014 and 2018.
Rogers was speaking on a panel about AI and national security at the Vanderbilt Summit on Modern Conflict and Emerging Threats.
“Fifty years later, we find ourselves in a very different environment in which we created something in which subsequent concerns have come to a much higher level, and yet we’re somewhat hindered by the fact we just didn’t build it into the system.”
We’ve already seen plenty of examples of what could go wrong with insecure models. The potential for harm spans everything from leaking sensitive data to hallucinating — which is bad enough if the models are being used to generate code but can have life-threatening consequences in some sectors, like healthcare.
LLMs also regularly demonstrate biases based on skin color and gender, and we’ve seen these play out in hiring and housing decisions — just think what they could mean for something like the UK justice department’s so-called “murder-prediction tool.”
Rogers’ point: It’s better to plan for and mitigate these flaws now than to try to fix them after the fact.
Planning for the worst is a well-worn trope in the security industry, invoked both when developers insist that security is a fundamental part of their software development process and later when defenders lament those products' holes, the cost of patching them, and the general clunkiness of adding security as an afterthought.
Built in v. bolted on
Jen Easterly championed this issue as director of the US Cybersecurity and Infrastructure Security Agency (CISA) during the Biden administration, wrangling more than 150 technology vendors into signing CISA's Secure By Design Pledge, a set of voluntary guidelines meant to incentivize secure development practices.
She also spoke out frequently at industry events about software suppliers who ship buggy, insecure code, at one point accusing these vendors of “opening the doors for villains to attack their victims.”
While the Biden administration and some lawmakers have floated making tech companies liable for flaws in their products, the Trump administration has generally favored deregulation in the tech sector. That’s doubly true for AI regulation: On his first day back in the Oval Office, Trump tore up Biden’s executive order on AI safety — formally scrapping EO 14110, which had called for guardrails around AI development and deployment.
In all likelihood, the responsibility will fall to developers. And while it may be too late for cybersecurity, Rogers says it’s not too late for AI.
“As we’re looking at AI: What are the core characteristics that we want to be mindful of from the get-go? Because one of the takeaways from the implications of what we did in cyber — it cost a fortune to go back later and try to bolt stuff back on from a defensive standpoint,” Rogers said. “If we had built that into the model from the get-go, we’d be in a very different place.”
This is particularly evident in the application of AI for national security, he added, noting that often there’s an inherent disconnect between how developers believe their products should be used and how the customer — in this case, the Pentagon — plans to use the technology.
“The way we define national security right now is way too narrow,” Rogers said. “The worlds of technology, economic security, and national security are now all intertwined. You just see that in government policy. You see that in the way the world is reacting. And it’s interesting then to see how these different cultures are going to work together.”
Remember Project Maven?
To highlight the risks of insufficient planning around AI and national security, Rogers pointed to Project Maven, a 2017 Pentagon program that used Google’s AI to analyze vast amounts of drone surveillance footage.
While Google execs supported the program, employees vociferously opposed the partnership with the Pentagon and the use of the tech giant’s AI for warfighting. Google ultimately decided not to renew its contract for Project Maven after it expired in 2019.
The Project Maven fallout, Rogers suggests, is a case study in how failing to align technical design with real-world deployment risks can lead to ethical blowback, fractured partnerships, and, potentially, unsafe systems.
“Maven started with the DoD asking the following question: We’re going to apply DoD weapons, armed forces against individuals in locations around the world. That is a given, that is an element of what the DoD does. We believe that technology can help us do that in a more precise manner in which much fewer innocent individuals are killed or harmed, much less physical damage is done,” he recalled.
The message to Google, Rogers remembered, was “help us in the precise application of force. We’re not asking you to tell us, should we apply force or not? That’s a decision that we are going to make. We’re asking you, once we make that decision, help us to do it more precisely.”
“You saw these two cultures looking at the same problem totally differently,” Rogers recalled.
“The government, the DoD, is thinking: We’re trying to be responsible individuals. We’re trying to apply force much more precisely,” he said. “And the private sector looking at it going: Yeah, but you’re still ultimately about taking human life in the application of force. And that makes us really uncomfortable.”
Earlier this year, Google torpedoed its no-AI-for-weapons rule. Its new set of AI principles doesn’t mention the previous pledge not to use the tech to develop weapons or surveillance tools that violate international norms. This change reflects a broader shift in the tech industry’s stance on the ethical boundaries of AI deployment.
This tension between the corporate world and government end-users remains unresolved.
“We’re not in the same place,” Rogers noted, referring to the Project Maven era. “But we’re going to continue to have these cultural questions between us about what is one entity comfortable with … maybe a different entity has a different view.” ®