Closing the Gap: Trust Doesn’t Get Built During the Crisis
Part 3 of “Closing the Gap: Counterterrorism, AI, and a Crisis of Trust”
The war is in its fourth week. Iran’s missiles have hit Kuwait, Saudi Arabia, Qatar, and the UAE. Over the weekend, Iran struck near Israel’s nuclear research facility at Dimona – for the first time in the war. As of this morning, there’s a 48-hour ultimatum on the table: open the Strait of Hormuz or face strikes on Iranian power plants. The Strait carries roughly 20 percent of the world’s oil. And the IAEA said something nobody wanted to hear: even after all these strikes, Iran’s enriched uranium and its ability to make more will likely still be there when the fighting stops.
The hard problem survives the war.
Tomorrow, a federal judge in San Francisco hears Anthropic’s case against the Pentagon. The same company blacklisted as a national security threat has – according to the Wall Street Journal – been helping identify targets in Iran this entire time.
And this week, court filings showed that the Pentagon told Anthropic in writing that the two sides were “nearly aligned” – a full week after Secretary Hegseth declared the relationship dead.
So they were close. Then they blew it up. And now Anthropic is in court while Americans are still in the fight.
That’s the gap. It didn’t have to be this way.
I’ve Seen This Work
There was a country at INTERPOL that had been sitting on fighter data for two years. The data existed. The threat was real. What wasn’t there was enough trust to make sharing feel safe.
One meeting. The right people in the room. Six weeks later, they were sharing every week. Three more countries followed.
Someone went first. It worked.
When I coordinated the 89-nation coalition to defeat ISIS, the hardest days weren’t the operational ones. They were the days when countries with different laws, different intelligence cultures, and different political pressures had to decide whether to hand over things they’d never shared before – financial records, biometric data, source information that could expose how they collected it. The ones that leaned in got results. Not because they trusted everyone at the table. Because they decided the mission mattered more than protecting every edge they had.
I’ve watched this play out enough times to know how it goes. The trust has to be built before the crisis. Once it hits, that window is gone.
What Anthropic Actually Did
Most of the coverage of this standoff has missed what Anthropic actually did.
Anthropic refused two things: building tools for mass surveillance of Americans, and removing human oversight from weapons that kill people. They held those lines knowing it would cost them a major government contract. Then, after they were blacklisted, they kept supporting U.S. forces in Iran anyway – at nominal cost, for as long as necessary.
Some people look at that and see a tech company being difficult. I look at it differently.
The Snowden revelations showed the world that mass surveillance of Americans was happening. The laws designed to prevent it didn’t stop it. So when a company says it won’t build tools that make that easier – and puts real money behind that position – that’s not a company being difficult. That’s a company deciding some things matter more than the bottom line. We used to call that integrity. This industry needs more of it.
The Pentagon’s need for capable AI tools is real and urgent. That’s not the question. The problem is that nobody built the framework for this conversation before the stakes got this high. Anthropic was on classified networks for years before this blew up. The capability was there. The rules for the hardest situations were never written. That’s on both sides.
Here’s what tells you everything about how this actually went: 22 retired U.S. military leaders – former secretaries of the Air Force, Army, and Navy – filed court briefs this week backing Anthropic. They said the blacklisting puts soldiers at risk during active combat operations. These are not people who side with tech companies over the Pentagon out of habit. When they say the designation is being used for retribution rather than security, that’s worth paying attention to.
And the Pentagon’s central argument – that Anthropic could override or disable its own AI during a military operation – was called out as fiction in sworn testimony this week. Anthropic’s Head of Policy, a former National Security Council official who was in the room for the negotiations, said that claim was never raised once during months of talks. It appeared for the first time in the government’s court filings.
What the Gap Is Costing Right Now
The cost doesn’t show up in court filings.
It shows up as a kid getting pulled into something online that nobody caught in time. A network running for months before anyone connected the dots. AI-enabled radicalization is moving faster than the tools built to fight it, partly because those tools aren’t connected to the intelligence that would make them sharp. The people building the tools can’t see what the government knows. The government can’t fully use what those companies built. Both sides are holding a piece of a puzzle they can’t finish.
Iran’s cyber operators – MuddyWater, APT33, the broader proxy network – didn’t stop when the missile launches slowed down. Cyber operations don’t need launchers. And the agency best positioned to defend against them has spent recent months being cut rather than reinforced.
The adversary is not pausing for the lawsuit.
What Actually Closes It
The frameworks, the agreements, the systems – those matter. But they come second. What comes first is one person on each side deciding the mission matters more than waiting for permission. I’ve watched that moment happen enough times to know it when I see it. And once someone steps into the gap, the structure has something to hold.
Hegseth’s team told Anthropic they were nearly aligned right up until they weren’t. Amodei kept his people in the fight after they were blacklisted. Two sides that got close enough to touch and still couldn’t hold it.
The tragedy isn’t that the gap exists. It’s that both sides knew how to close it.
After
I’ve watched this play out for three decades. The gap opens. The crisis hits. People look back and say the same thing.
We should have been working together sooner.
Iran’s nuclear capacity survives this war. The hard problems don’t end when the shooting does. They wait.
The gap is a choice. Closing it means stepping into it before anyone tells you to.
I’ve seen what “after” looks like.
We can’t afford it again.
This is Part 3 of “Closing the Gap: Counterterrorism, AI, and a Crisis of Trust.” Parts One and Two are available on this page.
Dexter Ingram is a former Counterterrorism Director who led the 89-nation Global Coalition to Defeat ISIS and directed U.S. efforts to counter violent extremism. He is the founder of IN Network: The National Security Academy. His new book, National Security Careers: The Ultimate Guide to Breaking In – Real Stories, Career Paths, and Insider Lessons, is out now. He is also the author of The Spy Archive, the #1 New Release in Military Intelligence and Spies History (July/August 2025).