Fire, Ready, Aim. Who Can Tell Friend From Foe?
The Pentagon spent last Friday trying to bully an AI company that had dared to require human oversight of surveillance and autonomous weapons. On Sunday, automated air defense proved exactly why that company was right and oversight matters.
On Friday, February 27, 2026, Defense Secretary Pete Hegseth designated Anthropic — the AI company behind Claude — a “supply chain risk,” a classification meant for foreign adversaries. The company’s offense was simply refusing Pete’s unwanted advances: he demanded they strip bare the safeguards requiring human responsibility for lethal autonomous weapons systems and prohibiting mass domestic surveillance. Hegseth, himself infamous for arrogance and betrayal, called the refusal “a master class in arrogance and betrayal.”
Emil Michael, the disgraced ex-Uber executive fired for acting as if he were God and could control women, now Pete’s Undersecretary for Research and Engineering, called Anthropic CEO Dario Amodei “a liar” with a “God-complex” who “wants nothing more than to personally control the US Military.”
The demand was that the AI be available for “all lawful purposes,” with no exceptions. Anthropic said no because, Amodei argued, the technology is not reliable enough for autonomous lethal decisions under the law.
We cannot in good conscience accede to their request.
Forty-eight hours later, Pete was putting the conscience of the machine into combat mode.
On Saturday night, Trump started bombing girls’ schools in Iran, killing hundreds of children. Before the sun came up on Sunday, Kuwaiti air defenses — automated systems designed to identify and engage aerial threats — shot down three U.S. Air Force F-15E Strike Eagles over friendly territory.
Three F-15Es shot down in one night. Unbelievable. America abruptly went from Top Gun to bottom.
Nearly $200 million in irreplaceable airframes burning on the ground.
Six aircrew ejecting into the Kuwaiti desert.
Hegseth’s automation of war could not tell friend from foe. It had no conscience at all.
The Record That Was
To understand what was lost Sunday morning, you have to understand what the F-15 represented.
The Eagle family held a combat record of 104 aerial victories and zero losses — the most dominant air-to-air record in the history of military aviation.
ZERO losses.
No F-15 had ever been confirmed shot down in air combat. Not by the Syrians. Not by the Iraqis. Not in fifty years of operational service across a half-dozen air forces.
FIFTY years.
During the entire 1991 Gulf War — over 100,000 coalition sorties — the U.S. lost two F-15Es to enemy ground fire. Two jets, across the whole campaign. The F-15C variant accounted for 34 of the 39 American air-to-air kills.
And then along came “no safety” Hegseth.

Operation Epic Fury lost more Strike Eagles to friendly fire before breakfast on day two than the enemy managed in the entirety of Desert Storm.
Hegseth blew it. And these aircraft are never coming back. The F-15E production line closed years ago.
Only the newer F-15EX variant is still in production, at $90–94 million per copy, and it is a different airplane.
More to the point, Congress had just allocated $127.46 million specifically to prevent the retirement of the remaining Strike Eagle fleet. It was assumed nobody would be so stupid as to take the safety off the automation and shoot down their own F-15Es. That appropriation now reads as a cruel joke about Hegseth’s waste, a rounding error against the smoking craters and debris field in Al Jahra.
But this isn’t just about metal. Each F-15E carries a two-person crew — a pilot and a weapons systems officer — who together represent years of specialized training that costs millions more and takes the better part of a decade to produce. The Strike Eagle community is small and aging. The Air Force doesn’t have spare crews sitting on a bench. Losing three jets doesn’t just reduce the fleet by three airframes; it grounds six highly trained operators, disrupts squadron rotations, and degrades the specific deep-strike capability that Epic Fury was designed to employ.
You can’t surge what you don’t have. And the campaign, CENTCOM says, could last weeks. That’s a massive loss, an even bigger disaster than Hegseth’s Red Sea fiasco that handed the Houthis a huge win.
We’ve Seen This Before
The Kuwaiti shootdown is not unprecedented. It is, in fact, a precise recurrence of a failure mode that has been documented, studied, and warned about for over two decades — and then ignored.
This is where it really gets interesting, because not only did Hegseth turn off the “woke” safety systems, he erased the “woke” reasons to keep them on.
During the 2003 invasion of Iraq, U.S. Army Patriot missile batteries — operating in automated or semi-automated modes — caused three separate friendly fire incidents in eleven days. On March 23, a Patriot shot down a British RAF Tornado GR4 returning to base in Kuwait, killing both crew members. The Tornado’s IFF system had suffered a power failure the crew didn’t know about. The Patriot battery, newly arrived and operating without full communications, had about sixty seconds to decide whether an incoming radar return was an Iraqi missile or a friendly jet. The automation decided it was a missile. Two men died.
On April 2, another Patriot battery shot down a U.S. Navy F/A-18C over southern Iraq, killing Lieutenant Nathan White. The system had classified the Hornet as an enemy rocket. White detected the incoming Patriot and tried to evade. He was thirty years old.
In between, an Air Force F-16 pilot — locked up by a Patriot’s fire-control radar — fired a HARM anti-radiation missile that destroyed the Patriot’s sensor dish. An American pilot shooting at an American missile battery in self-defense. Navy pilots flying combat missions over Iraq later said they were more afraid of the Patriots than they were of anything Saddam had.
It took three dead allied airmen before the Army switched the Patriot from automatic to manual engagement.
Stick that in the Pentagon’s pipe and smoke it the next time they try to put pressure on Anthropic.
The Defense Science Board studied these incidents and produced a landmark report. The single most damning finding: in the first thirty days of the Iraq invasion, Patriot batteries faced nine ballistic missile attacks against 41,000 friendly aircraft sorties.
Read that again.
Nine threats. Forty-one thousand friendlies.
A friendly-to-enemy ratio of more than 4,000 to 1. The system had been designed to fight incoming missiles. Instead, it was swimming in friendly aircraft it couldn’t reliably distinguish from threats. At that ratio, even a 99.9% accuracy rate produces dozens of misidentifications. The system didn’t need to be stupid to kill allies. It just needed to be automated, fast, and slightly wrong.
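The arithmetic is worth spelling out. Here is a minimal back-of-the-envelope sketch in Python, using the DSB’s numbers and the assumed 99.9% accuracy figure above — illustrative only, not a model of any actual fire-control system:

```python
# Base-rate sketch using the DSB figures cited above:
# ~41,000 friendly sorties vs. 9 real ballistic missile threats in 30 days.
friendly_sorties = 41_000
real_threats = 9

# Assumed classifier performance: right 99.9% of the time,
# i.e. a 0.1% chance of flagging a friendly aircraft as a threat.
false_positive_rate = 0.001

expected_false_alarms = friendly_sorties * false_positive_rate
ratio = friendly_sorties / real_threats

print(f"Friendly-to-threat ratio: {ratio:,.0f} to 1")          # ~4,556 to 1
print(f"Friendlies flagged as threats: {expected_false_alarms:.0f}")  # ~41
```

Roughly forty-one phantom threats against nine real ones. Even a nearly perfect classifier generates more false targets than actual targets, and every one of them is an allied aircraft.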
The DSB warned that future conflicts “will likely be more stressing.” If the Patriot couldn’t perform in an essentially one-sided air campaign without killing allies, the problem would only get worse.
Brookings revisited this analysis in 2022, identifying the core pathology: automation bias. When an automated system is built on the premise that it outperforms a human operator, and the human is left merely to “monitor” it, the human tends to trust the machine even when the machine is wrong.
The operators at the Patriot batteries in 2003 were not in a position to question what their sensors were telling them.
The system said “threat.” The system fired.
And in a contested electromagnetic environment — where Iran is actively jamming communications and radar — IFF doesn’t fail cleanly. The system doesn’t distinguish between “confirmed enemy” and “unconfirmed because the handshake was jammed.” It just has a timer. When the timer expires without a valid response, the classification defaults to threat. The machine doesn’t know the difference between “enemy” and “unable to verify.” It only knows the clock ran out.
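To make that failure mode concrete, here is a minimal sketch in Python of the default-to-threat logic described above. Everything in it — the function, the timeout, the reply object — is hypothetical and illustrative; it is not drawn from any fielded IFF system:

```python
import time
from dataclasses import dataclass
from enum import Enum, auto

class Classification(Enum):
    FRIENDLY = auto()
    THREAT = auto()

@dataclass
class IFFReply:
    valid: bool  # True only if the cryptographic handshake checks out

# Hypothetical values; real interrogators are classified and far more complex.
IFF_TIMEOUT_SECONDS = 2.0
REINTERROGATE_INTERVAL = 0.1

def classify_contact(interrogate) -> Classification:
    """Default-to-threat pattern: 'no valid reply in time' and 'hostile'
    collapse into the same answer."""
    deadline = time.monotonic() + IFF_TIMEOUT_SECONDS
    while time.monotonic() < deadline:
        reply = interrogate()  # returns None if jammed, unpowered, or hostile
        if reply is not None and reply.valid:
            return Classification.FRIENDLY
        time.sleep(REINTERROGATE_INTERVAL)
    # The clock ran out. A jammed handshake, a dead transponder (the 2003
    # Tornado), and an actual enemy all look identical from here.
    return Classification.THREAT

# A jammed channel where every interrogation goes unanswered:
print(classify_contact(lambda: None))  # Classification.THREAT
```

The point is not the code; it is the shape of the decision. Any system built this way treats silence as hostility, and in a jammed sky, friendly aircraft go silent all the time.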
In December 2024 — little more than a year ago — the pattern repeated. The Ticonderoga-class guided missile cruiser USS Gettysburg, operating with the carrier Harry S. Truman in the Red Sea, fired a Standard Missile-2 at what it identified as a Houthi anti-ship cruise missile. It was a U.S. Navy F/A-18F Super Hornet. Both aviators ejected safely. The system had classified a friendly fighter as an enemy weapon.
And now, March 1, 2026.
The most complex battlespace the Gulf has ever seen. Iranian ballistic missiles, cruise missiles, drones, and aircraft all in the sky simultaneously. Kuwaiti air defense operators confronting an air picture of overwhelming density and ambiguity. And the system — whatever system it was — decided three American F-15Es were threats.
Four incidents across twenty-three years.
The same failure mode. The same result. Escalating scale. And the same “woke” lesson the institution now forbids itself from learning.
The Friday-Sunday Problem
The timeline lays it bare, because the Pentagon’s argument against Anthropic was not subtle.
Requiring human oversight of autonomous lethal systems was declared, without reason or logic, “fundamentally incompatible with American principles.” That is like declaring brakes incompatible with sports cars. Imagine calling it arrogant for an AI company to say its own technology is not reliable enough for fully autonomous weapons. That is the literal opposite of arrogance. Imagine claiming that putting a human in the loop amounts to an ideological veto over military operations. Again, the literal opposite: human oversight is exactly what guarantees there is no ideological veto.
Anthropic’s argument was crystal clear: technology makes big mistakes. Automated classification systems produce false positives. In lethal contexts, false positives kill people. Therefore, humans must retain responsibility for use of force.
I have presented and written extensively on this topic for over a decade, with hands-on experience breaking AI. Anthropic is absolutely correct.
The Pentagon could have engaged on the merits of AI safety. Instead, Hegseth started throwing political axes and designated Anthropic a supply chain risk. Trump told them to “get their act together” or face “major civil and criminal consequences.”
OpenAI signed a deal within hours — one that multiple analysts immediately flagged as weaker on the exact safeguards Anthropic had been blacklisted for defending. Techdirt’s Mike Masnick noted the agreement’s compliance framework relies on Executive Order 12333, the same legal architecture the NSA uses to collect American communications by tapping lines outside U.S. borders.
And then the war started, and an automated air defense system that could not reliably tell friend from foe destroyed three American jets and nearly killed six American aircrew.
The argument for human oversight is not an abstraction. It is not an ideology. It is not a “God-complex.” It is as old as AI itself. It is the burning wreckage of three F-15E Strike Eagles on the floor of the Kuwaiti desert, put there by a machine that made a predictable error and had no human with the authority, the information, or the time to stop it.
The Accountability Gap
The Nuremberg tribunals established that “I was following orders” is not a defense. The Tokyo tribunals established that command responsibility extends to those who should have known what their subordinates were doing. The doctrine of command responsibility, refined through the Yugoslav and Rwandan tribunals, holds commanders accountable not just for what they order, but for what they fail to prevent when they had the ability and the duty to do so.
Autonomous and semi-autonomous weapons systems blow a huge hole in the human-based framework. When a machine makes a lethal decision and no human authorized that specific act, the chain of accountability doesn’t disappear — it diffuses. And diffusion, in practice, is being interpreted as impunity.
How many extrajudicial killings of innocent people by automation in the Middle East have you even heard about, let alone seen attributed to Peter Thiel’s Palantir systems?
There is in fact one person in the chain who cannot escape scrutiny: the commander who declared safety “woke” and placed the system in automatic mode. The person who decided that the machine would classify and engage targets without meaningful human authorization for each individual act of lethal force. That decision — the decision to delegate the kill chain to a sensor and an algorithm — is the act that Nuremberg and Tokyo would recognize.
Not the individual shot. The policy of removing human judgment from the loop.
The commander who flips the switch to “Auto” has made a command decision with lethal consequences, and when the anti-woke machine gets it wrong, the responsibility flows upward to that moment.
And the command climate matters. When the Secretary of Defense publicly designates a company a supply chain risk for insisting on human oversight — when the Pentagon’s own undersecretary calls the CEO of that company “a liar” for maintaining a safety position — that is not merely a contract dispute. It is a signal that propagates through every level of the chain of command: speed and slop over verification, automation over hesitation, the machine’s opaque judgment over the operator’s transparent doubt.
Under the doctrine of command responsibility, the question is not just who set the system to automatic. It is who created the conditions under which setting it to automatic seemed like the right call.
This is the investigation CENTCOM just opened. Someone — or some system — classified three F-15Es as threats and authorized engagement.
The questions are simple: Was it a Patriot battery? SHORAD with IR-guided missiles? Was the system in automatic mode? Who authorized that posture? Were Kuwaiti operators briefed on coalition flight plans? Did IFF protocols fail, and if so, why? Was there a human with the authority and the information to override the classification, and if not, who made the decision that there wouldn’t be?
If the investigation finds that the system acted within its programmed parameters — that it correctly followed its rules and still killed three friendly aircraft — then the parameters themselves are the indictment. And the person who set them bears command responsibility for every shot the machine fired.
This is not hypothetical jurisprudence. This is the test case. And it arrives at the precise moment the Pentagon has declared that requiring human oversight of exactly these systems is a radical, woke, supply-chain-threatening position unworthy of engagement.
The Cost of Not Listening
The dollar cost is staggering: roughly $200 million in airframes, plus munitions, plus EPAWSS electronic warfare upgrades if installed, plus recovery operations.
The strategic cost is far worse. The F-15E fleet is already too small. Every airframe matters. Three fewer Strike Eagles means degraded deep-strike capacity for a campaign the Pentagon itself says could run for weeks.
But the real cost is epistemic. The United States government just spent a week declaring, for political reasons, that requiring human oversight of autonomous lethal systems is a supply-chain-threatening position. And then autonomous systems, operating without adequate human oversight, destroyed American military assets worth as much as the entire Anthropic contract that was just torn up.
The Anthropic contract was worth $200 million.
The three F-15Es were worth roughly the same.
The Pentagon burned one to make a political point, and the desert burned the other because Hegseth was dead wrong.
Onward and Upward
The questions asked will not vary much from those asked after every fratricide incident since 1991. The answers are always some combination of procedural failure, technical malfunction, and automation that operated faster than human judgment could intervene. The corrective action is always some version of “more training, better procedures, improved IFF.” And then it happens again.
The structural question is the one Anthropic raised and the Pentagon refused to engage with: At what point do we acknowledge that automated systems making lethal decisions without meaningful human oversight is a design flaw, not a feature? At what point do we stop treating human judgment as a bottleneck to be engineered out and start treating it as the last line of defense against exactly this kind of catastrophe?
Six American aircrew are alive today because the F-15E has ejection seats and manual escape procedures, not because the automation worked. The system failed. Completely. The humans survived despite the system, not because of it.
The Pentagon can designate Anthropic whatever it wants. The Kuwaiti desert doesn’t care about supply chain classifications. It just knows that three machines fell out of the sky because another machine couldn’t tell them apart from the enemy, and no human stopped it in time.
That’s not ideology. That’s wreckage.