Anthropic CEO Mops the Floor With CBS Government Shill

CBS News senior business and technology correspondent Jo Ling Kent landed an exclusive interview with Anthropic CEO Dario Amodei on Friday. It was just hours after Defense Secretary Pete Hegseth designated the American company a “supply chain risk” — a classification previously reserved for Russian cybersecurity firms and Chinese chip suppliers — because Anthropic refused to remove two ethical restrictions from its military AI contract.

The restrictions? No domestic mass surveillance, and no fully autonomous weapons without human oversight. But the facts didn’t matter because CBS totally lost the thread.

Kent had a journalist’s dream: a CEO under unprecedented government pressure, willing to talk on camera, while the government that threatened him was simultaneously launching airstrikes on Iran. The story was the coercion itself. The government was retaliating against an American company for maintaining contract conditions, treating a domestic national security asset as a foreign adversary.

Kent did not cover the story. She instead went into autocratic shill mode, asking repeatedly why won’t this tech company just do what it’s told by Hegseth?

Kent Question Breakdown

Here is how Kent spent her 27 minutes with Amodei, and what each question reveals about whose side of the story she arrived to tell.

Kent: “Why won’t you release Anthropic’s AI without restrictions to the U.S. government?”

Actual function: Opening question adopts the Pentagon’s frame as the neutral baseline. Unrestricted access is treated as the default; restrictions require justification. A journalist covering government retaliation against a private company might have opened by asking what the government did and why.

Kent: “Why do you think that it is better for Anthropic, a private company, to have more say in how AI is used in the military than the Pentagon itself?”

Actual function: Presupposes that vendors should have no conditions on their own products. No reporter asks Lockheed Martin why it thinks it knows better than the Pentagon when it negotiates contract terms. The question treats normal commercial rights as arrogance when exercised on ethical grounds.

Kent: “In the name of fundamental principles, why should Americans trust you, the CEO of a private company, to make these decisions instead of the federal government?”

Actual function: Same question repackaged. Note the framing: “instead of the federal government” — as if Anthropic seized control rather than declined to sell unrestricted access. A company choosing not to sell something is not making decisions “instead of” its customer. It is exercising the most basic right of any business.

Kent: “Do you think that Anthropic knows better than the Pentagon here?”

Actual function: Third iteration of the same question. At this point it is not journalism, it is a pressure campaign. The repetition functions identically to an interrogation technique: keep restating the government’s position as a question until the subject concedes. But beyond the technique, the substance is absurd. Of course Anthropic knows its own product better than the Pentagon. That is the entire basis of defense procurement. The Pentagon contracts with vendors precisely because it does not have the expertise to build these systems itself. Every defense contractor in existence knows its product better than its customer — that is why the customer is buying instead of building. Boeing knows the F-15’s flight envelope better than the Air Force generals who fly it. Lockheed knows the F-35’s sensor fusion architecture better than the pilots who depend on it. Raytheon knows its missile guidance systems better than the combatant commanders who order strikes with them. The Pentagon’s own acquisition framework is built on the assumption that vendors possess specialized knowledge the government lacks. Asking “do you think you know better than the Pentagon” about your own product is like asking a surgeon if they think they know better than the patient about where to cut. The answer is yes. That is the point.

Kent: “Boeing builds aircraft for the U.S. military. Boeing doesn’t tell the U.S. military what to do with that aircraft. How is this any different?”

Actual function: The kill shot — and the one that destroys her credibility. This is factually wrong in every particular. Boeing absolutely tells the military what to do with its aircraft, through thousands of pages of mandatory Technical Orders, flight envelope restrictions, maintenance directives, Service Bulletins, warranty conditions, end-user agreements, technology transfer restrictions, and field service representatives physically present at military installations monitoring compliance. Boeing has fought to prevent unauthorized modifications to its platforms. Boeing maintains proprietary control over avionics source code. The military cannot operate Boeing aircraft without Boeing’s ongoing cooperation and cannot modify them without Boeing’s consent. The premise of the question is false, and it is not a specialist-knowledge kind of false — it is a common-sense kind of false. You cannot sell an $80 million weapons platform and disclaim all responsibility for its use. Everyone who has ever bought a car with a warranty knows this.

Kent: “Some of our greatest adversaries have technology that is either quickly catching up to us or will eventually do so… why stay in this position?”

Actual function: The arms-race argument for unconditional compliance. If adversaries are catching up, shouldn’t we abandon all ethical constraints? This is the logic that justified every civil liberties violation in the War on Terror, and Kent deploys it uncritically as if it is a novel insight rather than a discredited framework.

Kent: “Do you think Anthropic can survive this as a business?”

Actual function: The quiet threat dressed as concern. Translation: wouldn’t it be easier to just comply? This is the question the government wants the interviewer to ask, because it reframes principled resistance as a business risk rather than a constitutional confrontation.

Not one question in 27 minutes asked how Hegseth could justify the designation.

Not one asked why tools designed for Kaspersky and Chinese chip suppliers were being aimed at an American company.

Not one asked what precedent this sets for any company that tries to maintain conditions on government contracts.

Not one asked whether the Pentagon’s demand for “all lawful purposes” might include uses that are legal but shouldn’t be.

Every question was a variation of why won’t you obey the supreme leader?

Batshit Boeing Question

Her Boeing analogy deserves particular attention. Seriously, WTAF came out of this woman’s mouth. 14:50 is the moment the interview moves from bad framing to false testimony.

Kent did not offer Boeing as an offhand comparison. She dropped it as if it were her strongest argument, a rhetorical capstone after three rounds of “who are you to defy the Pentagon.” She presented it as self-evident.

Her argument: Boeing builds planes, Boeing doesn’t dictate how they’re used, so Anthropic shouldn’t dictate how its AI is used. The truth is the exact opposite.

Every goddamned element of that is wrong. I can’t believe I have to say this. What the hell is CBS News? Military aircraft come with Technical Orders: literally thousands of pages of mandatory operational documentation that specify exactly what the military can and cannot do with that aircraft. Flight envelopes. Maintenance intervals. Structural load limits. Weapons integration parameters. Buyers demand to be told exactly what to do and what not to do. Violating those TOs can void warranties, void sustainment contracts, and ground entire fleets. Boeing issues Service Bulletins that effectively mandate modifications. Boeing’s field service representatives are embedded at military installations. Boeing controls proprietary source code in avionics and weapons systems. Boeing negotiates end-user agreements restricting technology transfer and third-party access. Boeing has actively fought to prevent unauthorized reverse-engineering of its platforms.

Boeing doesn’t just “tell the military what to do with that aircraft.” Boeing contractually dictates the boundaries of how its aircraft can be operated, maintained, modified, and transferred.

THAT is the normal vendor-customer relationship in defense procurement. Every defense contractor does this. It is the baseline, the norm.

Kent’s claim is deranged. It’s unmoored. It is a factual assertion that is demonstrably, horribly false. And it is not the kind of false that requires expertise to catch — it is the kind of false that falls apart the moment you think about it for five seconds.

You sell someone a complex machine, you include instructions, restrictions, and conditions. You have a page of DO NOT and you paint the machine with DO NOT symbols. That is how commerce works. That is how it has always worked. The idea that Boeing delivers an F-15E and then shrugs about what happens next is a fantasy, and not a sophisticated one. Getting this kind of thing so wrong is how three F-15Es end up shot down in one night.

I can’t tell whether Kent didn’t know this or didn’t care. Neither answer is acceptable for someone in her position.

If she didn’t know, she failed to do basic preparation for the most consequential tech-policy interview of the year. If she did know and said it anyway because it served her rhetorical purpose, she sacrificed accuracy to pressure her subject into conceding the government’s frame.

Accountability

What makes this worse, so significantly worse, is Jo Ling Kent’s own background.

Kent holds two master’s degrees in international affairs. She was a Fulbright scholar studying women’s access to legal aid in China. In 2011, while working as a field producer during the Chinese pro-democracy protests in Beijing, she and a colleague were detained by police in Wangfujing for half an hour. In 2020, she was hit on-air by a Seattle police flash-bang grenade while covering George Floyd protests.

She has personally experienced state coercion against journalists and citizens. She has the academic training to recognize when governments weaponize administrative classifications to force political compliance. She has the theoretical vocabulary for exactly this kind of analysis, everything needed to ask the CEO real questions.

The same structural pattern appeared right in front of her: a government using an extraordinary administrative designation to punish a private entity for refusing unconditional compliance. And she adopted the government’s frame and burned 27 minutes asking the target of that coercion why he wouldn’t give in.

She apparently can identify coercion when it’s Chinese police detaining her in Beijing. She apparently can NOT identify it when it’s Hegseth designating an American company a supply chain risk for maintaining two safety conditions that cover 1–2% of use cases.

Same structure. Different flag. Total blindness.

Call It Out

This was not an interview.

It was state propaganda dressed in the aesthetics of journalism.

The “CBS News Exclusive” label, the professional lighting, the tough-sounding questions, all performed the form of accountability while executing its opposite. Every question Kent asked could have been drafted by Hegseth’s communications staff. The government never needed to be in the room. It had a correspondent doing its work for free.

Amodei, to his credit, kept redirecting to the actual facts: the restrictions cover 1-2% of use cases, no one on the ground has run into them, the supply chain designation is normally reserved for foreign adversaries, and the government’s own proposed compromise language was designed to concede nothing.

Kent pathetically kept redirecting back to obedience: why don’t you obey?

The interview aired as the United States was launching strikes on Iran, which is exactly the kind of moment when the question of whether AI should power autonomous weapons and mass surveillance systems isn’t theoretical.

Kent had the most important technology-policy story of the decade sitting across from her. She used it to ask why a company wouldn’t surrender its contract rights to a government that was retaliating against it for having contract rights.

The Best Ad in Silicon Valley History

The irony is that Kent’s failure as a journalist produced the most effective corporate branding exercise the technology industry has ever seen.

Apple’s 1984 Super Bowl ad, where an actress throws a hammer at a giant screen representing obedient conformity, is considered the gold standard of tech marketing. It was brilliant. It was also fiction. Nobody at Apple risked anything when it aired. It was a manufactured metaphor, shot by Ridley Scott, approved by a boardroom.

Amodei just did the real version.

On camera, under actual government threat, with actual revenue on the line, hours before actual bombs started falling on Iran, he said no. And Kent helpfully played the role of the conformity screen by spending 27 minutes asking him why he wouldn’t just submit.

Every time she pushed the government’s frame, Amodei calmly restated the principle. She made him look better with every question she asked. You cannot buy that contrast. You cannot manufacture it. You can only earn it by actually being willing to take the hit.

The Pentagon designating Anthropic a “supply chain risk” is the most valuable brand positioning any AI company has ever received. Every enterprise customer, every developer, every privacy-conscious organization just watched a company absorb a blow from the most powerful military on earth rather than remove two restrictions on mass surveillance and autonomous weapons.

Because of this, and despite its many mistakes, Anthropic just became the most trusted brand in America.

It is not a trust-us-we’re-ethical blog post. It is proof of concept on values, stress-tested in public, with the receipts on live television.

And the kicker that nobody in the industry can avoid thinking about: if Anthropic is the company that got blacklisted for refusing to drop ethical guardrails, what does that make the companies that didn’t get blacklisted?

Nobody has to say it. The silence says it for them.

The full transcript is available at CBS News.

Read it and then watch this.

Count the questions that interrogate the government’s behavior. The number is zero. Then count the questions Kent accidentally answered about her own profession. That number is… high.

CO Tesla Kills One in “Veered” Crash Into Pole

The tragic news from Colorado reads like almost every other Tesla Autopilot “veered” crash I’ve written about here for years.

The robot goes straight instead of following the curve, crosses the lane, hits a guardrail, goes over the embankment, and ends up in a tree and/or pole, where it bursts into fire. The high-energy impact means the poorly designed batteries likely ruptured from the strike or even the roll.

We know from various news reports that a 23-year-old was alone in the Tesla at 3 AM on a road that’s been perfectly straight for miles. He’s asleep, drowsy or distracted. When the road curves right around the reservoir, either the system fails to track (highly plausible at 3 AM with limited lane markings on a dark rural section) or the driver makes a slight corrective input that triggers Autosteer disengagement.

Notably, NHTSA has reported in EA22002 that Autopilot presented resistance when drivers attempted manual steering inputs, and attempts to adjust steering resulted in Autosteer deactivating, which is a design that discourages driver involvement.

To put it another way, the map and timing provide almost exact ingredients for what regulators have documented as a Tesla failure pattern. In an analysis of 467 crashes, they found 111 were roadway departures where Autosteer was inadvertently disengaged by driver inputs. The Tesla almost immediately departs its lane after losing lane centering, often resulting in a single-vehicle roadway departure crash, with almost all incidents occurring less than 5 seconds after Autopilot was disengaged.

A 3 AM driver recovering within 5 seconds, when his Tesla didn’t even slow down for the curve, let alone see the guardrail? Nope.

It makes sense when you look at the map. Baseline Road runs dead straight east-west for miles following the 40th parallel. It’s one of the most geometrically predictable roads in Boulder County. Then right at the reservoir, the road takes a distinct curve to route around the water.

Source: Google Maps

CSP says speed was a factor. What they haven’t said is whether Autopilot was engaged. Will anyone investigate properly?

The agency that’s supposed to pull the telemetry and determine whether Autopilot was active is being hollowed out by the CEO of the company that it regulates. It likely will be buried incorrectly as “speed” and never properly studied or counted among ADAS failures.

For what it’s worth, the story writes itself. A road designed to feel fast and straight that suddenly throws a curve at you next to a reservoir at 3 AM is engineered for exactly this failure. A driver assistance system built for straights is going to choke on that curve in the dark, leaving its driver dead instead of assisted.

Hegseth Bans AI Safety, Three F-15E Shot Down by Friendly Fire

Fire, Ready, Aim. Who Can Tell Friend From Foe?

The Pentagon spent last Friday trying to bully an AI company that had dared to require human oversight of surveillance and autonomous weapons. On Sunday, automated air defense proved exactly why that company was right and oversight matters.

On Friday, February 27, 2026, Defense Secretary Pete Hegseth designated Anthropic — the AI company behind Claude — a “supply chain risk,” a classification meant for foreign adversaries. The company’s offense was simply refusing Pete’s unwanted advances. He demanded they strip naked the safeguards requiring human responsibility for lethal autonomous weapons systems, and prohibiting mass domestic surveillance. Hegseth, infamous for arrogance and betrayal, called it “a master class in arrogance and betrayal.”

Emil Michael, the disgraced ex-Uber executive who was fired for acting like he was God and could control women (appointed Pete’s Undersecretary for Research and Engineering), called Anthropic CEO Dario Amodei “a liar” with a “God-complex” who “wants nothing more than to personally control the US Military.”

The demand was that the AI be usable for “all lawful purposes,” with no exceptions. Anthropic said no because, Amodei argued, the technology is not reliable enough for autonomous lethal decisions under the law.

We cannot in good conscience accede to their request.

Forty-eight hours later, Pete was putting the conscience of the machine into combat mode.

On Saturday night, Trump started bombing girls’ schools in Iran, killing hundreds of children. Before the sun came up on Sunday, Kuwaiti air defenses — automated systems designed to identify and engage aerial threats — shot down three U.S. Air Force F-15E Strike Eagles over friendly territory.

Three F-15E shot down in one night. Unbelievable. America abruptly went from Top Gun to bottom.

Nearly $200 million in irreplaceable airframes burning on the ground.

Six aircrew ejecting into the Kuwaiti desert.

The Hegseth automation of war could not tell friend from foe. It had no conscience at all.

The Record That Was

To understand what was lost Sunday morning, you have to understand what the F-15 represented.

The Eagle family held a combat record of 104 aerial victories and zero losses — the most dominant air-to-air record in the history of military aviation.

ZERO losses.

No F-15 had ever been confirmed shot down in air combat. Not by the Syrians. Not by the Iraqis. Not in fifty years of operational service across a half-dozen air forces.

FIFTY years.

During the entire 1991 Gulf War — over 100,000 coalition sorties — the U.S. lost two F-15Es to enemy ground fire. Two jets, across the whole campaign. The F-15C variant accounted for 34 of the 39 American air-to-air kills.

And then along came “no safety” Hegseth.

Operation Epic Fury lost more Strike Eagles to a friend before breakfast on day two than an enemy had managed in the entirety of Desert Storm.

Hegseth blew it. And these aircraft are never coming back. The F-15E production line closed years ago.

Only the newer F-15EX variant is still in production, at $90–94 million per copy, and it is a different airplane.

More to the point, Congress had just allocated $127.46 million specifically to prevent the retirement of the remaining Strike Eagle fleet. It was assumed nobody would be so stupid as to remove the safety from automation and shoot down their own F-15Es. That appropriation is now a cruel joke, a rounding error against the smoking craters and debris field in Al Jahra.

But this isn’t just about metal. Each F-15E carries a two-person crew — a pilot and a weapons systems officer — who together represent years of specialized training that costs millions more and takes the better part of a decade to produce. The Strike Eagle community is small and aging. The Air Force doesn’t have spare crews sitting on a bench. Losing three jets doesn’t just reduce the fleet by three airframes; it grounds six highly trained operators, disrupts squadron rotations, and degrades the specific deep-strike capability that Epic Fury was designed to employ.

You can’t surge what you don’t have. And the campaign, CENTCOM says, could last weeks. That’s a massive loss. Even bigger disaster than Hegseth’s Red Sea fiasco that handed the Houthis a huge win.

We’ve Seen This Before

The Kuwaiti shootdown is not unprecedented. It is, in fact, a precise recurrence of a failure mode that has been documented, studied, and warned about for over two decades — and then ignored.

This is where it really gets interesting, because not only did Hegseth turn off the “woke” safety systems, he erased the “woke” reasons to keep them on.

During the 2003 invasion of Iraq, U.S. Army Patriot missile batteries — operating in automated or semi-automated modes — caused three separate friendly fire incidents in eleven days. On March 23, a Patriot shot down a British RAF Tornado GR4 returning to base in Kuwait, killing both crew members. The Tornado’s IFF system had suffered a power failure the crew didn’t know about. The Patriot battery, newly arrived and operating without full communications, had about sixty seconds to decide whether an incoming radar return was an Iraqi missile or a friendly jet. The automation decided it was a missile. Two men died.

On April 2, another Patriot battery shot down a U.S. Navy F/A-18C over southern Iraq, killing Lieutenant Nathan White. The system had classified the Hornet as an enemy rocket. White detected the incoming Patriot and tried to evade. He was thirty years old.

In between, an Air Force F-16 pilot — locked up by a Patriot’s fire-control radar — fired a HARM anti-radiation missile that destroyed the Patriot’s sensor dish. An American pilot shooting at an American missile battery in self-defense. Navy pilots flying combat missions over Iraq later said they were more afraid of the Patriots than they were of anything Saddam had.

It took three dead allied airmen before the Army switched the Patriot from automatic to manual engagement.

Stick that in the Pentagon pipe and smoke it when they next attempt to put pressure on Anthropic.

The Defense Science Board studied these incidents and produced a landmark report. The single most damning finding: in the first thirty days of the Iraq invasion, Patriot batteries faced nine ballistic missile attacks compared to 41,000 friendly aircraft sorties.

Read that again.

Nine threats. Forty-one thousand friendlies.

A friendly-to-enemy ratio of more than 4,500 to 1. The system had been designed to fight incoming missiles. Instead, it was swimming in friendly aircraft it couldn’t reliably distinguish from threats. At that ratio, even a 99.9% accuracy rate produces dozens of misidentifications. The system didn’t need to be stupid to kill allies. It just needed to be automated, fast, and slightly wrong.
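The base-rate arithmetic above is worth checking by hand. A minimal back-of-envelope calculation, using only the two numbers the DSB reported and an assumed (hypothetical) 99.9% classifier accuracy:

```python
# Back-of-envelope check of the DSB numbers: 41,000 friendly sorties vs.
# 9 real missile threats, with a hypothetical classifier that misidentifies
# a friendly as a threat only 0.1% of the time.
friendly_sorties = 41_000
real_threats = 9
false_positive_rate = 0.001  # i.e., 99.9% accuracy on friendlies (assumed)

ratio = friendly_sorties / real_threats          # friendlies per genuine threat
false_alarms = friendly_sorties * false_positive_rate  # friendlies flagged as threats

print(f"friendly-to-enemy ratio: {ratio:.0f} to 1")      # 4556 to 1
print(f"expected misidentifications: {false_alarms:.0f}")  # 41
```

Even a nearly perfect system, applied to that many friendlies, is expected to flag more false targets than there were real ones.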

The DSB warned that future conflicts “will likely be more stressing.” If the Patriot couldn’t perform in an essentially one-sided air campaign without killing allies, the problem would only get worse.

Brookings revisited this analysis in 2022, identifying the core pathology: automation bias. When an automated system has been built because it presumably outperforms a human operator, but the human is left to “monitor” it, the human tends to trust the machine even when the machine is wrong.

The operators at the Patriot batteries in 2003 were not in a position to question what their sensors were telling them.

The system said “threat.” The system fired.

And in a contested electromagnetic environment — where Iran is actively jamming communications and radar — IFF doesn’t fail cleanly. The system doesn’t distinguish between “confirmed enemy” and “unconfirmed because the handshake was jammed.” It just has a timer. When the timer expires without a valid response, the classification defaults to threat. The machine doesn’t know the difference between “enemy” and “unable to verify.” It only knows the clock ran out.
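The timeout-driven default described above can be sketched in a few lines. This is an illustrative toy under the assumptions in the paragraph, not any real IFF or fire-control logic; the function name, states, and timeout value are all invented:

```python
def classify_contact(iff_reply_valid: bool, seconds_without_reply: float,
                     timeout: float = 2.0) -> str:
    """Toy model of the failure mode: only a valid IFF handshake yields
    'friendly'; a reply lost to jamming is indistinguishable from no reply."""
    if iff_reply_valid:
        return "friendly"
    # The clock, not the evidence, makes the call: once the timer expires,
    # "unable to verify" collapses into "threat".
    if seconds_without_reply >= timeout:
        return "threat"
    return "pending"

# A friendly jet whose transponder reply is jammed looks exactly like an
# enemy once the timer runs out:
print(classify_contact(iff_reply_valid=False, seconds_without_reply=2.5))  # threat
```

Note that there is no branch for "jammed" at all; that is the point. The failure is not a bug in the logic but the absence of a state the logic can represent.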

In December 2024 — just over a year ago — the pattern repeated again. The Ticonderoga-class guided missile cruiser USS Gettysburg, operating with the carrier Harry S. Truman in the Red Sea, fired a Standard Missile-2 at what it identified as a Houthi anti-ship cruise missile. It was a U.S. Navy F/A-18F Super Hornet. Two pilots ejected safely. The system had classified a friendly fighter as an enemy weapon.

And now, March 1, 2026.

The most complex battlespace the Gulf has ever seen. Iranian ballistic missiles, cruise missiles, drones, and aircraft all in the sky simultaneously. Kuwaiti air defense operators confronting an air picture of overwhelming density and ambiguity. And the system — whatever system it was — decided three American F-15Es were threats.

Four incidents across twenty-three years.

The same failure mode. The same result. Escalating scale. And the same “woke” lesson the institution is banning from being learned.

The Friday-Sunday Problem

The timeline lays it bare, because the Pentagon’s argument against Anthropic was not subtle.

Requiring human oversight of autonomous lethal systems was declared, without any reason or logic, “fundamentally incompatible with American principles.” It was like declaring brakes incompatible with sports cars. Imagine thinking it arrogant for an AI company to say its own technology is not reliable enough for fully autonomous weapons. That is the literal opposite of arrogance. Imagine thinking that putting a human in the loop amounted to an ideological veto over military operations. Again, the literal opposite: human oversight is what ensures there is no ideological veto.

Anthropic’s argument was crystal clear: technology makes big mistakes. Automated classification systems produce false positives. In lethal contexts, false positives kill people. Therefore, humans must retain responsibility for use of force.

I have presented and written extensively on this topic for over a decade, with hands-on experience breaking AI. Anthropic is absolutely correct.

The Pentagon could have engaged on the merits of AI safety. Instead, Hegseth started throwing political axes and designated Anthropic a supply chain risk. Trump told them to “get their act together” or face “major civil and criminal consequences.”

OpenAI signed a deal within hours — one that multiple analysts immediately flagged as weaker on the exact safeguards Anthropic had been blacklisted for defending. Techdirt’s Mike Masnick noted the agreement’s compliance framework relies on Executive Order 12333, the same legal architecture the NSA uses to collect American communications by tapping lines outside U.S. borders.

And then the war started, and an automated air defense system that could not reliably tell friend from foe destroyed three American jets and nearly killed six American aircrew.

The argument for human oversight is not an abstraction. It is not an ideology. It is not a “God-complex.” It is as old as AI itself. It is the burning wreckage of three F-15E Strike Eagles on the floor of the Kuwaiti desert, put there by a machine that made a predictable error and had no human with the authority, the information, or the time to stop it.

The Accountability Gap

The Nuremberg tribunals established that “I was following orders” is not a defense. The Tokyo tribunals established that command responsibility extends to those who should have known what their subordinates were doing. The doctrine of command responsibility, refined through the Yugoslav and Rwandan tribunals, holds commanders accountable not just for what they order, but for what they fail to prevent when they had the ability and the duty to do so.

Autonomous and semi-autonomous weapons systems blow a huge hole in the human-based framework. When a machine makes a lethal decision and no human authorized that specific act, the chain of accountability doesn’t disappear — it diffuses. And diffusion, in practice, is being interpreted as impunity.

How many extrajudicial killings of innocent people by automation in the Middle East have you heard about, let alone seen attributed to Peter Thiel’s Palantir systems?

There is in fact one person in the chain who cannot escape scrutiny: the commander who declared safety “woke” and placed the system in automatic mode. The person who decided that the machine would classify and engage targets without meaningful human authorization for each individual act of lethal force. That decision — the decision to delegate the kill chain to a sensor and an algorithm — is the act that Nuremberg and Tokyo would recognize.

Not the individual shot. The policy of removing human judgment from the loop.

The commander who flips the switch to “Auto” has made a command decision with lethal consequences, and when the anti-woke machine gets it wrong, the responsibility flows upward to that moment.

And the command climate matters. When the Secretary of Defense publicly designates a company a supply chain risk for insisting on human oversight — when the Pentagon’s own undersecretary calls the CEO of that company “a liar” for maintaining a safety position — that is not merely a contract dispute. It is a signal that propagates through every level of the chain of command: speed and slop over verification, automation over hesitation, the machine’s opaque judgment over the operator’s transparent doubt.

Under the doctrine of command responsibility, the question is not just who set the system to automatic. It is who created the conditions under which setting it to automatic seemed like the right call.

This is the investigation CENTCOM just opened. Someone — or some system — classified three F-15Es as threats and authorized engagement.

The questions are simple: Was it a Patriot battery? SHORAD with IR-guided missiles? Was the system in automatic mode? Who authorized that posture? Were Kuwaiti operators briefed on coalition flight plans? Did IFF protocols fail, and if so, why? Was there a human with the authority and the information to override the classification, and if not, who made the decision that there wouldn’t be?

If the investigation finds that the system acted within its programmed parameters — that it correctly followed its rules and still killed three friendly aircraft — then the parameters themselves are the indictment. And the person who set them bears command responsibility for every shot the machine fired.

This is not hypothetical jurisprudence. This is the test case. And it arrives at the precise moment the Pentagon has declared that requiring human oversight of exactly these systems is a radical, woke, supply-chain-threatening position unworthy of engagement.

The Cost of Not Listening

The dollar cost is staggering: roughly $200 million in airframes, plus munitions, plus EPAWSS electronic warfare upgrades if installed, plus recovery operations.

The strategic cost is far worse. The F-15E fleet is already too small. Every airframe matters. Three fewer Strike Eagles means degraded deep-strike capacity for a campaign the Pentagon itself says could run for weeks.

But the real cost is epistemic. The United States government just spent a week declaring, for political reasons, that human oversight of autonomous lethal systems is a supply-chain-threatening position. And then autonomous systems, operating without adequate human oversight, destroyed American military assets worth more than the entire Anthropic contract that was just torn up.

The Anthropic contract was worth $200 million.

The three F-15Es were worth roughly the same.

The Pentagon burned one to make a political point, and the desert burned the other because Hegseth was dead wrong.

Onward and Upward

The questions will not vary much from those asked after every fratricide incident since 1991. The answers are always some combination of procedural failure, technical malfunction, and automation that operated faster than human judgment could intervene. The corrective action is always some version of “more training, better procedures, improved IFF.” And then it happens again.

The structural question is the one Anthropic raised and the Pentagon refused to engage with: At what point do we acknowledge that automated systems making lethal decisions without meaningful human oversight is a design flaw, not a feature? At what point do we stop treating human judgment as a bottleneck to be engineered out and start treating it as the last line of defense against exactly this kind of catastrophe?

Six American aircrew are alive today because the F-15E has ejection seats and well-rehearsed manual ejection procedures, not because the automation worked. The system failed. Completely. The humans survived despite the system, not because of it.

The Pentagon can designate Anthropic whatever it wants. The Kuwaiti desert doesn’t care about supply chain classifications. It just knows that three machines fell out of the sky because another machine couldn’t tell them apart from the enemy, and no human stopped it in time.

That’s not ideology. That’s wreckage.

White Nationalist Now Runs Pentagon Prayer Services

Pete Hegseth didn’t invite a chaplain to the Pentagon. He invited Doug Wilson, a white Christian nationalist from Moscow, Idaho, who runs nearly 500 schools he calls “munitions factories” and describes his students as “foot soldiers.”

The “munition” and “soldier” language sounds like war for a reason. Wilson means it operationally.

Munitions factories produce weapons.

Foot soldiers deploy them.

He said this on the record as a warning.

Wilson’s theology is explicit dominion doctrine. As he told CNN:

Every society is theocratic. The only question is who’s Theo.

Democracy is a competing theology to defeat. Christ replaces Demos. The congregation replaces the electorate. The prayer meeting is a briefing.

As PRRI’s new data on Christian nationalism shows, the correlation between Trump favorability and Christian nationalist ideology is r=0.80. The ideology and the political machinery are the same thing measured two ways.

We need to talk about Wilson at the Pentagon because he is what that number looks like when it is invited inside to take control.

Structural Political Violence

Wilson’s role is authorization. The theological framework he’s installing at the Pentagon transforms every future domestic deployment from an act of state violence into an act of divine obedience. Troops exercise dominion. Cities get reclaimed. The language of occupation becomes the language of faith.

This is exactly how the permission structure works. Christian nationalism doesn’t just correlate with support for political violence — it provides the moral architecture that makes violence feel righteous. Thirty percent of Christian nationalist adherents supported political violence under Biden. When Trump won, support for violence against the state dropped because the state became their instrument of violence. Wilson at the Pentagon is the next step: consecrating that instrument for a crusade against the opposition.

Hierarchy Is the Point

Wilson doesn’t present his positions as fringe opinions bolted onto mainstream theology. The hierarchies nobody else will defend anymore are, for him, the entire infrastructure.

That’s why, like Peter Thiel, he says women don’t count. He calls for repeal of the 19th Amendment on principle, not as a priority. Women in his world submit to husbands. Households get one vote, and the man casts it. His 1999 book describes male sexuality as conquest and colonization of women, in terms he makes clear are not metaphorical.

That’s why he promotes slavery. He’ll call the systemic rape of enslaved Black women, whose children were sold, “unbiblical,” while claiming the institution produced “genuine affection between the races.” He says the white men who did it were “decent human beings.” He frames the topic as a template to reinstate, not a history lesson. Hierarchy is advocated as natural. Authority by race is called divine. Obedience is how he describes love.

Install white supremacist ideology, formerly tracked as a domestic terrorism threat, inside the Pentagon’s control rooms, and then what?

Command is about to be defined as sacred and submission as virtue. The question becomes what orders such a command system will be asked to justify.

Hegseth, Worship and Aryan Nations

Wilson opened a branch of Christ Church in Washington in a building owned by Mark Meadows’s think tank. Hegseth and his family are his regular worshippers.

That’s institutional capture of the Pentagon: external in form, direct in effect. A preacher from a town whose name should ring alarms for anyone who remembers the Aryan Nations compound at Hayden Lake now runs prayer services for the infamously tattooed Secretary of Defense.


Wilson’s infrastructure extends far beyond one church. A publishing house. Streaming shows. Nearly 500 schools coast to coast. He told NPR he sees his educational enterprises as munitions factories. He’s telling you exactly what he’s building, and Hegseth just gave him a key to the building where the actual munitions are.

Repeating Worst History on Purpose

Ludwig Müller was a military chaplain at the Königsberg garrison when the Nazis rose to power in 1933. He had already co-founded the Deutsche Christen, a “positive Christianity” movement fusing theology with racial nationalism. Hitler elevated him to Reich Bishop, tasked with consolidating 28 Protestant churches into a single institution under state ideological control.

His job was Gleichschaltung: making the theological infrastructure serve the political machinery. He rewrote the Sermon on the Mount to eliminate whatever he deemed “meek.” His movement had already declared during WWI that “pacifism is blasphemy against God” — the Reich Church made it policy.

In 2026, Hegseth installed Doug Wilson at the Pentagon.

| | 1933 | 2026 |
| --- | --- | --- |
| Theologian in the military | Ludwig Müller, military chaplain, appointed Reich Bishop by Hitler | Doug Wilson, dominion theologian, leads Pentagon prayer for Hegseth |
| Movement | Deutsche Christen — Christianity fused with racial nationalism | Christian nationalism — dominion theology fused with white evangelical identity |
| Infrastructure | 28 churches consolidated into one Reich Church | Nearly 500 schools, publishing house, streaming shows, D.C. church |
| Racial doctrine | Aryan Christianity, Jewish elements purged from scripture | Slavery apologia, repeal of women’s suffrage, criminalization of homosexuality |
| Language | Sermon on the Mount rewritten; “pacifism is blasphemy” | “Munitions factories,” “foot soldiers,” “every society is theocratic” |
| Violence | SA deployed before power, then channeled through state | 30% backed political violence under Biden, support dropped when Trump won |
| Christian resistance | Confessing Church, Barmen Declaration, Bonhoeffer — arrested, executed | Rep. James Talarico — CBS preemptively complied with FCC pressure to suppress his interview |

Historian Doris Bergen spent thirty years researching the thousand Wehrmacht chaplains who served the Nazi regime. Her conclusion, published in Between God and Hitler:

In the Nazi empire, Christianity and Christian chaplains were essential components in a system of ideas, structures, and narratives that protected and rewarded the perpetrators of genocide and their communities even as it erased their victims and denied their crimes.

Her central question, whom or what does a chaplain serve, is the one Wilson already answered for us on camera.

He knows exactly whom he serves. So does Hegseth.

Pentagon of Theocracy

In 1934, actual Christians responded. The Confessing Church issued the Barmen Declaration, drafted by Karl Barth:

We reject the false doctrine, as if the church could place the Word and work of the Lord in the service of any arbitrarily chosen desires, purposes, and plans.

Hundreds of pastors were arrested. Dietrich Bonhoeffer, who warned that the church must “not just bandage the victims under the wheel, but put a spoke in the wheel itself,” was executed at Flossenbürg in April 1945.

He was 39.

In 2026, Rep. James Talarico, a Presbyterian seminarian running for Senate in Texas, tried to make the same argument on national television. CBS lawyers preemptively blocked the interview from broadcast, citing FCC guidance that the Trump administration had rewritten in January to strip talk shows of their longstanding news exemption. The network then denied it had censored him. Colbert aired the interview on YouTube instead. It got 7.3 million views. Talarico raised $2.5 million in 24 hours.

The suppression didn’t work the way Bonhoeffer’s arrest worked. But the mechanism is the one this post is about: institutional compliance dressed as procedural caution. CBS performed the chilling effect voluntarily. That’s how Gleichschaltung scales — you don’t need to arrest everyone if the institutions censor themselves.

The theological authorization chain is now installed at the Pentagon. A man who describes civic life as theocratic conquest is praying over the people who command the military. A Secretary of Defense who treats his position as a culture war deployment is receiving spiritual counsel from someone who produces “foot soldiers” and builds “munitions factories.”

When Hegseth pushes troops into American cities, the justification will be theological. Militant dominion at the whim of Trump. Spiritual warfare as public policy.

Doug Wilson spent decades overtly espousing exactly this domestic terror framework. Hegseth just flipped it from national security threat to national security capture.

Nazi Gleichschaltung was the same.