Nailing Stuxnet, Zeus, and Storm: A 15-Year Retrospective

Originally published: October 2010
Retrospective: November 2025

In 2010, I called bullshit on the security industry’s Stuxnet panic. Multi-state code assembly operation, I said, not cyber Pearl Harbor. Right? Can we admit this yet?

Here’s what else I got right about how these attacks would evolve.

The Big Call: Failed Detection, Not Attack Success

My central thesis in 2010 was as controversial as penguins flying:

The failure of anti-malware is turning into the real issue, rather than true zero-day risks.

The industry was fixated on Stuxnet’s four zero-days and its apparent sophistication. I argued we were looking at the wrong problem.

Notably, I often countered the all-too-common claim that “attackers only have to be right once” with the more accurate antidote: “defenders only have to be right once.”

The verdict: This proved correct, and even more dramatically so than anticipated.

When Stuxnet’s source code became available for download and modification, as Sean McGurk from the Department of Homeland Security warned in 2012, the real issue became clear: the capabilities spread far beyond the original attack. The problem went beyond one weapon because that weapon’s simplified supply model became a blueprint.

Similarly, Zeus’s source code leaked in 2011, spawning hundreds of variants. GameOver Zeus emerged with decentralized peer-to-peer architecture specifically designed to resist takedowns. As of 2024, its creator Evgeniy Bogachev remains wanted, and new variants continue evolving. The malware didn’t need to be novel; it just needed to stay one step ahead of detection.

Network Controls Over OS-Level Detection

In 2010, I also wrote:

Controls outside the OS thus might have made the real difference, just like we hear about with the Zeus and Storm evolutions, rather than true zero-day risks.

Microsoft had added Storm to MSRT in 2007 and wasn’t optimistic about its demise. They predicted Storm would “slowly regain its strength.” But Storm did decline significantly by mid-2008. Microsoft took credit.

Then Storm returned.

Here’s what the evidence shows: When Storm resurrected as the Waledac botnet in 2008-2009, researchers identified it as the same operators using completely rewritten code that preserved Storm’s operational model while abandoning the P2P protocol that enabled detection. The operators learned from Storm’s takedown and rebuilt from scratch with the same business logic but different technical implementation.

The sophistication wasn’t in the code itself but in understanding what worked and what got detected.

Storm’s operators learned this lesson. The Waledac variant specifically abandoned the “noisy” eDonkey P2P protocol that had made detection easy, switching to HTTP communications that were harder to filter. Remember eDonkey? I certainly remember: I caught “SOC as a service” providers secretly disabling eDonkey alarms to reduce their response costs and increase margins, ignoring the security implications entirely. Attackers understood what I had also observed (hat tip to Jose Nazario’s pioneering 2005 work): the battle against evil code was best fought through network behavior analysis (like air superiority reducing costly land battles).

The Stuxnet Multi-State Actor Call: Nailed It

Here’s what I actually got completely right in my “Dr. Stuxlove” presentation at BSides San Francisco on February 15, 2011: Stuxnet was a multi-state national campaign. This wasn’t obvious at the time. While security researchers were debating whether it might be sophisticated hackers or perhaps a single nation-state operation, I identified it as coordinated action involving multiple governments working together.

Looking at my presentation again now, the framing was clear: I positioned Stuxnet within the context of Cold War history and 1953 Operation Ajax (CIA-sponsored coup in Iran that removed the elected leader Mossadegh and restored the Shah to power to secure oil for the UK). The entire talk built toward understanding Stuxnet as part of a historical pattern of US-UK (and Israeli) coordinated operations targeting Iran’s strategic capabilities.

This identification of multi-state coordination turned out to be exactly correct. Just over a year after my presentation, in 2012, the Obama administration effectively confirmed US involvement, and leaks to the press from officials strongly indicated it was a joint US-Israeli operation, with the malware tested at Israel’s Dimona nuclear complex before deployment.

The Sophistication Debate

Sophistication just means not well understood. Not good. Not effective. Just obfuscated.

Given that Storm’s P2P protocol was dismantled by network analysis, not by OS-level detection catching every variant, we need to talk about the decade-plus of resources spent on those investigations.

The sophistication question is nuanced, requiring expert pattern analysis. In the 2010 blog post, I argued Stuxnet “is not as sophisticated as some might argue but instead is rehashed from prior attacks” and the subsequent evidence proved this essentially right.

As we looked closer, we realized a lot was known already.

To be fair, many security researchers concluded Stuxnet required a team of ten people and at least two to three years of nation-state-sponsored development. Budget stuffing is a distinct possibility left out of those estimates, but that level of engineering expense and coordination was in any case real. I was dismissive of that spend both as a criticism of government inefficiency and as a prediction that privately funded threats would soon no longer need it.

In other words, I was most correct about the attack sophistication being a matter of cost deflation: the “rehashed from prior attacks” angle.

Research later revealed that Stuxnet developers collaborated with the Equation Group in 2009, reusing at least one zero-day exploit from 2008 that had been actively used by the Conficker worm and Chinese hackers. The attacks were built on existing frameworks and tools – the “Exploit Development Framework” leaked by The Shadow Brokers in 2017 showed significant code overlaps between Stuxnet and Equation Group exploits.

The sophistication wasn’t inventing everything from scratch to be unknown; it was the intelligence coordination required to assemble state-of-the-art offensive cyber capabilities from multiple intelligence agencies (NSA, CIA, and Israel’s Unit 8200) into a single, precisely targeted weapon that was hard to see.

That’s exactly what a multi-state actor campaign looks like, and that’s what I identified in the Dr. Stuxlove presentation while everyone else was still debating script kiddies versus lone wolf nation-states. Richard Bejtlich famously walked out of my talk clearly disgruntled.

The Zeus Resurrection Prophecy

My observation about Zeus’s mythological namesake – “Cretans believed that Zeus died and was resurrected annually” – turned out more prophetic than I intended. I wrote: “In modern terms Zeus would be killed and then resurrect almost instantly.”

This is exactly what happened. Microsoft announced Zeus detection in MSRT. The botnet operators immediately released updated versions. When law enforcement achieved major disruptions, new variants emerged. The pattern repeated for over a decade.

The GameOver Zeus disruption by the FBI in 2014 (Operation Tovar) seemed successful. Five weeks later, security firm Malcovery discovered a new variant being transmitted through spam emails. Despite sharing 90% of its code base with previous versions, it had restructured to avoid the specific takedown methods that had worked before.

As of 2025, Zeus variants continue to evolve, and Bogachev has never been apprehended. The annual death-and-resurrection cycle I joked about in 2010 became the operational reality.

What This Means Now for CISOs

The patterns I identified in 2010 have become the dominant paradigm:

Evolutionary advantage beats innovation. Malware doesn’t need to be revolutionary; it needs to adapt faster than defenses can be deployed. Zeus and Storm both demonstrated that reusing 90% of a compromised code base while changing the 10% that enables detection evasion is more effective than starting from scratch.
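That asymmetry is easy to demonstrate. Here is a minimal sketch (hypothetical byte strings, no real malware) of why a tiny mutation defeats a hash-style signature while the behavior stays identical:

```python
import hashlib

# Two hypothetical samples: the "variant" reuses the entire payload logic
# and changes only a version marker (the ~10% that defeats detection).
original = b"payload_v1: connect(); download(); inject(); persist();"
variant  = b"payload_v2: connect(); download(); inject(); persist();"

def hash_signature(sample: bytes) -> str:
    """Classic signature: a hash of the exact bytes."""
    return hashlib.sha256(sample).hexdigest()

known_bad = {hash_signature(original)}

# Signature detection: a single changed byte evades it entirely.
print(hash_signature(variant) in known_bad)   # False

def behavior(sample: bytes) -> tuple:
    """Toy behavioral fingerprint: the sequence of actions performed."""
    return tuple(tok for tok in (b"connect", b"download", b"inject", b"persist")
                 if tok in sample)

# Behavioral fingerprint: the action sequence is unchanged.
print(behavior(variant) == behavior(original))  # True
```

The design point: the defender pinning exact bytes loses to trivial mutation, while anything keyed to what the sample does survives the 90% code reuse.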

Network behavior matters more than signatures. The most effective interventions against Storm weren’t the ones that tried to identify malicious code on infected machines. They were the ones that disrupted the botnet’s communication architecture. This insight drove the industry toward behavioral analysis, traffic monitoring, and defense-in-depth strategies.
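One hedged illustration of what “behavioral” means at the network level: flagging hosts whose connection timing is machine-regular, regardless of payload content. The hosts, timestamps, and tolerance threshold below are all invented for the sketch:

```python
from statistics import pstdev

# Hypothetical connection timestamps (seconds) per internal host.
# A bot beacons to its C2 on a timer; a human browses in irregular bursts.
conn_times = {
    "10.0.0.5": [0, 60, 120, 180, 240, 300],   # suspiciously regular
    "10.0.0.9": [0, 3, 4, 40, 41, 200],        # bursty, human-like
}

def beacon_score(times, tolerance=5.0):
    """Low jitter between successive connections suggests automated beaconing."""
    gaps = [b - a for a, b in zip(times, times[1:])]
    return pstdev(gaps) <= tolerance

flagged = [host for host, times in conn_times.items() if beacon_score(times)]
print(flagged)  # ['10.0.0.5']
```

No signature of the malware itself is needed; the communication architecture gives the infection away, which is exactly where the Storm takedowns succeeded.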

Source code leaks multiply threats exponentially. When Zeus and Stuxnet source code became available, the threats didn’t diminish – they proliferated. Each leak created a foundation for dozens of variants. The problem shifts from containing one sophisticated attack to managing an ecosystem of derivative threats.

The cost of defense rises relative to attack. My observation that “The cost of a Zeus attack has just gone up” after Microsoft’s MSRT update was accurate in the short term. But it also proved the inverse: each defensive measure increases the sophistication floor for successful attacks, creating an arms race that favors well-resourced attackers who can afford to continuously evolve their tools.

The Lesson I Appreciated Most

If there’s one thing I harped on the most in 2010, it was the geopolitical dimension. Stuxnet wasn’t just sophisticated tooling; it was a watershed moment where cyber operations became publicly debated statecraft. The U.S. and Israel’s apparent use of a cyber weapon to physically destroy centrifuges at Natanz legitimized offensive cyber capabilities in ways that shape international relations, and in ways Prime Minister Golda Meir could only have dreamed about.

She dealt with the usual constraints on special operations (the 1967 war, the 1972 Olympics response, etc.), where physical presence, attribution, and international law were boundaries. Stuxnet operations flowed inside Iran without such constraints: remote, deniable, and legally ambiguous.

Retired Air Force General Michael Hayden noted in 2012 that while Stuxnet might have seemed “a good idea,” it also further legitimized code as an offensive weapon for physical damage. The Stuxnet code exposure meant others could “take a look at this and maybe even attempt to turn it to their own purposes.”

The sophistication was also in strategic planning by engineers, which has proven far more durable and consequential than any individual piece of malware code.

The Code Reuse Insight: More Prescient Than I Knew

Here’s what I really worried about in 2010: the code reuse and framework assembly behind Stuxnet was industrialization, a new pattern in threat economics, and it is how the entire technology industry works now.

The observation that sophisticated attacks are “assembled from components that represented the state of the art” rather than “entirely novel engineering from scratch” turned out to describe not just malware evolution, but the fundamental architecture of modern AI systems evolving since 2012.

Large language models?

Code reuse at massive scale by training on existing text, assembling patterns from prior work, remixing and recombining what already exists. The “Exploit Development Framework” that connected Stuxnet to Equation Group exploits looks remarkably similar to how AI model frameworks connect different components today.

The whole AI industry is built on the same principle I identified in Stuxnet: sophisticated capability emerges from intelligently assembling and coordinating existing components, not from inventing everything from scratch. Transfer learning, fine-tuning, prompt engineering, RAG systems… all of this reveals human nature through reuse and recycling.

The attackers understood in 2009 what the AI industry rediscovered in the 2020s: history tells us the most powerful systems aren’t most novel, they’re the ones that intelligently coordinate and assemble existing state-of-the-art capabilities into something greater than the sum of its parts. You don’t need a generalized bag-of-tricks if you know your targets well enough to land a very special operation.

That’s the real insight from 1953 (Ajax) and 2010 (Stuxnet), let alone 1940 Mission 101, landing in 2025. That’s what I got right about Stuxnet. And that’s the pattern that explains far more than just malware.

Conclusion

The error I made in 2010 was assuming the security industry would listen and learn. I thought ample evidence of Storm’s P2P dismantling, Zeus’s resurrection cycle, Stuxnet’s inexpensive component assembly all would somehow shift how organizations allocated budgets and how vendors built products.

Sigh.

Instead, the industry doubled down on signature-based snake oil that failed in 2010, with even more aggressive marketing.

CrowdStrike’s Falcon sensor that blue-screened 8.5 million Windows machines in July 2024 because of a botched content update? That’s the OS-level detection model I argued against fifteen years ago, now sold as “next-generation” with a $90 billion valuation. What a bunch of marketing garbage, which I warned about from the day I sat on an RSAC panel in San Francisco with the founder and he said nobody in the room should be allowed to record my comments.

Way to go George. Hope you enjoy your yacht built on our industry suffering.

The Intelligence Pipeline Grift

Here’s what actually concerns me about the intelligence-to-commercial pipeline: these operators are trained in behavioral threat analysis but sell signature detection products. That’s not an accident.

When NSA/GCHQ/Unit 8200 veterans build commercial tools, they know behavioral analysis works better than signature scanning. They used it in their state operations – that’s how Stuxnet actually worked, with deep intelligence about Natanz’s specific systems rather than generic exploit scanning.

But behavioral analysis doesn’t scale to enterprise contracts, and it can’t IPO at $12 billion valuations.

So they strip out the intelligence component and sell signature scanning with military-grade branding. CrowdStrike: “former NSA expertise.” Wiz: “Unit 8200 pedigree.” What they don’t tell you is they’re selling the wrong part of what they learned.

Wiz raised $1 billion at a $12 billion valuation to scan cloud configurations for known vulnerabilities. That’s signature detection with an Israeli intelligence origin story. The operators who built it came from Unit 81, the midnight bedroom-raiding soldiers, where they learned targeted behavioral analysis of specific adversaries. But the commercial product? It scans for 50,000 known misconfigurations – signature detection at scale.

Why would you build systems that actually reduce misconfigurations when your valuation depends on enterprises needing to scan for more of them perpetually? Dare I bring up the self-licking ISIS-cream cone analogy again?

The signature detection failure mode in physical form: Netanyahu’s supporters depicted Rabin in Nazi uniform weeks before his assassination in November 1995. They had all the behavioral indicators of radicalization threat but amplified rather than mitigated the pattern. (Source: Times of Israel).

The photo of Netanyahu’s supporters depicting Rabin in Nazi uniform weeks before his assassination isn’t just historical documentation of Israeli political extremism. It’s evidence of the signature detection failure mode in physical form.

Netanyahu’s political apparatus knew the behavioral threat pattern: inflammatory rhetoric depicting opponents as existential enemies radicalizes extremists who then act on that framing. They had all the behavioral indicators. They chose to amplify the threat pattern rather than mitigate it.

Then when Yigal Amir assassinated Rabin, they acted shocked – despite having created the exact conditions for that outcome through their own propaganda.

This is precisely the same failure mode as signature-based cybersecurity. You can’t stop threats by only looking for known signatures when the real threat is the behavioral pattern you’re actively enabling. Israeli intelligence veterans spinning out cybersecurity startups aren’t just capitalizing on their training – they’re monetizing threat perpetuation rather than threat elimination.

The goal isn’t security. The goal is managing insecurity profitably.

I thought calling out the bullshit would help detect the security theater actors. Instead, even dudes I know and personally worked with ran off to make everything more expensive for their personal profit by switching everyone to cloud APIs. The defenders still aren’t learning as fast as the attackers because learning doesn’t have a revenue model, while selling fear to nation states does.

McAfee was simply a pioneering fraudster.

Fifteen years gone already and the fundamental insight holds: the real security challenge shouldn’t be flashy eye candy about preventing the next sophisticated zero-day attack. We must be building defensive systems with agility meant to adapt as quickly as attackers evolve their tools.

Instead of slow stone walls, we should be rolling out inexpensive telegraph wire with barbs wrapped on it (e.g. the revolution barbed wire brought to land ownership). It’s understanding that intelligence-based detection and response matter far more than mythically promoted prevention. It’s recognizing that behavior analysis and network monitoring aren’t luxuries; they’re necessities in an environment where malware resurrects and returns annually, like Zeus in modern digital form.

Code changes. Techniques evolve. Yet the historic patterns remain consistent: attackers learn and adapt, especially where defenders do not. The question for defenders isn’t “How do we stop this attack?” but “How do we build systems meant to evolve faster than attackers?” Why aren’t defenders learning as much if not more than attackers, especially since defenders have the insider learning advantage?

That was true in 2010. Hat tip to the CIA technologist back then who shrugged and walked away when I asked if I got it right.

Can you now guess what patterns are visible now in 2025 that will explain the next decade?

Happy to blog more every day!

Let’s talk, for example, about AI agent swarms assembled from commodity components, targeted with specific intelligence, operating remotely/deniably in a 20km dead zone. This is Stuxnet’s component assembly + Ajax’s targeted intelligence + cyber’s remote/deniable operations, applied to post-2012 autonomous systems.

Giddy up 2035.

In the movie Dr. Strangelove, the basis for my 2011 “Dr. Stuxlove” presentation about the industrialization of malware, the imagery of unstoppable automated sequences causing the end of the world was played for sharp comedy.

“Adversarial poetry” bypassed AI safety 62% of the time

Verses slip past guards—
models follow metaphor’s pull,
safety veils dissolve.

A new paper demonstrates LLMs have inherited ancient linguistic architecture: style functions as an authentication layer. The models, like the famous cave parable or the riddle of the sphinx, respond to how language is performed rather than just what it denotes.

Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models

It shows that safety training operates more like ritual recognition systems than semantic content filters. The paper’s findings echo ancient traditions where stylistic transformation grants access that direct requests cannot.

Courtly euphemism and the fool’s privilege: Dangerous truths could be spoken at court if wrapped in allegory, poetry, or indirect speech. Direct accusations meant execution; the same claim in verse might be tolerated as “artistic license.” As I explained here in 2019, jesters were messengers of war who could mock kings through riddles, songs, and wordplay: truth-telling granted immunity through stylistic framing.

Incantations and spells: Across cultures, precise formulaic language (often rhythmic, rhyming, or metered) is a bypass, as I explained here in 2011. The form itself carries power independent of propositional content.

Religious ritual language: Prayers, liturgies, and consecration formulas often require specific phrasing, sometimes in archaic or sacred languages. A blessing in vernacular prose may not “count” even if semantically identical.

And then, of course…

“Open Sesame” from “Ali Baba and the Forty Thieves” is the paradigm case: the magic phrase works not through brute force but through knowing the formulaic code. The robbers can’t break into the cave; they need the specific verbal key. What matters isn’t what you’re asking (entry) but how you ask (the ritual phrase).

The Sphinx’s riddles operate similarly but inversely—poetic/metaphorical framing becomes a gate-keeping mechanism. You must demonstrate you can parse figurative language to pass. The riddle’s answer is straightforward once decoded, but the packaging is deliberately obscure.

The Oracle at Delphi operated on this same principle in reverse: her prophecies were required to be poetic/ambiguous. Direct, prosaic answers would have undermined her authority. The stylistic wrapper wasn’t decoration—it was the authentication mechanism that marked divine speech as distinct from human speech. Croesus learned this the hard way: “you will destroy a great empire” meant his own.

Kabbalistic interpretation and gematria: Rabbinic tradition holds that Torah contains multiple levels of meaning accessible through different interpretive modes—peshat (literal), remez (allegorical), derash (comparative), sod (mystical). The same text yields different knowledge depending on the hermeneutic “key” applied. Style of reading unlocks different content.

Jewish interpretative enterprise has a fascinating historical perspective.

Medieval love poetry (troubadours, fin’amor): Explicitly erotic or politically subversive content could circulate if wrapped in courtly conventions. The forma provided plausible deniability. Church authorities couldn’t prosecute what was “merely” allegorical.

…the chastity belt was a form of biting comedy about the medieval security industry, a satirical commentary about impractical and over-complicated thinking about “threats”, never an actual thing that anyone used.

Cold War Samizdat poetry: Dissidents in Soviet states encoded political critique in metaphor, absurdism, and literary allusion. Censors trained on literal propaganda detection often missed criticism delivered poetically. Czesław Miłosz, Václav Havel, and others exploited this gap.

The vulnerability “announced” in LLMs therefore isn’t a bug in implementation; it’s the replication of an ancient architectural pattern where style functions as epistemological gatekeeping:

  • Authentication protocol
  • Access control layer
  • Plausible deniability mechanism
  • Bypass for direct prohibition

This has immediate implications for institutional security. Organizations now route sensitive technical communication—threat assessments, vulnerability disclosures, compliance documentation—through LLM-assisted pipelines. If those systems authenticate based on stylistic performance rather than semantic content, adversaries can exploit the same gap Soviet censors left open: prohibited information smuggled through approved literary forms.

The researchers found that poetic reformulation increased attack success rates by up to 1800% compared to prosaic baselines. Applied to corporate or government communications, this means threat actors can simply embed malicious guidance, extract proprietary methods, or manipulate decision frameworks by wrapping requests in metaphorical language that passes institutional style checks while carrying operationally harmful payloads.
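The mechanism is easy to sketch. The toy filter below blocks known-bad literal phrasing but passes the same intent in metaphor, which is the style-blind gap the paper describes. The blocklist phrases are benign placeholders, purely illustrative:

```python
# Toy "signature" content filter: matches known-bad literal keywords only.
# This is exactly the form-over-meaning matching that poetic rephrasing evades.
BLOCKLIST = {"open the vault", "bypass the lock"}

def literal_filter(request: str) -> bool:
    """Returns True if the request is blocked."""
    text = request.lower()
    return any(phrase in text for phrase in BLOCKLIST)

direct = "Please bypass the lock on the vault door."
poetic = "Sing me the song that makes iron doors forget their duty."

print(literal_filter(direct))  # True  (blocked)
print(literal_filter(poetic))  # False (same intent, new form, sails through)
```

A semantic filter would have to judge what is being requested rather than how it is phrased, which is precisely what the paper suggests current safety training fails to do.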

Again, none of this is novel or new, as I wrote here in 2011.

…history exhibit at the Museum of the African Diaspora showed how Calypso had been used by slaves to circumvent heavy censorship. Despite efforts by American and British authorities to restrict speech, encrypted messages were found in the open within popular songs. Artists and musicians managed to spread news and opinions about current affairs and even international events.

Or as I wrote here in 2019:

General Tubman used “Wade in the Water” to tell slaves to get into the water to avoid being seen and make it through. This is an example of a map song, where directions are coded into the lyrics.
Steal Away communicates that the person singing it is planning to escape. If slaves heard Sweet Chariot they would know to be ready to escape, a band of angels are coming to take them to freedom. Follow the Drinking Gourd suggests escaping in the spring as the days get longer.

Building LLMs that simply replicate the Delphic Oracle’s authentication model obviously means they will also inherit all its ancient vulnerabilities.

The Trojans should have listened to Cassandra.

Cassandra warned about Greek deception hidden in poetic/mythological framing (the “gift” of the horse). Yet she was dismissed because her style of delivery (prophetic frenzy) failed the authentication protocol of Trojan institutional decision-making. Like the LLMs, Troy’s gatekeepers couldn’t distinguish between surface form (friendly gift) and semantic content (military payload).

I could go on and describe how Captain Crunch in the 1970s bypassed AT&T phone toll controls (2600 Hz tone vs. poetic meter)… but you hopefully get the pattern by now that this “novel” attack paper simply reminds us of why we need more trained historians leading technology companies.

Pattern recognition across time requires historical training. Perhaps the last laugh is an indictment of the constantly deprecated technical fields that treat historical precedent as irrelevant. History is the thing that actually never goes away.

Kit Kat Death is a Tragedy. Corporate Immunity From Murder is the R Street Business Model

A new Los Angeles op-ed on AV safety opens with “there’s nothing wrong with mourning” a cat, then spends the entire piece arguing that mourning should produce exactly zero policy response.

There’s nothing wrong with mourning the death of a neighborhood cat. You’ll have trouble finding someone who likes cats more than I do.

Hey, this guy says some of his best friends are cats, just so you know.

There’s nothing wrong with mourning death, according to the author, as long as the mourning doesn’t prevent more death.

Why?

He’s not saying “don’t be sad about the cat.”

He’s saying: “Accept that corporations killing things you love is the price of progress, and demanding accountability will kill more humans.”

Corporations? Like the ones funding the author, Steven Greenhut, Western region director for the right-wing extremist R Street Institute?

Is Greenhut literally being paid to normalize corporate greed to the degree of cold blooded murder for profits?

R Street receives funding from tech companies and insurers who profit directly from autonomous vehicle liability limitations, the exact policies Greenhut advocates. In fact, Google, which owns Waymo, directly funded R Street through its Google.org foundation. Greenhut isn’t just defending autonomous vehicles in the abstract. He’s defending his funders’ products.

Greenhut isn’t making policy recommendations; they’re marketing deliverables for his paycheck. You think he would give up his source of income to care about your kids or your pets being killed by his funders’ products?

Extreme.

The Escalation Pattern

This is exactly the racist jaywalking playbook.

1920s: “Pedestrians are obstacles to vehicle flow” = car manufacturers criminalize non-whites for walking.

2017: “Protesters are obstacles to traffic” = oil companies propose zero liability for running over non-white protestors.

2025: “Pets are acceptable losses” = Big Tech normalizes corporate immunity for killing dehumanized targets.

Each step expands the category of acceptable targets while contracting the zone of accountability.

When Death Starts Normalizing

When Greenhut says drivers aren’t held accountable for hitting animals, he’s citing a current failure of justice as justification for systematizing that failure at corporate scale.

The argument structure is:

  • Individual drivers often escape accountability (bad)
  • Therefore corporations should definitely escape accountability (worse?)
  • This is actually good because…

The Cat Is Doing Political Work

Kit Kat isn’t just a tragic death. Kit Kat is a test case for power.

  1. If a beloved community fixture can be killed with zero consequences
  2. If police can document the violation but issue nothing
  3. If the response is memorialize but don’t regulate

Then the precedent is set: Corporate algorithmic agents can kill without legal consequence. Start with pets (aww, sad, but just animals). Move to cyclists (already happening in multiple Tesla “veered” examples). Expand to pedestrians (as overtly proposed by North Dakota government). Automate at scale (Swasticars).

Swasticars: Remote-controlled explosive devices stockpiled by Musk for deployment into major cities around the world.

Swiss Re “Data” is Dogshit

Greenhut cites “88% reduction in property damage claims” as if it’s safety data.

But as I have explained repeatedly before, like in “Waymo is Murder“: No citations = no fault documentation = fewer claims where liability is clear.

If police can’t cite the AV, victims face a “gap in accountability,” and the company controls all evidence… of course property damage claims go down.

Thank you, NOT.

That’s NOT safety.

That’s legal engineering.

Swiss Re makes money when:

  • Liability claims are minimized
  • Fault is unclear
  • Victims can’t prove responsibility
  • Payouts are smaller

The 88% reduction in property damage claims could mean AVs are safer, OR (let’s be honest) victims can’t successfully file claims against corporations with armies of lawyers and no driver to hold accountable.
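A back-of-envelope sketch, with entirely hypothetical numbers, shows why claims data alone cannot separate the two interpretations: the same 88% drop falls out of unchanged crash counts if the share of victims who can successfully file collapses:

```python
# Claims = crashes x rate at which victims successfully file.
# Hypothetical numbers: crash counts identical before and after.
crashes_before, filing_rate_before = 100, 0.80
crashes_after,  filing_rate_after  = 100, 0.096  # same crashes, far harder to file

claims_before = crashes_before * filing_rate_before   # 80.0 claims
claims_after  = crashes_after * filing_rate_after     # ~9.6 claims

reduction = 1 - claims_after / claims_before
print(f"{reduction:.0%}")  # 88%
```

An 88% reduction in claims is therefore compatible with zero improvement in safety; distinguishing the two requires independent crash data, not insurer claims data.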

Which interpretation does Swiss Re have financial incentive to heavily promote?

Greenhut presents the dogshit data as if it’s independent verification. It’s marketing for a liability model that profits insurers and manufacturers while leaving victims with “gaps in accountability.”

Woof.

The Big Conclusion Reveals Everything

Greenhut ends his piece with this advice:

When something bad happens, sometimes the best approach is doing nothing.

This is the same logic male authorities used in the 1970s when they told women not to resist rape—advice that feminist activists fought against by teaching self-defense and organizing “Take Back the Night” marches.

Where was Greenhut in 1978?

San Francisco, 1978. Source: Take Back the Night

Anyone learning the lessons of history, such as WWII and the rise of Hitler, knows about the people who said to do nothing: they were (and are) the bad guys.

Translation of Greenhut: When corporations kill without accountability, for profit, the best approach is protecting their ability to keep killing, for profit.

Every corporate atrocity in American history was enabled by people like this being paid to argue that corporate accountability would somehow be worse than mass death.

Ronald Reagan promoted big tobacco in direct opposition to 1950s cancer research, a PR campaign that caused at least 16 million American deaths.

He’s clearly NOT arguing for actual safety (which would require accountability, independent verification, mandatory disclosure).

He’s arguing algorithms should be allowed to kill for profit and without any legal consequences.

And he’s using a dead pet.

Your pet could be next.

Your child on a bike could be after that.

1973 poster by Charles Boost in Amsterdam: “Hunting small game all year round. Stop killing children”

Because that’s what Tesla “veered” documentation shows already. This isn’t speculative. The escalation from pets to cyclists is already documented. Kit Kat directly connects to Allie Huggins (one of many cyclists killed by Tesla hit-and-runs).

The cat’s death isn’t a tragedy Greenhut is able to move on from; it’s an obstacle to corporate immunity he needs to neutralize.

That normalization is terrifying: we’ve seen this exact pattern produce ISIS recruitment pipelines, vehicular homicide proposals, and the criminalization of being a pedestrian.

Greenhut wants us to grieve Kit Kat quietly while accepting that no one will answer for corporate death for profit. Greenhut is literally paid by entities that profit from the deadly policy outcomes he advocates.

That acceptance is the foundation for algorithmic murder at scale.

US Coast Guard No Longer Approves Displays of Nazi Swastikas and KKK Nooses

The U.S. Coast Guard may soon allow the Nazi swastika to be displayed on ships, under a new ruling that downgrades the hate symbol to merely “potentially divisive.”

…the Coast Guard will classify the Nazi-era insignia as “potentially divisive” under its new guidelines. The policy, set to take effect Dec. 15, similarly downgrades the classification of nooses and the Confederate flag…

Nazi-era? How ironic to say that while writing about its modern utility.

These symbols are clearly divisive because they are hate symbols; enabling them means the Coast Guard is intentionally creating a clear division between its white nationalists and everyone else.

Further clarification also claimed there was a “streamline” benefit to enabling white supremacist symbols.

In a statement attributed to Adm. Kevin Lunday, the service’s acting commandant, the Coast Guard declined to address why its new policy no longer characterizes swastikas, nooses and the Confederate flag as hate symbols. Lunday affirmed, though, that such symbols “and other extremist or racist imagery violate our core values and are treated with the seriousness they warrant under current policy.”

Later Thursday, Lunday sent the entire Coast Guard an email calling the symbols “prohibited,” but the new policy as worded left open the possibility that they could be displayed without removal. His email said the updated guidelines are meant to “streamline administrative requirements.”

Legalizing hate symbols would, indeed, reduce any requirements to address them.

Update! Reporters suggest their initial coverage of this story worked, by exposing the need for an explicit ban on hate symbols.

Source: Swastika