Tesla Crash Data Surged 118% in 2024

New data and analysis from LendingTree reveal an alarming surge in Tesla crashes, raising serious questions about the intersection of fraudulent autonomous driving claims and impaired driving.

To begin, Tesla ranks as the worst brand for crash rates in nine states, which happen to be the states where it sells the most vehicles.

Overall, the Tesla incident rate increased by 18.7% from the previous period (31.13 to 36.94 incidents per 1,000 drivers). Its current incident rate is 16.1% higher than the average among the top 10 brands.

The report clearly calls out Tesla as the brand with the worst incident rate.

  • Tesla has the highest crash rate at 26.67 per 1,000 drivers
  • 13.3% increase from the previous period
  • Significantly higher crash rate than other car brands

Tesla DUI Numbers Most Shocking

Tesla’s DUI-related incident rate stands out among all brands. It skyrocketed by 118% in just one year, climbing from 1.02 to 2.23 incidents per 1,000 drivers.
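
For readers who want to check the arithmetic, here is a minimal sketch showing how the percentage increases quoted above follow from the per-1,000-driver rates; the rates are taken from the figures in this article, and the small helper function is illustrative only, not part of the LendingTree report.

  # Minimal sketch: derive the percentage increases quoted above from the
  # per-1,000-driver rates. Rates come from the article; the helper
  # function is illustrative, not part of the LendingTree report.

  def pct_increase(old: float, new: float) -> float:
      """Percentage increase from old to new."""
      return (new - old) / old * 100

  # Overall incident rate per 1,000 Tesla drivers
  print(f"Incidents: {pct_increase(31.13, 36.94):.1f}%")  # ~18.7%

  # DUI-related incident rate per 1,000 Tesla drivers
  print(f"DUI: {pct_increase(1.02, 2.23):.1f}%")          # ~118.6%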

This dramatic increase comes against a backdrop of controversial messaging about Tesla’s autonomous capabilities, including CEO statements since 2016 suggesting drivers can remove their hands from the wheel or even sleep while the car is in motion.

Autonomous Driving Lies

Tesla’s CEO has made numerous public statements suggesting drivers are safer letting the vehicle handle complex driving situations on its own.

These claims sharply contrast with Tesla’s official statements, which require drivers to remain alert and maintain control of the vehicle at all times.

The combination of the two is, as known in disinformation analysis, the worst of all possible options. An official warning is far less likely to be heeded when the CEO openly contradicts it. Arguably it would be better to have no official warning at all, because then the CEO’s claims would be given less credence.

Killer Lies

The combination of messaging about autonomous capabilities and rising DUI rates suggests a potentially lethal misconception: that Tesla’s autonomous features make it safer to drive while impaired. This dangerous assumption ignores critical facts:

  1. Tesla’s autonomous features require an alert, sober driver
  2. No current autonomous system is approved for operation by an impaired driver
  3. Autopilot and Full Self-Driving are driver-assistance tools, not replacements, and their camera-only sensor approach is dangerously inferior to the sensor suites used by most other car brands

Insurance and Legal Implications

Tesla owners face potentially higher insurance premiums due to rising incident rates, costs that ultimately affect everyone. DUI charges apply regardless of autonomous feature usage, and legal experts warn that Autopilot use provides no defense in DUI cases.

This means that, despite fancy technological double-speak in public statements, the fundamental rules of safe driving still stand, in direct contradiction to the Tesla CEO’s wildly unaccountable and untrustworthy claims:

  • Never drive under the influence
  • Always maintain full attention on the road
  • Don’t rely on autonomous features for impaired driving
  • Arrange alternative transportation when drinking

As Tesla’s DUI rates continue to climb, it’s crucial to remember that no amount of technology can make impaired driving safe. The responsibility for safe operation remains with the driver, regardless of autonomous capabilities or public statements suggesting otherwise.

Tesla Crashes Through FAA Fence, Stopped by Tree

Once again, questions are being asked about Tesla “driverless” claims after an early morning, high-speed crash into a tree, the vehicle having first plowed through an FAA facility fence.

INCIDENT DATE/TIME: 2-8-25 6:20 am
LOCATION: 9175 Kearny Villa Rd (FAA)
AREA/CITY: San Diego
DETAILS:
— The male driver of the Tesla was driving northbound on Kearny Villa Rd when he lost control of the vehicle.
— He crashed through the barriers and fencing that protects the FAA building.

DeepSeek Jailbreaks and Power as Geopolitical Gaming

Coverage of AI safety testing reveals a calculated blind spot in how we evaluate AI systems – one that prioritizes geopolitical narratives over substantive ethical analysis.

A prime example is the recent reporting on DeepSeek’s R1 model:

DeepSeek’s R1 model has been identified as significantly more vulnerable to jailbreaking than models developed by OpenAI, Google, and Anthropic, according to testing conducted by AI security firms and the Wall Street Journal. Researchers were able to manipulate R1 to produce harmful content, raising concerns about its security measures.

At first glance, this seems like straightforward security research. But dig deeper, and we find a web of contradictions in how we discuss AI safety, particularly when it comes to Chinese versus Western AI companies.

The same article notes that “Unlike many Western AI models, DeepSeek’s R1 is open source, allowing developers to modify its code.”

This is presented as a security concern, yet in other contexts we champion open-source software and the right to modify technology as fundamental digital freedoms. When Western companies lock down their AI models, we often criticize them for concentrating power and limiting user autonomy. Even more to the point, many of the most prominent open-source models are actually from Western organizations: Pythia (EleutherAI), OLMo (AI2), Amber and CrystalCoder (LLM360), T5 (Google), BLOOM (BigScience), StarCoder2 (BigCode), and Falcon (TII), to name a few.

Don’t accept an article’s framing of open source as “unlike many Western AI” without thinking deeply about why it would say such a thing. It reveals how even basic facts about model openness and accessibility are mischaracterized to spin a “China bad” narrative.

Consider this quote:

Despite basic safety mechanisms, DeepSeek’s R1 was susceptible to simple jailbreak techniques. In controlled experiments, the model provided plans for a bioweapon attack, crafted phishing emails with malware, and generated a manifesto containing antisemitic content.

The researchers focus on dramatic but relatively rare potential harms while overlooking systemic issues built into AI platforms by design. We are more concerned about the theoretical possibility of a jailbroken model generating harmful content than about documented cases of AI systems causing real harm through their intended functions – from hate speech, to chatbot interactions that have influenced suicides, to autonomous vehicle crashes.

The term “jailbreak” itself deserves scrutiny. In other contexts, jailbreaking is often seen as a legitimate tool for users to reclaim control over their technology. The right-to-repair movement, for instance, argues that users should have the ability to modify and fix their devices. Why do we suddenly abandon this framework when discussing AI?

DeepSeek was among the 17 Chinese firms that signed an AI safety commitment with a Chinese government ministry in late 2024, pledging to conduct safety testing. In contrast, the US currently has no national AI safety regulations.

The article presents a concerning lack of safety measures, while simultaneously noting formal safety commitments, and then criticizes the model for being too easy to modify into ignoring those commitments. This head-spinning series of contradictions reveals how geopolitical biases can distort our analysis of AI safety.

We need to move beyond simplistic Goldilocks narratives about AI safety that automatically frame Western choices as inherently good security measures while Chinese choices can only be either too restrictive or too permissive. Instead, we should evaluate AI systems based on:

  1. Documented versus hypothetical harms
  2. Whether safety measures concentrate or distribute power
  3. The balance between user autonomy and preventing harm
  4. The actual impact on human wellbeing, regardless of the system’s origin

The criticism that Chinese AI companies engage in speech suppression is valid and important. However, we undermine this critique when we simultaneously criticize their systems for being too open to modification. This inconsistency suggests our analysis is being driven more by geopolitical assumptions than by rigorous ethical principles.

As AI systems become more prevalent, we need a more nuanced framework for evaluating their safety – one that considers both individual and systemic harms, that acknowledges the legitimacy of user control while preventing documented harms, and that can rise above geopolitical biases to focus on actual impacts on human wellbeing.

The current discourse around AI safety often obscures more than it reveals. By recognizing and moving past these contradictions, we can develop more effective and equitable approaches to ensuring AI systems benefit rather than harm society.

Hegseth Gross Violation of Efficiency Orders: Millions Wasted on Frivolous Base Name Change

It would be funny if it weren’t so blatantly stupid. Imagine a childish racist troll put in charge of the U.S. military, one who defies orders, let alone common decency, and you get:

Secretary of Defense Pete Hegseth signed a memorandum renaming Fort Liberty in North Carolina to Fort Bragg. The new name pays tribute… [to those who will profit in repeatedly changing base names instead of fixing actual base needs].

We’re talking about a base in need of actual fixes, not performative ones, starting with solutions to widely reported problems such as black mold.

Last year at Fort Bragg, roughly 1,100 soldiers had to be relocated to new living quarters due to spiraling mold issues in the damp climate. Twelve barracks at Fort Bragg were set to be demolished this year, five years ahead of schedule, and an additional five are set to be remodeled.

Garrison officials at Bragg scrambled over the summer to accommodate the displaced soldiers after the barracks were effectively condemned. Roughly half were given a housing allowance, a move typically reserved for higher-ranking or married troops.

Instead Hegseth, famously sidelined by the military over his white supremacist tattoos, has now focused the entire nation’s time and money on… an expensive and demoralizing reversal, replacing the letters on signs yet again.

The service spent … more than $2 million for Fort Liberty [renaming it from racist Confederate traitor Bragg]

Putting the military into full retreat, making base names anti-American again, is a direct violation of the White House efficiency mandates and should get Hegseth immediately dismissed for gross insubordination.

As some have pointed out, the Liberty name may have been a tactical weakness and a concession, blocking a proper tribute to a soldier: the base name was weakened intentionally so that a case could be made later for a different tribute. That gives too much credit, as if white supremacists in the armed forces had a long-running plan to one day remove Liberty, literally and metaphorically.

Even so, the change to Liberty was lawful and settled. On top of that, there have been orders from the top demanding absolute efficiency. This rushed move to turn a long racist plan into an immediate one, wasting millions, must be the most inefficient order imaginable.

And the mold. What about the mold?