A Simple Reason Why Tesla Keeps Crashing into Police Cars

Tesla Deaths: 207
Tesla Autopilot Deaths: 10
Ford Pinto Deaths: 27

Today at “Tesla AI Day” the Tesla engineering team said the following from the main stage, and I quote:

…we haven’t done too much continuous learning. We train the system once, fine tune it a few times and that sort of goes into the car. We need something stable that we can evaluate extensively and then we think that that is good and that goes into cars. So we don’t do too much learning on the spot or continuous learning…

That’s a huge reveal by Tesla, since it proves a RAND report right.

Autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles—an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use.

On this trajectory it could take centuries before Tesla would achieve even a basic level of driving competency.

Think especially about Tesla saying “we need something stable,” because any hardware or software change in the “learning” system sets them backwards (and it definitely does).

The hundreds-of-years estimate gets longer and longer the more they push “newness” onto the road. 500 years is not an unreasonable estimate for when to expect improvement…
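To show where these time horizons come from, here is a minimal back-of-the-envelope sketch in Python (my own illustration, not the RAND model). It uses the statistical “rule of three”: with zero fatalities observed over N miles, the 95% upper confidence bound on the fatality rate is roughly 3/N. The human baseline of roughly 1.09 deaths per 100 million US vehicle miles is the commonly cited figure; the per-version mileage number is purely an assumption.

```python
# Back-of-the-envelope sketch (illustration only, not the RAND model).
# Rule of three: zero fatalities over N miles bounds the fatality rate at ~3/N (95% confidence),
# so bounding the rate at r requires roughly N = 3 / r failure-free miles.

HUMAN_FATALITY_RATE = 1.09e-8      # ~1.09 deaths per 100 million vehicle miles (US baseline)
MILES_PER_STABLE_VERSION = 2.0e8   # assumed miles driven on one unchanged hw/sw configuration

# Failure-free miles needed just to claim parity with human drivers.
miles_for_parity = 3 / HUMAN_FATALITY_RATE

print(f"Failure-free miles needed for parity: {miles_for_parity:,.0f}")  # ~275 million
print(f"Multiples of one stable version's mileage: {miles_for_parity / MILES_PER_STABLE_VERSION:.1f}")

# Demonstrating an actual *improvement* over humans (the RAND headline case) pushes the
# requirement into the billions of miles, and every hardware or software change shipped
# to the fleet resets the count for the configuration being evaluated.
```

The catch is in that last comment: since Tesla admits each “fine tune” that goes into the car needs to be evaluated as a stable system, the mileage clock keeps restarting.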

This brings me all the way back to the first fatality caused by Tesla “autopilot” in January 2016.

A car traveling at high speed drove without any braking straight into the back of a high-visibility service vehicle with flashing safety lights.

Source: The Sun

Did you hear about it?

I have to ask because you should be aware of the “experts” who say patently false things like this:

…when the first person was killed using the Tesla autopilot in Florida, the truck [hit by the Tesla] was perpendicular to the direction of motion. The training did not have those images at all, so therefore the pattern matcher did not recognize that pattern.

No, Florida was not the first crash.

No, the first fatal “autopilot” crash was NOT perpendicular motion. It was running into the back of a safety vehicle despite flashing lights.

No, the series of fatal perpendicular-motion crashes is not about a failure to recognize a pattern or a failure of training (at least three such crashes so far, all using different hardware and software).

In fact, the Florida crash was fatal *because* Tesla recognized a pattern (it thought it saw an overhead sign common on California highways near Tesla HQ).

After nine seconds at 70mph on a downhill stretch, the Tesla in Florida shifted lanes left to right in an attempt to drive under the trailer, between its wheels. That is obviously *because* of pattern recognition.

Let me put it this way: people probably would have survived their Tesla crashes if the car had simply been blind, or if they had “autopilot” disabled.

Getting little details about Tesla like these right is super important, because more and more crashes have looked very much like the actual first one FIVE YEARS AGO.

Talk about “we don’t do too much learning”!

Bottom line: despite the commonly stated fallacy of “learning” or “training”, the more Teslas on the road, the more crashes.

Thus, DO NOT believe anyone (expert or otherwise) who says “learning” is the answer to safety, unless by learning they mean regulators start learning how to hold Elon Musk accountable for lying about safety.

Back to the article about the January 2016 crash: it also makes a glaring error.

Company founder Elon Musk said the firm was in the process of making improvements to its auto pilot system aimed at dramatically reducing the number of crashes blighting the model S.

Elon Musk did not found the company. Technically, Martin Eberhard and Marc Tarpenning founded it in July 2003.

Musk joined them and invested his $6 million, basically stolen from PayPal, then used lies and exaggerations to push out the actual founders.

That’s an important point because it sets some context for why Tesla hasn’t improved, lacking its original idea people.

Fast forward to today and the number of Tesla “autopilot” crashes reported in the news has only increased dramatically, the complete opposite of that “process” Musk claimed he was launching.

Federal safety regulators are investigating at least 11 accidents involving Tesla cars using Autopilot or other self-driving features that crashed into emergency vehicles when coming upon the scene of an earlier crash.

In 2021, federal safety regulators are investigating a series of crashes under the very same conditions as the first crash in 2016.

See something fishy in that FIVE YEAR timeline?

We have seen a lot of accidents and very little investigation for a system that is supposedly “learning”.

The difference now seems to be that regulators realized what I’ve been saying here and in my presentations (very out loud) since 2016: Tesla’s negligence means more people being killed who aren’t inside a Tesla.

…“buyer beware” defense has been voiced loudly by Tesla’s defenders after previous crashes have grabbed headlines, such as one in Texas earlier this year in which two individuals inside a Tesla were incinerated (neither was reportedly in the driver’s seat). It’s impossible to claim consent exists for a first responder—or for anyone else struck by a Tesla driver.

This is a big deal, because it breaks with auto safety’s traditional orientation toward vehicle occupants.

So I was right all this time?

The data shows, as I predicted, that Tesla isn’t actually improving at avoiding crashes over time and is just getting worse and worse.

The more cars they put on the road, the more tragedies. That is how scams work, which is also how I was able to reliably predict where we are now.

I have so many sad examples. Tesla data is not pretty.

Tesla’s Driver Fatality Rate is more than Triple that of Luxury Cars (and likely even higher)

And the causality is ugly too.

Autosteer is actually associated with an increase in the odds ratio of airbag deployment by more than a factor of 2.4

Tesla’s “autopilot” causes more crashes.
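For anyone unfamiliar with the statistic being quoted: an odds ratio compares the odds of an outcome (airbag deployment) when a factor is present (Autosteer engaged) against the odds when it is absent. Here is a quick sketch of how a figure like 2.4 is computed; the counts below are invented for illustration and are not the actual crash data.

```python
# Illustration only: how an odds ratio of ~2.4 is computed from a 2x2 table.
# These counts are made up for the example; they are NOT the real crash data.

deployed_with_autosteer, not_deployed_with_autosteer = 24, 1000
deployed_without_autosteer, not_deployed_without_autosteer = 10, 1000

odds_with = deployed_with_autosteer / not_deployed_with_autosteer          # odds of deployment, Autosteer on
odds_without = deployed_without_autosteer / not_deployed_without_autosteer # odds of deployment, Autosteer off

odds_ratio = odds_with / odds_without
print(f"Odds ratio: {odds_ratio:.1f}")  # 2.4 means the odds of deployment are 2.4x higher with Autosteer
```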

If you want to prove me (really the data) wrong, here’s an open job for you.

Source: LinkedIn

The explanation for such a devolution in the data is unfortunately rather simple.

Tesla “learning” is untrue. “Autopilot” is untrue. It’s all been a scam.

We knew this as soon as their “1.0” became “2.0” and the “autopilot” capabilities were far worse, even crossing double-yellow lines.

In reality, the “new version” reflected Tesla going backwards and losing a serious ethics dispute with engineers building the actual “autopilot”:

The head of driver-assistance system maker MobilEye has said that the company ended its relationship with Tesla because the firm is “pushing the envelope in terms of safety.” […] Given how instrumental MobilEye was in developing Autopilot, it’s a surprise to see Shashua effectively talk down his company’s product.

Talk down? I’m sorry, that’s Shashua calling Elon Musk a dangerous exaggerator and liar. It should not have been any surprise.

There is no rational reason to believe any update you are getting from Tesla will make anyone safer; it could actually be putting us all at greater risk of injury or death.

A new IEEE paper explains:

These results dramatically illustrate that testing a single car, or even a single version of deployed software, is not likely to reveal serious deficiencies. Waiting until after new autonomous software has been deployed to find flaws can be deadly and can be avoided by adaptable regulatory processes. The recent series of fatal Tesla crashes underscores this issue. It may be that any transportation system (or any safety-critical system) with embedded artificial intelligence should undergo a much more stringent certification process across numerous platforms and software versions before it should be released for widespread deployment.

Again, after the first fatality in January 2016 the talk track was “dramatically reducing the number of crashes” and we’ve seen anything but that.

In one case a Tesla was pulled over by police because they didn’t see anyone in the driver’s seat. After being pulled over, the Tesla started moving again and crashed into the stopped police car.

The lies and inability to really learn are a continuous disappointment, of course, to those who want to believe machines are magic and things naturally get better over time if large amounts of money and ego are involved.

…it’s obviously a very hard problem, and no one is expecting Tesla to solve it any time soon. That’s why it’s so confusing that Musk continues to make promises the company can’t keep. Perhaps it’s meant to create hype and anticipation, but really it’s an unforced error that does nothing but erode trust and credibility.

Even Tesla’s head of autopilot software, CJ Moore, has made it clear that Musk’s claim about self-driving capabilities “does not match engineering reality.” In addition, in a memo to California’s Department of Motor Vehicles, Tesla’s general counsel said that “neither Autopilot nor FSD Capability is an autonomous system, and currently no comprising feature, whether singularly or collectively, is autonomous or makes our vehicles autonomous.”

If you have studied dictatorships run by alleged serial liars like Elon Musk, you might recognize all the hallmarks of why things fall apart instead of improving. Federal regulation is long overdue as many deaths could have been prevented.

Let me conclude by saying we’ve had the answers to these problems for centuries. For example, we need to stop calling the basic rules of the road some kind of edge case.

The rule is to stop at a stop sign, yet companies like Tesla fail at this and instead try to use propaganda to convince people that their failures are edge cases because we don’t see them very often.

An edge case is not defined solely by frequency. You can’t drive if you can’t stop at a stop sign, regardless of how far away it is from your starting point.

You could start using a camera at birth to record everything, such that a machine has seen everything you have. Teenage drivers today could have been doing this for the past 10 years (the same age as Tesla), but that still wouldn’t really help, as we saw when the first “autopilot” fatality in January 2016 came from driving straight into a high-visibility service vehicle with flashing lights.

Learning is not the problem, given Tesla has had 5 years of bazillions of combined miles and still can’t tell a police car from an open road.

In other words, it’s not about whether the car has or hasn’t seen all those things; it’s that driving means training in a way that doesn’t require endlessly expanding the data set.

Wollstonecraft gave us this philosophy in the 1790s when she argued that women and black people are equal to white men. She could clearly see how to reason about avoiding harm without first collecting the whole data set.

Tesla keeps crashing in much the same way that a racist white police officer won’t stop assaulting innocent black people. A racist does NOT have to meet every black person in the world to stop blindly causing harm, and yet some racists never learn no matter how many people they meet.

Let’s be honest here. There is NO necessity to “see all there is to see” to be a safe driver.

This has been debunked since at least the late 1700s by philosophers, and repeatedly proven with basic science. Nobody in their right mind believes you have to sample every molecule in the world to predict whether it’s going to rain.

I say that in all sincerity as it’s a truism of science. Yet the “driverless” industry has been pouring money into dead-end work, trying to prove science wrong by paying people $1/hour to classify every raindrop.

“For example, if it’s drizzling, all the cameras are so strong that they can capture the tiniest water drop in the atmosphere.” In a category called “atmospherics,” workers may be asked to label each individual drop of water so the cars don’t mistake them for obstacles.

Such a mindset is an artifact of people trying to solve a problem the wrong way. And it should be increasingly obvious it is not how the problem actually will be solved.

If it were true that a good driver had to learn a large number of rare events to become experienced, it would not be possible for any human to be classified as a good driver. Humans barely reach a million miles in a lifetime, yet can reach “experienced” status before 100,000 miles. Human drivers literally prove that becoming a good driver does NOT require experience with a large number of rare events.
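To put rough numbers on that argument (my own illustration, with assumed rates): if a “rare event” happens on the order of once per ten million miles, a driver who is considered experienced at 100,000 miles has, in expectation, seen essentially none of them.

```python
# Rough illustration with assumed numbers; the rare-event rate is an assumption for the example.

RARE_EVENT_RATE = 1 / 10_000_000   # assume one rare event per 10 million miles driven
EXPERIENCED_MILES = 100_000        # mileage at which human drivers are commonly called experienced
LIFETIME_MILES = 1_000_000         # rough upper bound on a human driving lifetime

expected_by_experienced = RARE_EVENT_RATE * EXPERIENCED_MILES   # 0.01 events
expected_in_lifetime = RARE_EVENT_RATE * LIFETIME_MILES         # 0.10 events

print(f"Expected rare events by 'experienced' status: {expected_by_experienced:.2f}")
print(f"Expected rare events in a full driving lifetime: {expected_in_lifetime:.2f}")
# Humans become good drivers having seen roughly zero of these events, which is the point.
```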

Again, this has been known and proven philosophy since the 1700s. Done and dusted.

More proof is that Waymo claims to have around 20 million “autonomous” miles yet can’t deny they are nowhere near ready for wide deployment.

The way things are being done (especially by Tesla, yet also by basically everyone else) is not actually yielding utility in transportation safety (or efficiency). As a basic exercise in economics, there are FAR more useful ways to spend the enormous amount of money, talent, resources, etc. being devoted to such a broken status quo. Instead of Tesla, can you imagine if all that money had been spent on better rail?


Update August 30: More statistics in a new post, as I explore why Toyota cancelled their autonomous driving project after just one injury.
