Are Tesla Cars Being Trained to Cause More Crashes?

TRIGGER WARNING: This post has important ideas and may cause the reader to think deeply about the Sherman-like doctrine of property destruction to save lives. Those easily offended by concepts like human rights should perhaps instead read the American Edition.


Let us say, hypothetically, that Tesla tells everyone it wants to improve safety while secretly it aims to make roads far less safe.

Here is further food for thought. Researchers have proven that AI developed under a pretense of safety improvement can easily be flipped to do the opposite and cause massive harm. They called their report a “dual-use” discovery, as if any common tool like a chef’s knife or a hammer is “dual-use” once someone figures out how to weaponize it. Is that really a second discovery, when the only other use option is the worst one?

According to The Verge, these researchers took AI models intended to predict toxicity, usually billed as a helpful risk-prevention step, and instead trained them to increase toxicity.

It took less than six hours for drug-developing AI to invent 40,000 potentially lethal molecules. Presenting at a biological arms control conference, researchers put AI normally used to search for helpful drugs into a kind of “bad actor” mode to show how easily it could be abused.

Potentially lethal. Theoretically dead.
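To make the mechanism concrete, here is a minimal sketch of the “flip”, under toy assumptions: the `predicted_toxicity` scorer, the generator and the candidate “molecules” below are all made up for illustration, not the researchers’ actual pipeline. The point is only that the same generate-and-score loop becomes a harm generator by inverting one objective.

```python
import random

def predicted_toxicity(candidate):
    """Stand-in for a learned toxicity model: returns a score in [0, 1]."""
    # Hypothetical scorer; a real system would run a trained predictor here.
    return sum(candidate) / len(candidate)

def generate_candidates(n, features=8):
    """Stand-in for a generative model proposing molecule feature vectors."""
    return [[random.random() for _ in range(features)] for _ in range(n)]

def select(candidates, direction, top_k=10):
    """direction=-1 screens toxicity out (safety use); direction=+1 seeks it out (abuse)."""
    ranked = sorted(candidates, key=lambda c: direction * predicted_toxicity(c), reverse=True)
    return ranked[:top_k]

pool = generate_candidates(1000)
safe_picks = select(pool, direction=-1)   # the advertised use: avoid toxic designs
toxic_picks = select(pool, direction=+1)  # the "dual-use" flip: same model, inverted goal

print("least toxic pick:", round(predicted_toxicity(safe_picks[0]), 3))
print("most toxic pick: ", round(predicted_toxicity(toxic_picks[0]), 3))
```

In other words, the “second use” is not a new invention so much as a single inverted sign sitting on top of the same model.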

The use-case dilemma of hacking “intelligence” (processed data) is a lot more complicated than the usual debate about how hunting rifles differ from military assault rifles, or whether flamethrowers have any practical purpose at all.

One reason it is more complicated is that America generally has been desensitized to high fatality rates from harmful applications of automation (e.g. after her IP was stolen, the cotton engine service model, or ‘gin, went from being Caty Greene’s abolitionist invention to a bogus justification for expanding slavery all the way to the Civil War). Car crashes often are treated as “individual” decisions under unique risk conditions, rather than seen as the systemic failure of a society revolving around profit from the criminalization of poverty.

Imagine asking questions like: what is the purpose of the data related to a tool’s use, how is it being operated or repurposed, and can a systemic failure be proven by examining the tool from origin to application (e.g. lawn darts or the infamous “Audi pedal”)? Is there any proof of a failsafe, or of safety at all?

Lots of logic puzzles come up in threat models, which most people are nowhere near prepared to answer at the tree level, let alone the forest level… perhaps putting us all in immediate fire danger without much warning.

Despite the complexity, such problems are increasingly easy to express in real terms. Ten years ago when I talked about this, audiences didn’t seem to digest my warnings. Today, people immediately understand exactly what I mean by a single algorithm that controls millions of cars being turned, all at once, into a geographically dispersed “bad actor” swarm.
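To make that concrete, here is a toy sketch, not any automaker’s actual architecture: the `Policy` object, the `braking_bias` knob and the scaled-down fleet below are all hypothetical, but they show why one shared, remotely mutable algorithm is a single point of failure for every car at once.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    version: str
    braking_bias: float  # hypothetical knob: positive brakes earlier, negative brakes later

class Vehicle:
    def __init__(self, vin, policy):
        self.vin = vin
        self.policy = policy  # every car references the same object: one algorithm, whole fleet

    def decide(self, obstacle_distance_m):
        # Toy decision rule: brake if the obstacle falls inside the (biased) threshold.
        return "BRAKE" if obstacle_distance_m < 30 + self.policy.braking_bias else "CONTINUE"

# A geographically dispersed fleet (scaled down here) all running the identical policy.
shared = Policy(version="2024.1", braking_bias=+5.0)
fleet = [Vehicle(vin=f"VIN{i:05d}", policy=shared) for i in range(100_000)]

print(fleet[0].decide(32), fleet[-1].decide(32))   # BRAKE BRAKE

# One malicious or careless over-the-air "update" changes behavior everywhere, simultaneously.
shared.braking_bias = -25.0
print(fleet[0].decide(32), fleet[-1].decide(32))   # CONTINUE CONTINUE
```

The risk being described is not any single car’s behavior but the shared, remotely writable control surface above all of them.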

Tesla.

Where is this notoriously bad actor with regard to transparency, and even proof, on such issues? Can they prove their cars are not, and cannot be, trained as a country-wide loitering munition to cause mass casualties?

What if the uniquely bad death tolls already mounting are the result of Tesla developing AI since 2016 (whether by ignoring, allowing, enabling or performing) such that its crashes have been increasing in volume, variety and velocity through an organized, intentional disregard for law and order?

ChatGPT, what do you think, given that Tesla now claims that you were its creation?


One thought on “Are Tesla Cars Being Trained to Cause More Crashes?”

  1. All other automakers are cautious and test things before they are released, either because they care or because they don’t want to be sued out of existence. They all use the proper sensors to minimize risk.

    Tesla is forging another path by using machine learning to ‘teach’ the computer how to drive a car. This is experimental. They are also omitting LiDAR, which all other automakers use, and relying on vision only, which is known to be an unsafe and risky approach.

    Essentially, they are testing this on public roads with actual buyers. They even duplicitously call it a beta (unfinished) version for general use. All other automakers use private or closed tracks, or, like Waymo and Cruise, get permits for limited testing on public roads.

    IMO the FTC and NHTSA should force Tesla to stop selling any vehicles with this tech and force them to recall all older vehicles. If this is too expensive or forces them out of business, then so be it. That would be better for public safety.

    Now, with OTA ‘updates’, Tesla can keep going, either by lying and intentionally misleading govt agencies, or through incompetence, trying again and again to get it right at the expense of public safety.

    Musk is a reckless danger. He is rich and can fight any legal challenge by hiring huge armies of cheap and dumb lawyers. The govt has more resources and authority but lacks the will and competence to fight this.
