The fact that Tesla doesn't have a process for making crash data available to investigators is pretty indefensible IMO, given they're retaining that data for their own analysis. Would be one thing if they didn't save the data for privacy reasons, but if they have it, and there's a valid subpoena, they obviously need to hand it over.
For context though, note that this crash occurred because the driver was speeding, using 2019 autopilot (not FSD) on a city street (where it wasn't designed to be used), bending down to pick up a phone he dropped on the floor, and had his foot on the gas overriding the automatic braking: https://electrek.co/2025/08/01/tesla-tsla-is-found-liable-in... The crash itself was certainly not Tesla's fault, so I'm not sure why they were stonewalling. I think there's a good chance this was just plain old incompetence, not malice.
The article explains that the crash snapshot shows:
- hands off wheel
- autosteer was still steering despite a geofence flag
- no take-over warnings, despite approaching a T intersection at speed
Letting people use autopilot in unsafe conditions is contributory negligence. Given their marketing, that's more than worth 33% of the fault.
That they hid this data tells me everything I need to know about their approach to safety. Although nothing really new considering how publicly deceitful Musk is about his fancy cruise-control.
> The meme of Hanlon's Razor needs to die. Incompetence from a position of power is malice, period.

That seems contrary to my experience. Large, powerful bureaucracies are often highly incompetent in ways that clearly work against their own interests.
> A bit more nuanced version is that incompetence from a position of power is a choice.

I guess you could go even more nuanced and say that sometimes incompetence from a position of power is a choice, and I would agree with that, but now the statement seems so watered down as to be almost meaningless.
I feel like this is getting far too abstract, to the point that you're actively losing sight of a very real, very concrete, and very specific set of actions they took which don't appear to have any credible and innocent motive, but which happen to align perfectly with what, by all reasonable definitions, would be considered malicious.
Trillion-dollar companies run by egomaniacal billionaires do not need you rushing to your keyboard to make excuses for them.
A corporation can hire people and put processes in place to arbitrarily minimize (or not) the chance of a mistake in areas that matter to them. In this case, they did just that; only the thing being optimized for was “not giving data to the authorities”.
The evidence of this trial does not support an “oopsie poopsie we messed up so sowwy” interpretation of events. Tesla’s paid representatives went out of their way—repeatedly—to lie, mislead, and withhold evidence in order to avoid scrutiny. Fuck them and everyone involved with that.
But Tesla has sufficient power that they do not have the luxury of pleading incompetence when things like this happen.
This is both because such incompetence costs people's lives, and because they have enough money that they could definitely hire more or better people to re-check and add redundant safety features into their products.
The problem is, they do not want any accountability for claiming that their cars are "self-driving", or for any of their other errors or willful endangerment of the public.
> Update: Tesla’s lawyers sent us the following comment about the verdict:
> Today’s verdict is wrong and only works to set back automotive safety and jeopardize Tesla’s and the entire industry’s efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial. Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator – which overrode Autopilot – as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash. This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver – from day one – admitted and accepted responsibility.
---
Personally, I don't understand how people can possibly be happy with such verdicts.
In 2025, DJI got rid of their geofences as well, because it's the operator's responsibility to control their equipment. IIRC, DJI had the FAA's support in removing the geofencing limitations, with the FAA expressly confirming that geofencing is not mandated.
These sorts of verdicts that blame the manufacturer for operator errors, are exactly why we can't have nice things.
It's why we get WiFi and 5G radios, and boot loaders, that are binary-locked, with no source code availability, and which cannot be used with BSD or Linux easily, and why it's not possible to override anything anywhere anymore.
Even as a pedestrian, I'm glad that Tesla is fighting the good fight here. Because next thing I know, these courts will cause the phone manufacturers to disable your phone if you're walking next to a highway.
> Perhaps, but does it hurt more or less than getting life-changing injuries and your partner killed by a Tesla?

So, we have to ignore the entire safety record for the entire technology just because one operator has failed to follow the instructions?
This is especially the case for something that was in its infancy back in 2019 when this crash happened.
And you know what we have in 2025 because of those restrictions being enforced since then?
In 2025, Teslas nag drivers so much for not paying attention to the road that drivers no longer keep the much safer versions of autopilot engaged at all when looking for their phones.
Instead, now, because the issue is "fixed", Tesla drivers simply do the same thing that drivers of any other car do in that situation.
They disable autopilot first, and only then stop paying attention to the road, looking for their phone.
How's that safer?
We're precisely less safe because of these regulatory requirements.
(And, to add insult to injury, this court is now using the 20/20 hindsight of these warnings subsequently being implemented as evidence of Tesla's wrongdoing in 2019, at a time before anything like that was thought to be possible? Even though, now that these warnings have been implemented, we already have evidence that the nags themselves make everybody less safe, since autopilot is simply turned off when you need to stop paying attention to the road?)
What safety record are we ignoring? Can you please cite some scientifically rigorous and statistically sound data, evidence, and analysis?
Or are you talking about self-published numbers from the company that has been proven to withhold, lie, and misdirect even in official police investigations, subpoenas, and trials, let alone when it is not actively illegal to do so?
Are we talking about numbers with a degree of scientific rigor unfit for a middle school science fair, let alone the minimum standard of rigor that members of their team had to meet to earn their degrees, yet somehow fail to apply when describing systems that are literally responsible for the life and death of humans?
And would the driver’s actions have been different if they had understood that? Was their lack of understanding coincidence, correlated with their Tesla ownership by no fault of Tesla, or deliberately engineered by Tesla’s marketing approach?
An average of 100 people die every day in the US due to traffic accidents, many of which would have been prevented by Tesla-like software. You're obsessing about the wrong side of the equation.
You know, I've talked to a whole bunch of people who actually own Teslas, who actually work in tech, and most of them are completely unaware of any of these autopilot features whatsoever.
Most people are actually very dismissive of autopilot, and are completely misinformed of the benefits / drawbacks / differences of "FSD" versus "Autopilot".
Most are completely unaware of the improvements in v12 or v13, the differences between HW3 and HW4, which one they have, the fact that "autopilot" is free, or the circumstances under which autopilot can be used, etc.
I talked to some guy last year who was actually paying $199/mo for FSD v12, before the price drop to $99/mo, and swearing how great it was, yet he had never tried the parking feature, even though it had been released several months prior. He's a software engineer. That's just one example.
So, if anything, Tesla's marketing is nowhere near as successful as these naysayers would have you believe. Because the vast majority of Tesla's customers are actually far behind on autopilot or FSD buy-in, and are NOT aware of the progress.
The great thing is that we live in a society controlled by laws, and corporations can't get away with testing everything they want on public roads. Your freedom or desire for “responsibility” doesn't negate others' rights.
I own a Tesla, and here's my take on the biggest software issue:
Normal consumers don't understand the difference between "Autopilot" and "FSD".
FSD will stop at intersections/lights etc - Autopilot is basically just cruise control and should generally only be used on highways.
They're activated in the same manner (FSD replaces Autopilot if you pay for the upgrade or $99/month subscription), and again for "normal" consumers it's not always entirely clear.
A friend of mine rented a Tesla recently and was in for a surprise when the vehicle did not automatically stop at intersections on Autopilot. He said the previous one he rented had FSD enabled, and he didn't understand the difference.
IMO Tesla just needs to phase out 2019 AP entirely and just give everyone some version of FSD (even if it's limited), or geofence AP to highways only.
The fundamental problem here is that the way it's presented caused the driver to trust it in a fashion he should not have. The jury slammed Tesla for overpromising, and for trying to hide the evidence.
The article claims that the software should have been geofenced out of that area but Tesla failed to do that, and that the software should have triggered collision warnings but did not. So there were things Tesla wanted to hide.
I don't necessarily disagree, but I personally find these "but you theoretically could have done even more to prevent this"-type arguments to be a little dubious in cases where the harm was caused primarily by operator negligence.
I do like the idea of incentivizing companies to take all reasonable steps to protect people from shooting themselves in the foot, but what counts as "reasonable" is also pretty subjective, and liability for having a different opinion about what's "reasonable" seems to me to be a little capricious.
For example, the system did have a mechanism for reacting to potential collisions. The vehicle operator overrode it by pushing the gas pedal. But the jury still thinks Tesla is to blame because they didn't also program an obnoxious alarm to go off in that situation? I suppose that might have been helpful in this particular situation. But exactly how far should they legally have to go in order to not be liable for someone else's stupidity?
>I don't necessarily disagree, but I personally find these "but you theoretically could have done even more to prevent this"-type arguments to be a little dubious in cases where the harm was caused primarily by operator negligence.
The article says that some government agency demanded that Tesla actually geofence the areas Tesla claims its software is incapable of handling. I am not a Tesla owner and haven't read the fine print in the manual, but does Tesla reserve the right to also not sound an alarm when the car is heading at speed straight into another car while the driver doesn't have their hands on the wheel? That sounds bad: the driver is not steering, the car is driving in an area it is incapable of handling, it is heading into an obstacle, and the alarm is not sounding (though from the article it seemed like this was a glitch they were trying to hide, and that this was not supposed to happen).
Anyway, Tesla was forced to show the data, and they did try to hide it, so even if fanboys attempt to put the blame 100% on the driver, the jury and Tesla's actions tell us that the software did not function as advertised.
As long as there is no criminal liability for people doing this, nothing will change. This is pocket change for a company, a rounding error, as Tesla's valuation has grown significantly since this happened in 2019, six years ago.
I think OP's point is - the fine is not large enough to impact Tesla's stock price - which is all Tesla cares about.
It also didn't really seem to impact Tesla's decision to keep pushing Full Self Driving and Robotaxi despite it having obvious severe flaws (because Tesla sees this rollout as something holding up its stock price).
This seems pretty dumb of Tesla, as I find it rather moot to the conclusion of fault in the accident. The obstruction of justice is damning.
Autopilot is cruise control. When you understand this, claiming that Tesla is partially at fault here does not match the existing expectations of other driver assistance tech. Just because Tesla has the capability of disabling it doesn't mean they have to.
This all comes down to an interpretation of marketing speak. If you believe "autopilot" is misleading you'd agree with the jury here, if you don't you wouldn't. I'm no lawyer, and don't know the full scope of requirements for autopilot like features, but it seems that Tesla is subject to unfair treatment here given the amount of warnings you have to completely ignore and take no responsibility for. I've never seen such clear warnings on any other car with similar capabilities. I can't help but think there's maybe some politically driven bias here and I say that as a liberal.
Happy to be convinced otherwise. I do drive a Tesla, so there's that.
> This all comes down to an interpretation of marketing speak. If you believe "autopilot" is misleading you'd agree with the jury here, if you don't you wouldn't. I'm no lawyer, and don't know the full scope of requirements for autopilot like features, but it seems that Tesla is subject to unfair treatment here given the amount of warnings you have to completely ignore and take no responsibility for. I've never seen such clear warnings on any other car with similar capabilities. I can't help but think there's maybe some politically driven bias here and I say that as a liberal.
And that's exactly why the law is supposed to have a Reasonable Person Standard: https://en.wikipedia.org/wiki/Reasonable_person
When the majority of Tesla's owners are completely unaware of the viability of autopilot even in 2025, how exactly does it make any sense to blame the marketing when someone was so trusting of the unproven technology back in 2019? Especially given so many reports of so many people being saved by said technology in other circumstances?
I imagine these things will get better when courts can no longer find jurors who are unfamiliar with the attention-monitoring nags that Teslas are famous for.
Do you think Tesla spends more time and money on making their warnings convincing, or making their marketing convincing? If a person is hearing two conflicting messages from the same group of people, they'll have to pick one, and it shouldn't be surprising if they choose to believe the one that they heard first and that was designed by professionals to be persuasive.
In other words, if you bought the car because you kept hearing the company say "this thing drives itself", you're probably going to believe that over the same company putting a "keep your eyes on the road" popup on the screen.
Of course other companies have warnings that people ignore, but they don't have extremely successful marketing campaigns that encourage people to ignore those warnings. That's the difference here.
I might challenge the claim that "autopilot is cruise control." To me, Tesla is marketing the feature much differently. Either way, looking up the definitions of each:
"Auto Pilot: a device for keeping an aircraft or other vehicle on a set course without the intervention of the pilot."
"Cruise Control: an electronic device in a motor vehicle that can be switched on to maintain a selected constant speed without the use of the accelerator."
You are right, but unfortunately you are right in the least useful way, which is technically right.
That is definitely what auto pilot means in the aeronautical and maritime sphere.
But a lot of the general public has a murky understanding of how an autopilot on a ship or a plane works. So a lot of them, probably the majority, will look at the meaning of those two words and land on "autopilot means automatic pilot", which basically ends up being self-driving.
Sure, in a perfect world they would look up what the term means in the sphere they do not know and use it correctly, but that is not the world we live in.
We do not get the general public we want; we have to live with the one we got.
In both cases, they are driver assistance. A pilot is responsible and must monitor an autopilot system in a plane. We license drivers and pilots and the responsibility is placed on them to understand the technology before using it and putting themselves and others at risk.
Would Boeing or John Deere be responsible for marketing language, or just the instruction manual? We know the latter is true. Is there any evidence of the former? Intuitively I would say it's unlikely we'd blame Boeing if a pilot was misled by marketing materials. Maybe that has happened, but I haven't found anything of that sort (please share if aware).
The difference is in the sheer amount of training pilots have to go through, and the regulations that they, and their employers, are required to follow. This is tremendously different from a car that throws up a couple of warnings that can be quickly and passively acknowledged prior to your using "autopilot".
> Would Boeing or John Deere be responsible for marketing language or just the instruction manual. We know the latter is true
Actually, the former is true. Courts and juries have repeatedly held that companies can be held responsible for marketing language. They are also responsible for the contents of their instruction manual. If there are inconsistencies with the marketing language it will be held against the company because users aren't expected to be able to reconcile the inconsistencies; that's the company's job. Thus, it's irrelevant that the small print in the instruction manual says something completely different from what all the marketing (and the CEO himself) says.
The "autopilot is limited" argument would have worked 20 years ago. It doesn't today. Modern autopilots are capable of maintaining speed, heading, takeoff, and landing so they're not just pilot assistance. They're literally fully capable of handling the flight from start to finish. Thus, the constant refrain that "autopilot in cars is just like autopilot in planes" actually supports the case against Tesla.
> the Center for Science in the Public Interest filed a class-action lawsuit
> The suit alleges that the marketing of the drink as a "healthful alternative" to soda is deceptive and in violation of Food and Drug Administration guidelines.
> Coca-Cola dismissed the allegations as "ridiculous," on the grounds that "no consumer could reasonably be misled into thinking Vitaminwater was a healthy beverage"
Interesting case but I'm not sure it's apples to apples.
One, you don't need a license to buy a non alcoholic beverage. Two, while the FDA has clear guidelines around marketing and labeling, I'm not aware of any regulatory body having clear guidelines around driver assistance marketing. If they did it wouldn't be controversial.
> Autopilot is cruise control. When you understand this, claiming that Tesla is partially at fault here does not match the existing expectations of other driver assistance tech.
The problem is for several years they actively targeted a customer base incapable of understanding the limitations of the mis-named system they advertised. (Those customers capable of understanding it were more likely to buy vehicles from brands who advertised more honestly.) While the current approach of targeting Nazi and friend-of-Nazi customers might eventually change the story (with its own risks and downsides, one imagines), for the time being it seems reasonable that Tesla bear some responsibility for the unsafe customer confusion they actively courted.
> The article says no warnings were issued before the crash. So which warning did the driver miss?

The one you accept when you first turn it on. And the numerous ones you ignored/neglected to read when using features without understanding them.
This is the responsibility of a licensed driver. I don't know how a Mercedes works, but if I crash one because I misused a feature clearly outlined in their user manual, Mercedes is not at fault for my negligence.
If the feature you misused wasn't a part of your driver's ed class/driver's license test, and was dangerous enough to cause a crash if used improperly, perhaps Mercedes is at fault (to whatever degree) because they didn't do enough to ensure that drivers knew how to use it. Yes, technically, the driver may be at fault because, well... they're the driver, but this isn't something that is "either/or" - both can be at fault.
Drivers need to be paying attention, but is it not possible that Tesla could also do more to make things clear?
If this is the $300M jury case, they will 100% win on appeal. The driver is clearly responsible for driving, and there's never a moment of doubt about that with Autopilot.
Wouldn’t you be shocked to learn the guy with the username that specifically goes out of their way to say how much they know doesn’t actually know a damned thing about what they are talking about.
Tesla's not being treated unfairly. It advertised Autopilot as having more capabilities than it actually did. Tesla used to sell Autopilot as fully autonomous. ("The driver is only there for legal reasons.")
And it didn't warn users about this lack of capabilities until it was forced to do so.
Those warnings you're talking about were added after this accident occurred as part of a mandated recall during the Biden administration.
> but it seems that Tesla is subject to unfair treatment here given the amount of warnings you have to completely ignore and take no responsibility for.
Lol is this for real? No amount of warnings can waive away their gross negligence. Also, the warnings are clearly completely meaningless because they result in nothing changing if they are ignored.
> Autopilot is cruise control
You're pointing to "warnings" while simultaneously saying this? Seems a bit lacking in self-awareness to think that a warning should carry the day, but calling cruise control "autopilot" is somehow irrelevant?
> I can't help but think there's maybe some politically driven bias here

Look only to yourself, Tesla driver.
> they result in nothing changing if they are ignored.
That’s not true
> Do I still need to pay attention while using Autopilot?
> … Before enabling Autopilot, you must agree to “keep your hands on the steering wheel at all times” and to always “maintain control and responsibility for your vehicle.” Once engaged, Autopilot will also deliver an escalating series of visual and audio warnings, reminding you to place your hands on the wheel if insufficient torque is applied. If you repeatedly ignore these warnings, you will be locked out from using Autopilot during that trip.
> If you repeatedly ignore the inattentive driver warnings, Autosteer will be disengaged for that trip. If you receive several ‘Forced Autopilot Disengagements’ (three times for vehicles without a cabin camera and five times for vehicles with a cabin camera), Autosteer and all features that use Autosteer will be temporarily removed for approximately one week.
And you don't respond to your own point about it being called autopilot despite it not being an autopilot
>> If you repeatedly ignore the inattentive driver warnings, Autosteer will be disengaged for that trip. If you receive several ‘Forced Autopilot Disengagements’ (three times for vehicles without a cabin camera and five times for vehicles with a cabin camera), Autosteer and all features that use Autosteer will be temporarily removed for approximately one week.
There are videos of people on autopilot without their hands on the wheel...
> And you don't respond to your own point about it being called autopilot despite it not being an autopilot
I don’t follow what you mean here? Are you confusing me with someone else?
> There are videos of people on autopilot without their hands on the wheel...
You can definitely remove your hands momentarily. I’ve seen people apply a weight to the steering wheel to fool it too. Not sure how people defeating the safety features would be Tesla’s fault.
What part of how autopilot is marketed do you find to be gross negligence?
I would ask, what is the existing definition of autopilot as defined by the FAA? Who is responsible when autopilot fails? That's the prior art here.
Additionally, if the NTSB failed to clearly define such definitions and allowances for marketing, is that the fault of Tesla or of the governing body?
I'm pretty neurotic about vehicle safety and I still don't think this clearly points to Tesla as being in the wrong with how they market these features. At best it's subjective.
"it's never the crime... its the cover up". So in this case, they are kinda screwed.
I've owned two Teslas (now a Rivian/Porsche EV owner). Hands down, Tesla has the best cruise control technology on the market. Therein lies the problem. Musk constantly markets this as self-driving. It is NOT. Not yet, at least. His mouth is way, way, way ahead of his tech.
Heck, stopping for a red light is a "feature", even though the car is perfectly capable of recognizing the light and stopping. This alone should warrant an investigation, and it's one that I, as a highly technical user, completely fell for when I first got my model 7 delivered... Ran through a red light trying out autopilot for the first time.
I'm honestly surprised there are not more of these lawsuits. I think there's a misinterpretation of the law by those defending Tesla. The system has a lot of legalese safe-guards and warnings. But the MARKETING is off. WAY OFF. and yes, users listen to marketing first.
And that ABSOLUTELY counts in a court of law. You folks would also complain about an obtuse EULA, and while this isn't completely apples to apples here, Tesla absolutely engages in dangerous marketing speak around "autopilot", eliciting a level of trust from drivers that isn't there and that they should not be encouraging.
So sorry, this isn't a political thing ( and yes, disclaimer, also a liberal).
Signed... former Tesla owner waiting for "right around the corner" self driving since 2019...
Are there clear guidelines set for labeling and marketing of these features? If not, I'm not sure how you can argue such. If it was so clearly wrong it should have been outlined by regulation, no?
> Tesla fans need to do a quick exercise in empathy right now. The way they are discussing this case, such as claiming the plaintiffs are just looking for a payout, is truly appalling.
> You should put yourself in the family’s shoes. If your daughter died in a car crash, you’d want to know exactly what happened, identify all contributing factors, and try to eliminate them to give some meaning to this tragic loss and prevent it from happening to someone else.
> It’s an entirely normal human reaction. And to make this happen in the US, you must go through the courts.
This (especially the very last point) is crucial. Whenever there is any kind of error or mistake by a big corporation, more often than not, it's immediately covered up and nothing is admitted publicly. But when a lawsuit is involved, the discovery process will lead to the facts being uncovered, including what the company knows.
I am glad that they were able to uncover this. Someone I know lived in an apartment complex that was made uninhabitable due to an obvious fault on the owner, but they didn't get a straight answer about what happened until they sued the owner and got the details in discovery, a couple of years after the incident. This is the only way to get to the facts.
What was deleted was the compiled file sent to Tesla, not all the bits of data that it came from. Nothing malicious in that, just code not leaving trash around.
There aren't enough details in the somewhat hyperbolic narrative format to really say, but if I were going to create a temporary archive of files on an embedded system for diagnostic upload, I would also delete it, because that's the nature of temporary files and nobody likes ENOSPC. If their system had deleted the inputs of the archive that would seem nefarious, but this doesn't, at first scan.
The main reasons to store data are for safety and legal purposes first, diagnostics second. Collision data are all three. They need to be prioritized above virtually everything else on the system and if your vehicle has had so many collisions that the filesystem is filled up, that's a justifiable reason to have a service visit to delete the old ones.
If I were implementing such a system (and I have), I could see myself deleting the temporary without much thought. I would still have built a way to recreate the contents of the tarball after the fact (it's been a requirement from legal every time I've scoped such a system). Tesla not only failed to do that, but avoided disclosing that any such file was transferred in the first place so that the plaintiffs wouldn't know to request it.
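To make that concrete, here is a minimal sketch of the flow being discussed. The paths, endpoint, and function name are all invented; the point is that the transport archive is disposable precisely because everything in it can be rebuilt from sources that stay on the device:

```python
import os
import tarfile
import tempfile
import urllib.request

# Hypothetical source files that live permanently on the embedded system.
# The archive below is only a transport container built from them.
SNAPSHOT_SOURCES = [
    "/var/log/edr/latest.bin",       # invented path: EDR dump
    "/var/log/telemetry/can.log",    # invented path: CAN/telemetry log
]
UPLOAD_URL = "https://example.invalid/collision-snapshot"  # placeholder endpoint

def upload_collision_snapshot() -> None:
    # Stage the tarball in a temporary file so a failed upload never
    # strands a half-written blob on the flash.
    fd, archive_path = tempfile.mkstemp(suffix=".tar.gz")
    os.close(fd)
    try:
        with tarfile.open(archive_path, "w:gz") as tar:
            for path in SNAPSHOT_SOURCES:
                tar.add(path)
        with open(archive_path, "rb") as f:
            req = urllib.request.Request(UPLOAD_URL, data=f.read(), method="POST")
            urllib.request.urlopen(req)
    finally:
        # Routine cleanup of the transport copy. The sources listed above
        # are untouched, so the exact same tarball can be regenerated later.
        os.remove(archive_path)
```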
The tar file is a convenient transport mechanism for files that presumably exist in original form elsewhere within the system. (All bets off if sources for the tar were changed afterward.)
Given storage is a finite resource, keeping the tar around after it was confirmed in the bucket would be pure waste.
I'm not sure whether you're saying that the tar should or shouldn't be deleted. Regardless, my point isn't that it was intentionally deleted. I can easily imagine someone writing a function to upload the data using something like std::tmpfile (which silently deletes the file when it's closed) without thinking about the full implications of that for the broader context the code exists in.
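For what it's worth, the Python analog of that footgun looks completely innocent when you write it: tempfile.TemporaryFile behaves much like std::tmpfile in that the contents are gone the moment the handle closes. (The send callback below is a stand-in for whatever upload call the real system uses.)

```python
import tarfile
import tempfile

def build_and_send_snapshot(sources, send):
    # TemporaryFile is unlinked immediately on most platforms and its
    # contents vanish when the handle is closed, just like std::tmpfile.
    with tempfile.TemporaryFile() as tmp:
        with tarfile.open(fileobj=tmp, mode="w:gz") as tar:
            for path in sources:
                tar.add(path)
        tmp.seek(0)
        send(tmp.read())  # 'send' is a placeholder for the upload call
    # Leaving this block, no local copy of the archive exists anymore.
    # Nothing here reads as "delete the evidence"; it's just how the
    # temp-file API works unless retention is made an explicit requirement.
```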
Even in that case though, you would still have a way to produce the data because it would have been specced in the requirements when you were thinking about the broader organizational context.
When a vehicle crash occurs, that embedded system should no longer be treating data as "temporary", but as what it now is, civil and potentially criminal evidence, and it should be preserved. To go to the effort of creating that data, uploading it to a corporate server, and then having programming that explicitly deletes that data from the source (the embedded system), certainly reads as nefarious without easily verifiable evidence to the contrary. The actions of a company that has acted this way in no fashion lends any credibility to being treated as anything other than a hostile party in court. Any investigators in the future involving a company with such a history need to act swiftly and with the immediate and heavy hand of the court behind them if they expect any degree of success.
I would love to see what you need so much disk space for after the car has crashed and the airbags have deployed. If that event fires, the car is going into the shop to have its airbags replaced at a minimum. Adding a service step to clear up /tmp after a crash is fairly straightforward.
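Concretely, the split I'd expect looks something like this sketch (none of these paths or hooks are from Tesla's actual firmware): routine telemetry gets cleaned up opportunistically, while anything written after a crash event is parked where only an explicit service step touches it.

```python
import shutil
from pathlib import Path

TMP_DIR = Path("/tmp/telemetry")             # invented: routine scratch data
PRESERVE_DIR = Path("/data/crash-evidence")  # invented: survives routine cleanup

def on_crash_detected(snapshot_files):
    # After airbag deployment the car is headed to a shop anyway, so the
    # cost of keeping these until a technician clears them is near zero.
    PRESERVE_DIR.mkdir(parents=True, exist_ok=True)
    for f in snapshot_files:
        shutil.copy2(f, PRESERVE_DIR / Path(f).name)

def routine_cleanup():
    # Opportunistic cleanup only ever touches the scratch area.
    for f in TMP_DIR.glob("*"):
        if f.is_file():
            f.unlink()

def service_clear_evidence():
    # Explicit step performed at the shop; never automatic.
    for f in PRESERVE_DIR.glob("*"):
        if f.is_file():
            f.unlink()
```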
"Their system" is a car, sold as a consumer product, which has just experienced a collision removing it indefinitely from normal operation. Reconsider your analysis.
Yes? But the article doesn't say that Tesla deleted the EDR, it says they uploaded the EDR file in an archive format, then deleted the uploaded entity. Which strikes me as totally normal.
No, the car's "black box" is the EDR, the behavior of which is regulated by federal agencies. This article is discussing ephemeral telemetry which accessed the EDR.
No, the EDR forms part of the car's "black box" – just like the FDR forms part of an aeroplane's black box. Per the article, the erased* telemetry ("collision snapshot") contained quite a bit more data than just from the EDR.
*: I can't work out from the article whether this file was erased, or just unlinked from the filesystem: they quote someone as saying the latter, but it looks like it was actually the former.
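For anyone wondering what the distinction buys you: unlinking only drops the directory entry, so the bytes typically remain on the medium until the blocks are reused, whereas erasing means destroying the contents first. A rough sketch (the single overwrite pass is illustrative only; it's not a guarantee on wear-leveled flash):

```python
import os

def unlink_only(path: str) -> None:
    # Removes the directory entry. The underlying blocks keep their data
    # until the filesystem reuses them, so forensic recovery is plausible.
    os.remove(path)

def erase_then_unlink(path: str) -> None:
    # Overwrites the contents before removing the entry. One zero pass is
    # shown for illustration; wear-leveled flash may still hold stale copies.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)
```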
So? Tesla needs to bear the responsibilities of having deleted its own potentially exculpatory evidence. Not granted an inference that it did so and therefore Tesla is innocent.
I would be fascinated to entertain arguments for how the future write life of a flash memory chip, meant for storing drive-time telemetry in a wrecked car, merits care for preservation.
Fundamentally, flash memory is a bunch of pages. Each page can be read an infinite number of times but there are quite relevant limits on how many times you can write it.
In the simplistic system lets say you have 1000 pages, 999 hold static data and the last one keeps getting a temporary file that is then erased. All wear occurs on page 1000 and it doesn't last very long.
In the better system it notes that page 1000 is accumulating a lot of writes and picks whatever page has the least writes, copies the data from that page to page 1000 and now uses the new page for all those writes. Repeat until everything's worn down. Note the extra write incurred copying the page over.
In the real world a drive with more space on it is less likely to have to resort to copying pages.
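A toy model of why the size of the free pool matters (all numbers invented):

```python
ERASE_LIMIT = 3_000  # invented: erase cycles a page survives in this toy model

def writes_before_first_worn_page(free_pages: int) -> int:
    # With ideal wear leveling, temp-file churn is shared evenly across the
    # free pool, so endurance scales with how many pages are free to rotate.
    return free_pages * ERASE_LIMIT

print(writes_before_first_worn_page(1))    # one hot page: 3,000 writes
print(writes_before_first_worn_page(100))  # 100 free pages: 300,000 writes
print(writes_before_first_worn_page(500))  # 500 free pages: 1,500,000 writes
```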
I think the goal is to save as much as you can in the interim. Holding onto X bytes of archives is more time worth of data than X bytes of uncompressed. We do that stuff all the time in finance. Stuff gets spewed off to external places and local copies get kept but archived and we simply rotate the oldest stuff out as we go. If the cleanup process is configured separately from the archiving process you can absolutely archive things just to remove them shortly thereafter.
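The same shape in a few lines, assuming an invented spool directory and retention budget: keep local copies under the budget and drop the oldest first, since the authoritative copy has already gone off-box.

```python
from pathlib import Path

ARCHIVE_DIR = Path("/data/snapshots")  # invented local spool directory
MAX_BYTES = 200 * 1024 * 1024          # invented retention budget

def rotate_oldest() -> None:
    # Oldest-first rotation: delete local archives until under budget.
    archives = sorted(ARCHIVE_DIR.glob("*.tar.gz"),
                      key=lambda p: p.stat().st_mtime)
    total = sum(p.stat().st_size for p in archives)
    while archives and total > MAX_BYTES:
        oldest = archives.pop(0)
        total -= oldest.stat().st_size
        oldest.unlink()  # local copy only; the uploaded copy is unaffected
```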
Wear increases when free space is low as there's less overall space to put the data. If you only have 500MB of free space, those blocks take the majority of write hammering until the chip fails. If there's 5000MB free, you can spread the wear.
It seems they waited for a subpoena. Would you prefer automakers send the police a notification anytime the car records a traffic infraction, or maybe they should just set up direct billing for municipalities?
That's obviously problematic. I am only commenting on the belief in a conspiracy of programmers here. The overwhelmingly most likely reason that a temporary file would be unlinked after use is that is what any experienced systems programmer always does as a matter of course.
I've made no contention, but if I had, it would be that whoever signed off on this design had better not have a PE license that they would like to keep, and we as an industry would be wise not to keep counting on our grandfather-clause "fun harmless dorks" cultural exemption now that we manufacture machines which obviously kill people. If that by you is conspiracy theory, you're welcome.
There are not PEs signing software changes for ancillary equipment in consumer vehicles.
ETA: Restate your conspiracy theory in the hypothetical case that they had used `tar | curl` instead of the intermediate archive file. Does it still seem problematic?
"Ancillary" is quite a term for crash reporting after a crash. That is, for a system responding as designed to a collision which disabled the vehicle.
I'm not going to argue with someone who throws gratuitous insults. Rejoin me outside the gutter and we'll continue, if you like. But the answer to your question is trivially yes, that is as professionally indictable, as might by now have been clarified had you sought conversation rather than - well, I suppose, rather than whatever this slander by you was meant to be. One hopes we'll see no more of it.
It's difficult for me to tell from the article, because of how much the terms are used interchangeably, but was it FSD or autosteer that was driving the car when it crashed?
My autosteer will gladly drive through red lights, stop signs, etc.
And the fact that we have telemetry at all is pretty amazing. Most car crashes there's zero telemetry. Tesla is the exception, even though they did the wrong thing here.
If this 3-year lie parade is what Tesla Inc does for killing ONE PERSON, imagine the lengths it will go to conceal MASS DEATHS it is in any part responsible for.
6 months ago it had no chance of being prosecuted. But, if the fallout between Elon and Trump is as bad as it looks from the outside, there might be justice after all.
The longer the farce goes on, the more I think the laggers in the self driving car industry are more trying to wait out regulators than actually get good enough.
That is - gamble that GOP alignment leads to regulatory capture such that the bar is lowered enough that they can declare the cars safe.
Despite what you hear from certain media voices, there are effectively no performance-based regulatory barriers in the US. You can claim any autonomy level you want at any time and aside from the small number of states that have a real permit process that rises above a rubber stamp (literally just California), regulators are reactive to headlines of your system failing, not its actual performance.
Even California's system is lax enough that you can drive a Tesla semi through it.
This is why Waymo will win. They've focused on transparency and building trust. They understand they are operating in the most physically dangerous space most people ever encounter in their day-to-day lives. That, and their technology actually works.
Tesla is comparatively a bull in a china shop. Raise your hand if you would trust Tesla over Waymo to autonomously drive your young children for 1,000 miles around a busy metro. That's what I thought.
I think it's wild that a legit article would use an image like that. Sure it's funny, but save it for social media, not a news source that's supposed to be based on facts.
I get this may be off topic, but does anyone think these cheesy, bad AI-generated headline images help the article's point of view, or heck even make it more engaging?
It just looks stupid to me in a way that makes me more likely to discount your post.
This one is such a bad photoshop too! The box's text is clearly AI-generated, with an older model, and the "Autopilot crash data" is imposed on it with an image editing tool. Really cheap looking.
I haven't done it, so I don't have any data to back it up. I suppose it works, at least in the short term; that is why so many websites, video creators, copywriters, email newsletter writers, etc. use it?
Negative, cheesy, clickbait, rage inducing etc headlines do seem to get more clicks. There is a reason why politicians spend more time trash talking opponents than talking positively about themselves. Same goes with attack ads.
Journalism? It's literally just a blog—a very successful car-influencer blog that's in the past earned 6-figure payments from Tesla itself[0], for their very successful shilling of Teslas.
Journalism is a thing of its own; blogs aren't it.
[0] https://www.thedrive.com/news/24025/electreks-editor-in-chie... ("Electrek’s Editor-in-Chief, Publisher Both Scoring $250,000 Tesla Roadsters for Free by Gaming Referral Program": "What happens to objective coverage when a free six-figure car is at play? Nothing good." (2018))
People like to think they love literature, yet they read tabloids. People like to say they want better information, yet get their news from social media.
I have no doubt a majority of people will say they despise these pictures, just like YouTube thumbnails, yet the cold numbers tell the opposite.
If you are not familiar with Electrek, this is basically their business in a nutshell. Tesla bad = clicks. Tesla good = no clicks.
It is stupid and it works, as you can clearly see in this particular example in YC
Normally a person would go to jail. But corporations just pay a fine. I think we really need to come up with punishments that are actual deterrents. Like, any time a corporation ends up killing someone through negligence, there needs to be an action that is equivalent and scaled appropriately.
Send the corporation to jail. That means it cannot conduct business for the same amount of time that we would put a person in jail.
If "corporations are people" then there should be a way to incarcerate and/or execute criminal corporations. And we the people should do it roughly as regularly as we incarcerate/execute actual human criminals.
Corporations don't actually exist, except on paper. Preserve the records, and you can un-execute them if need be. This suggests we should be executing corporations more readily than we execute humans.
The last time a major corporation was found guilty of criminal behavior (Arthur Andersen in 2002, for Enron-related conduct), the company closed immediately. This has led to problems in the audit industry, where what was once the Big 8 audit firms has shrunk, due to mergers and AA dissolving, to the point that there are barely enough firms left to independently audit each other's books, and it has made the audit market much worse.
The MCI Worldcom fraud, which broke shortly after Enron, might also have doomed AA (they were the auditor for both major frauds of 2002). MCI Worldcom filed for bankruptcy before it could be hit with criminal charges, and the SEC ended up operating MCI-W in the bankruptcy, because the fines were so large and are senior to all other debts, so they outmuscled all of the other creditors in the bankruptcy filings. Which was why they weren't hit with criminal charges- they already belonged to the Government. There hasn't been much stomach for criminal charges against a corporation ever since.
The fact that the Supreme Court has spent the past few decades making white collar crimes much harder to prosecute (including with Arthur Andersen, where they unanimously reversed the conviction in 2005) is another major factor. The Supreme Court has always been terrible, and gets far more respect than it deserves.
No need for anything so drastic, just make the fines a sufficiently large percentage of corporate free cash flow.
Make negligence unprofitable and the profit-optimizers will take care of the rest. Then 80 years later, when people get too used to "trustworthy" corporations, we can deregulate everything and repeat the cycle.
I'm not knowledgeable about this incident, so let's assume I accept your argument that it was the driver's fault (seems likely enough).
Are you also arguing that Tesla didn’t withhold data, lie, and misdirect the police in this case, as the article claims? Seems to me that Tesla tried to look as guilty as possible here.
Cruise had to shut down after less than this but, because Elon has political power over regulation now, a Tesla could drive right through a farmers market and they wouldn't have to pause operations even for an afternoon.
If the driver is 100% liable without autopilot, then they should be held 100% liable with autopilot.
The law should be clear and unambiguous in this regard until we remove the steering wheel entirely.
The penalties for being at fault with auto pilot on should be even higher, since it may as well be just as bad as driving while texting!
Like, if they have a chance, they are going to keep $300M.
That is obvious.
OP's point is - it is not having any impact on Tesla's reckless decisions.
Stock price is #1 on the list of their concerns. FCF is somewhere much lower in the list.
That is not how it’s marketed at all.
Would Boeing or John Deere be responsible for marketing language or just the instruction manual. We know the latter is true. It's there any evidence of the former? Intuitively I would say it's unlikely we'd blame Boeing if a pilot was mislead by marketing materials. Maybe that has happened but I haven't found anything of that sort (please share if aware).
Actually, the former is true. Courts and juries have repeatedly held that companies can be held responsible for marketing language. They are also responsible for the contents of their instruction manual. If there are inconsistencies with the marketing language it will be held against the company because users aren't expected to be able to reconcile the inconsistencies; that's the company's job. Thus, it's irrelevant that the small print in the instruction manual says something completely different from what all the marketing (and the CEO himself) says.
The "autopilot is limited" argument would have worked 20 years ago. It doesn't today. Modern autopilots can hold speed and heading and even handle takeoff and landing, so they're not just pilot assistance; they're capable of handling a flight from start to finish. Thus, the constant refrain that "autopilot in cars is just like autopilot in planes" actually supports the case against Tesla.
the Center for Science in the Public Interest filed a class-action lawsuit
The suit alleges that the marketing of the drink as a "healthful alternative" to soda is deceptive and in violation of Food and Drug Administration guidelines.
Coca-Cola dismissed the allegations as "ridiculous," on the grounds that "no consumer could reasonably be misled into thinking Vitaminwater was a healthy beverage"
One, you don't need a license to buy a non-alcoholic beverage. Two, while the FDA has clear guidelines around marketing and labeling, I'm not aware of any regulatory body with clear guidelines around driver-assistance marketing. If there were, this wouldn't be controversial.
The problem is for several years they actively targeted a customer base incapable of understanding the limitations of the mis-named system they advertised. (Those customers capable of understanding it were more likely to buy vehicles from brands who advertised more honestly.) While the current approach of targeting Nazi and friend-of-Nazi customers might eventually change the story (with its own risks and downsides, one imagines), for the time being it seems reasonable that Tesla bear some responsibility for the unsafe customer confusion they actively courted.
The article says no warnings were issued before the crash.
So which warning did the driver miss?
This is the responsibility of a licensed driver. I don't know how a Mercedes works, but if I crash one because I misused a feature clearly outlined in their user manual, Mercedes is not at fault for my negligence.
Drivers need to be paying attention, but is it not possible that Tesla could also do more to make things clear?
Based on (almost?) all prior cases though.
And it didn't warn users about this lack of capabilities until it was forced to do so. Those warnings you're talking about were added after this accident occurred as part of a mandated recall during the Biden administration.
Lol is this for real? No amount of warnings can waive away their gross negligence. Also, the warnings are clearly completely meaningless because they result in nothing changing if they are ignored.
> Autopilot is cruise control
You're pointing to "warnings" while simultaneously saying this? It seems a bit lacking in self-awareness to think a warning should carry the day while treating the decision to call cruise control "autopilot" as somehow irrelevant.
> I can't help but think there's maybe some politically driven bias here
Look only to yourself, Tesla driver.
That’s not true
> Do I still need to pay attention while using Autopilot?
> … Before enabling Autopilot, you must agree to “keep your hands on the steering wheel at all times” and to always “maintain control and responsibility for your vehicle.” Once engaged, Autopilot will also deliver an escalating series of visual and audio warnings, reminding you to place your hands on the wheel if insufficient torque is applied. If you repeatedly ignore these warnings, you will be locked out from using Autopilot during that trip.
> If you repeatedly ignore the inattentive driver warnings, Autosteer will be disengaged for that trip. If you receive several ‘Forced Autopilot Disengagements’ (three times for vehicles without a cabin camera and five times for vehicles with a cabin camera), Autosteer and all features that use Autosteer will be temporarily removed for approximately one week.
https://www.tesla.com/en_gb/support/autopilot
>> If you repeatedly ignore the inattentive driver warnings, Autosteer will be disengaged for that trip. If you receive several ‘Forced Autopilot Disengagements’ (three times for vehicles without a cabin camera and five times for vehicles with a cabin camera), Autosteer and all features that use Autosteer will be temporarily removed for approximately one week.
There are videos of people on autopilot without their hands on the wheel...
I don’t follow what you mean here? Are you confusing me with someone else?
> There are videos of people on autopilot without their hands on the wheel...
You can definitely remove your hands momentarily. I’ve seen people apply a weight to the steering wheel to fool it too. Not sure how people defeating the safety features would be Tesla’s fault.
What part of how autopilot is marketed do you find to be gross negligence?
I would ask, what is the existing definition of autopilot as defined by the FAA? Who is responsible when autopilot fails? That's the prior art here.
Additionally, if the NTSB failed to clearly define those terms and what marketing is allowed around them, is that the fault of Tesla or of the governing body?
I'm pretty neurotic about vehicle safety and I still don't think this clearly points to Tesla as being in the wrong with how they market these features. At best it's subjective.
I've owned two Teslas (I'm now a Rivian/Porsche EV owner). Hands down, Tesla has the best cruise-control technology on the market. Therein lies the problem: Musk constantly markets this as self-driving. It is NOT. Not yet, at least. His mouth is way, way, way ahead of his tech.
Heck, stopping for a red light is a "feature," even though the car is perfectly capable of recognizing the light and stopping on its own. That alone should warrant an investigation, and it's a trap that I, as a highly technical user, completely fell for when I first got my model 7 delivered... ran through a red light trying out Autopilot for the first time.
I'm honestly surprised there are not more of these lawsuits. I think those defending Tesla are misinterpreting the law. The system has a lot of legalese safeguards and warnings, but the MARKETING is off. WAY OFF. And yes, users listen to marketing first.
And that ABSOLUTELY counts in a court of law. You folks would also complain about an obtuse EULA, and while this isn't completely apples to apples, Tesla absolutely engages in dangerous marketing speak around "auto pilot," eliciting a level of trust from drivers that isn't warranted and that Tesla should not be encouraging.
So sorry, this isn't a political thing (and yes, disclaimer: also a liberal).
Signed... former Tesla owner waiting for "right around the corner" self driving since 2019...
Are there clear guidelines set for labeling and marketing of these features? If not, I'm not sure how you can argue such. If it was so clearly wrong it should have been outlined by regulation, no?
> You should put yourself in the family’s shoes. If your daughter died in a car crash, you’d want to know exactly what happened, identify all contributing factors, and try to eliminate them to give some meaning to this tragic loss and prevent it from happening to someone else.
> It’s an entirely normal human reaction. And to make this happen in the US, you must go through the courts.
This (especially the very last point) is crucial. Whenever a big corporation makes any kind of error or mistake, more often than not it's immediately covered up and nothing is admitted publicly. But when a lawsuit is involved, the discovery process leads to the facts being uncovered, including what the company knows.
I am glad that they were able to uncover this. Someone I know lived in an apartment complex that was made uninhabitable due to an obvious fault on the owner's part, but they didn't get a straight answer about what had happened until they sued the owner and got the details in discovery, a couple of years after the incident. This is the only way to get to the facts.
If I were implementing such a system (and I have), I could see myself deleting the temporary file without much thought. I would still have built a way to recreate the contents of the tarball after the fact (that has been a requirement from legal every time I've scoped such a system). Tesla not only failed to do that, but avoided disclosing that any such file had been transferred in the first place, so the plaintiffs wouldn't know to request it.
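To make that concrete, here's a minimal sketch of the shape I mean, with made-up names: `records` stands for whatever retained store the snapshot fields come from, and `upload_to_bucket` stands in for the real storage client. This isn't Tesla's pipeline; the only point is that deleting the local tarball is harmless precisely because the archive can be rebuilt from retained data on request.

```python
import io
import json
import tarfile
import tempfile
from pathlib import Path

def build_archive(records: dict, dest: Path) -> Path:
    """Pack the retained snapshot records into a tarball at `dest`."""
    with tarfile.open(dest, mode="w:gz") as tar:
        for name, record in records.items():
            data = json.dumps(record).encode()
            info = tarfile.TarInfo(name=f"{name}.json")
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return dest

def upload_snapshot(records: dict, upload_to_bucket) -> None:
    """Build the tarball, upload it, then delete the temporary file."""
    tmp = Path(tempfile.mkdtemp()) / "snapshot.tar.gz"
    try:
        upload_to_bucket(build_archive(records, tmp))
    finally:
        tmp.unlink(missing_ok=True)  # deleting the temp copy is harmless...

def recreate_archive(records: dict, dest: Path) -> Path:
    """...because the contents can be reproduced from the retained records."""
    return build_archive(records, dest)
```

The recreate path only exists if it's written down as a requirement up front; bolting it on after a subpoena arrives is exactly the position you don't want to be in.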
Given that storage is a finite resource, keeping the tar around after it was confirmed in the bucket would be pure waste; removing it is routine housekeeping.
Even in that case though, you would still have a way to produce the data because it would have been specced in the requirements when you were thinking about the broader organizational context.
*: I can't work out from the article whether this file was erased, or just unlinked from the filesystem: they quote someone as saying the latter, but it looks like it was actually the former.
Fundamentally, flash memory is a bunch of pages. Each page can be read an essentially unlimited number of times, but there are very real limits on how many times you can write it.
In a simplistic system, let's say you have 1,000 pages: 999 hold static data and the last one keeps getting a temporary file written and erased. All the wear lands on page 1000, and it doesn't last very long.
In a better system, the controller notices that page 1000 is accumulating a lot of writes, picks whichever page has the fewest writes, copies that page's data over to page 1000, and uses the freed-up page for the frequent writes from then on. Repeat until everything wears down evenly. Note the extra write incurred by copying the page over.
In the real world, a drive with more free space on it is less likely to have to resort to copying pages.
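A toy model of that "better system," purely illustrative: real flash translation layers work on erase blocks with far smarter bookkeeping, and the swap threshold here is made up.

```python
# Toy wear-levelling model: one "hot" page takes all the temp-file writes;
# when its wear pulls too far ahead, swap it with the least-worn page.

PAGE_COUNT = 1000
WEAR_GAP = 100                       # made-up threshold for triggering a swap

pages = [b"static"] * PAGE_COUNT     # page contents
wear = [0] * PAGE_COUNT              # write count per page
hot = PAGE_COUNT - 1                 # page currently absorbing the temp writes

def write_temp(data: bytes) -> None:
    global hot
    cold = min(range(PAGE_COUNT), key=wear.__getitem__)
    if wear[hot] - wear[cold] > WEAR_GAP:
        # Relocate the cold page's static data onto the worn page (this is
        # the extra write mentioned above), then direct the frequent writes
        # to the previously cold page from now on.
        pages[hot] = pages[cold]
        wear[hot] += 1
        hot = cold
    pages[hot] = data
    wear[hot] += 1
```

Over many calls the wear spreads across all 1,000 pages instead of hammering one of them, at the cost of the occasional relocation write.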
I've made no contention, but if I had, it would be that whoever signed off on this design had better not have a PE license they would like to keep, and that we as an industry would be wise to stop counting on our grandfather-clause "fun harmless dorks" cultural exemption now that we manufacture machines that obviously kill people. If that counts as a conspiracy theory to you, so be it.
ETA: Restate your conspiracy theory in the hypothetical case that they had used `tar | curl` instead of the intermediate archive file. Does it still seem problematic?
I'm not going to argue with someone who throws gratuitous insults. Rejoin me outside the gutter and we'll continue, if you like. But the answer to your question is trivially yes, that is as professionally indictable, as might by now have been clarified had you sought conversation rather than - well, I suppose, rather than whatever this slander by you was meant to be. One hopes we'll see no more of it.
My autosteer will gladly drive through red lights, stop signs, etc.
And the fact that we have telemetry at all is pretty amazing. In most car crashes there's zero telemetry. Tesla is the exception, even though they did the wrong thing here.
So this is also a failure of the investigator.
Tesla deserves regulating.
That is, gamble that GOP alignment leads to regulatory capture such that the bar is lowered enough that they can declare the cars safe.
Even California's system is lax enough that you can drive a Tesla semi through it.
Tesla is comparatively a bull in a china shop. Raise your hand if you would trust Tesla over Waymo to autonomously drive your young children for 1,000 miles around a busy metro. That's what I thought.
It just looks stupid to me in a way that makes me more likely to discount your post.
Negative, cheesy, clickbait, rage inducing etc headlines do seem to get more clicks. There is a reason why politicians spend more time trash talking opponents than talking positively about themselves. Same goes with attack ads.
For an article that is supposed to at least smell like journalism, it looks so trashy.
Journalism is a thing of its own; blogs aren't it.
[0] https://www.thedrive.com/news/24025/electreks-editor-in-chie... ("Electrek’s Editor-in-Chief, Publisher Both Scoring $250,000 Tesla Roadsters for Free by Gaming Referral Program": "What happens to objective coverage when a free six-figure car is at play? Nothing good." (2018))
I have no doubt a majority of people will say they despise these pictures, much like YouTube thumbnails, yet the cold numbers tell the opposite story.
Tesla must pay portion of $329M damages after fatal Autopilot crash, jury says
https://news.ycombinator.com/item?id=44760573
So the solution to all of this is to lock up more executives who commit fraud or lie to the public.
Recently (after a 10 year battle) two former Volkswagen executives just got prison time for the Dieselgate scandal.
Wish it had come faster, but that's a good start.
https://www.lbc.co.uk/crime/volkswagen-execs-jailed-fraud-de...
Send the corporation to jail. That means it cannot conduct business for the same amount of time that we would put a person in jail.
The MCI Worldcom fraud, which broke shortly after Enron, might also have doomed AA (they were the auditor for both major frauds of 2002). MCI Worldcom filed for bankruptcy before it could be hit with criminal charges, and the SEC ended up operating MCI-W in the bankruptcy, because the fines were so large and are senior to all other debts, so they outmuscled all of the other creditors in the bankruptcy filings. Which was why they weren't hit with criminal charges- they already belonged to the Government. There hasn't been much stomach for criminal charges against a corporation ever since.
The fact that the Supreme Court has spent the past few decades making white collar crimes much harder to prosecute (including with Arthur Andersen, where they unanimously reversed the conviction in 2005) is another major factor. The Supreme Court has always been terrible, and gets far more respect than it deserves.
Make negligence unprofitable and the profit-optimizers will take care of the rest. Then, 80 years later, when people have gotten too used to "trustworthy" corporations, we can deregulate everything and repeat the cycle.
If there were actually a case to be made here, you would think we'd have a source that didn't read like it came with a fleck of spittle.
Buying SPY, my mistake. Being incentivized to put money in my 401k... That is a bit harder to solve.
Are you also arguing that Tesla didn’t withhold data, lie, and misdirect the police in this case, as the article claims? Seems to me that Tesla tried to look as guilty as possible here.
I agree with you that doesn’t matter when it comes to covering up/lying about evidence.
They could have been 0.5% at fault. Doesn’t mean that was ok.
Cruise had to shut down after less than this but, because Elon has political power over regulation now, a Tesla could drive right through a farmers market and they wouldn't have to pause operations even for an afternoon.
Does he still? I wouldn't be so sure.