While I understand the motivations for the investigation, and fully agree it's an investigation worth conducting, I don't think it actually has merit, at least not in the terms framed by the article.
In most circumstances it "probably is better than a human driver"; not in all circumstances, and not proven (hence the "probably").
But if LM can push the F-35 as the best jet ever while it racks up a wreckage count to make the 737 Max jealous, the claims made by Tesla aren't even in the same ballpark...
It would be a real shame if we lose any chance of full self driving over a misunderstanding about the system being "not perfected yet"
I know it's top of mind when I'm commuting in my F-35. /s
Comparing a military jet, whose operation requires thousands of hours of training just to get off the ground, with a mass-produced vehicle is not valid, at least in my mind. Plus, what LM pushes to its stockholders is definitely not as comprehensive as what the Pentagon gets.
> It would be a real shame if we lose any chance of full self driving over a misunderstanding about the system being "not perfected yet"
It would be. But there's no misunderstanding here. Tesla continues to misrepresent its self-driving capabilities in marketing materials, after years of feedback that the tech hasn't caught up to the promises. If Tesla's profit-seeking dishonesty were to undermine the entire industry, that would be a shame.
"It would be a real shame if this snake oil were banned before we had a chance to figure out what it can cure" might be a better framing.
Can you cite an example of an obviously misleading claim in Tesla marketing materials? People keep making these claims as if they are inherent and obvious truths. From my browsing of the Tesla site it does not seem misleading at all.
> In most circumstances it "probably is better than a human driver", not all circumstances, and not proven (hence probably).
That's the core issue here, really. Is Autopilot more or less safe than a human driver? If it's safer, then it's hard to see how there's any criminal liability here. (And "fraud" judgements over "the car doesn't really drive itself" would be limited to a refund of the purchase price, on vehicles that sell used for more than their sticker price.)
And... is it less safe? Tesla publishes their own data saying not only is it safe, it's MUCH safer. Every time the subject comes up, people show up to whatabout and explain how that's not a good data set. But does anyone have a better one? Does the DoJ?
I was making this point last year, when there were half as many Teslas on the roads: there are millions of these cars now, and every one has Autopilot[1]. Any notable adverse safety signal would be glowing like hot plutonium if it existed. It probably doesn't. Teslas seem to be safe; they're almost certainly safer than the median vehicle.
[1] The headline obscures it, but the text makes clear the investigation is about the Autopilot product, not anything branded FSD.
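To put numbers on what's being argued about here, a back-of-envelope sketch. Every figure below is a placeholder, not Tesla's or NHTSA's published data; the point is that the headline ratio is trivial arithmetic, and it's the cohort confounders everyone whatabouts over that are hard:

    # Back-of-envelope crash-rate comparison. All figures are hypothetical
    # placeholders, not Tesla's or NHTSA's published numbers. The ratio is
    # trivial to compute; controlling for the confounders is the hard part.
    miles_per_crash_autopilot = 4_000_000   # hypothetical
    miles_per_crash_national = 500_000      # hypothetical

    ratio = miles_per_crash_autopilot / miles_per_crash_national
    print(f"Autopilot looks ~{ratio:.0f}x safer per mile, before adjusting"
          f" for road type, fleet age, weather, driver demographics...")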
Exactly why I think the investigation is justified, even though I believe they are safer than cars without Tesla's Autopilot.
There is a risk Tesla is hiding data; if they are, the investigation should surely find it. If not, then those hating on Autopilot can cry into their caramel lattes while we move step by step closer to being able to nap in the back seat while travelling alone cross-country.
Is there? That's a weird frame to argue in. There are existing regulations on accident statistics and reporting. Tesla does that. That they also release data split out by autopilot usage certainly can't be construed as "hiding" anything, given that (AFAIK) other manufacturers have varying levels of assisted driving technology too and don't do any public reporting at all.
There's always a risk. Of course it is possible people in Tesla have engaged in criminal behaviour. I don't think it's a particularly high risk, though.
But others do, and it is possible the full story isn't being told. In fact, I would go as far as saying we can be sure the full story isn't being told, for the simple fact that they must have some evidence to base an investigation on; that evidence is just not given in the article.
> It would be a real shame if we lose any chance of full self driving over a misunderstanding about the system being "not perfected yet"
We can still have self driving cars, but they should be developed within a culture that values safety. Tesla is not such a culture. We know this because after the first accident that resulted in decapitation, Tesla collectively shrugged and made the problem worse by removing sensors, which predictably resulted in a second decapitation. They collectively shrugged after that one as well, and again made the problem worse by removing more sensors.
Tesla does not value safety, and their YOLO attitude toward driverless cars, in which the general public is forced to participate in their beta test whether we like it or not, is holding the driverless car industry back. They are not friends of the cause, and the sooner they are prevented from running beta tests on the general public (which have caused deaths), the sooner the industry as a whole can move forward. Reckless engineering by Tesla will not result in a net gain in safety for everyone. Safety is hard even when done intentionally; it won't be achieved as a second-order effect of Tesla's "move fast and break things" ethos.
3 years ago.
It's entirely capable of negotiating public roads with no user input.
Absolutely nowhere has Tesla said (as far as I have seen, anyway) "Teslas can fully self-drive with 100% safety on public roads".
But hey, the "hate Elon Musk regardless of the facts" crew is out in force. Personally I think they are worse than the "believe everything Musk says will be ready next year" crew, but neither is worth the downvotes.
This is a weird framing. Are Teslas unsafe? Either they are or they aren't, right? Are other cultures that "value" safety producing safer cars? If they're not, does that say anything about the value of "values"? What's the goal here, values or safety?
It was an expression. Certainly you agree it's quantifiable, right? (Unlike "values".) Questions of the form "are accidents, as defined this way, blah blah blah blah, more or less likely to occur in a Tesla than in a member of this other suitably defined vehicle cohort, blah blah blah"... are answerable in a binary fashion. Right?
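To be concrete, here is roughly what "answerable in a binary fashion" looks like in practice, with made-up counts. This sketch uses a Poisson rate comparison, which is one of several reasonable test choices, not the only one:

    import math

    # Hypothetical crash counts over exposure (millions of miles driven).
    crashes_a, miles_a = 250, 1000.0   # cohort A (say, Teslas) - placeholder
    crashes_b, miles_b = 900, 1000.0   # cohort B (comparison)  - placeholder

    rate_a = crashes_a / miles_a       # crashes per million miles
    rate_b = crashes_b / miles_b
    # z-test on the difference of two Poisson rates.
    se = math.sqrt(crashes_a / miles_a**2 + crashes_b / miles_b**2)
    z = (rate_a - rate_b) / se
    print(f"rate A = {rate_a:.2f}, rate B = {rate_b:.2f}, z = {z:.1f}")
    # |z| > 1.96 means the gap is unlikely to be chance at the 95% level:
    # a yes/no answer, once the cohorts are "suitably defined".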
What they're doing by removing different types of sensor is *simplifying* the Tesla system design and bringing it closer to human senses (i.e. eyesight alone).
Apparently Hacker News thinks humans are safer than Autopilot. So why wouldn’t we advocate a highly advanced vision-based model in cars, rather than a complex, awkwardly synchronised fusion of different classes of sensor?
Take LiDAR, for example. Some claim it’s superior to Tesla’s vision sensors. But LiDAR can’t detect colour, so how will it read traffic lights? Its model of the world will have to be synced up to a camera vision-based model of the world. Syncing two 3D (4D in fact) models precisely is a pretty tough problem to solve. Complexity becomes a risk in its own right.
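To make the sync problem concrete, here's a toy sketch of just the simplest piece: pairing each LiDAR sweep with the camera frame nearest in time. The sensor rates are hypothetical, not any particular vendor's design, and this ignores motion compensation entirely:

    from bisect import bisect_left

    def nearest_frame(camera_ts: list[float], lidar_t: float) -> float:
        """Return the camera timestamp closest to a LiDAR sweep time."""
        i = bisect_left(camera_ts, lidar_t)
        return min(camera_ts[max(i - 1, 0):i + 1],
                   key=lambda t: abs(t - lidar_t))

    camera_ts = [k / 36.0 for k in range(36)]   # ~36 Hz camera (hypothetical)
    lidar_ts = [k / 10.0 for k in range(10)]    # ~10 Hz LiDAR (hypothetical)

    for lt in lidar_ts:
        ct = nearest_frame(camera_ts, lt)
        # Best-case pairing is still off by up to ~14 ms at these rates; at
        # highway speed that's ~0.4 m of disagreement between the two world
        # models unless you motion-compensate as well.
        print(f"lidar @ {lt:.3f}s -> camera @ {ct:.3f}s"
              f" (skew {abs(ct - lt) * 1000:.1f} ms)")

And that's before fusing the two models' detections, which is where the real complexity lives.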
Eyesight is a combination of the sensory organ and the processing that it feeds into. That is, it includes your brain. Unfortunately Tesla's "brain" is vastly more stupid than most adult humans'. Extra sensory organs are quite a reasonable way to compensate for reduced cognitive capability. So, this idea of simplification seems dishonest to me.