Hacker News

> The NHTSA is tired of Tesla's hand-waving away their safety investigations into Autopilot by pushing stealth updates that fix specific scenarios in specific places being investigated.

Why isn't Tesla prosecuted for that? It's lawless!



No, that’s typical software development: find a bug in circulation, fix it, and deploy the fix. There are probably hundreds of internal issues that get fixed per normal protocol, as with any piece of software. Putting out a “recall” or alert for every modification to the code is pointless. What regulators need to do is keep up with the times: have their own safety test suite which manufacturers can test against, and be independently audited.


> No, that’s typical software development.

Software that controls multi-thousand pound machines at 70+mph isn't typical, and typical practices don't necessarily apply.


Yes, those practices absolutely shouldn't apply to self-driving cars. Good luck regression testing how a system change impacts the AI's handling of every edge case of traffic.


Waymo does this. Their infrastructure costs would be astronomical, though, if they weren't attached to a company with its own cloud.


It seems like the test suites for these deep learning based models are themselves almost comprehensive knowledge bases from which you could build a more traditional control software.


> They need to have their own safety test suite which manufacturers can test against

Coaxing regulators into producing a test that can be optimized for is exactly how we got the VW scandal.

> What regulators need to do is keep up with the times.

Keeping up with the times sounds awfully like allowing insane things because some whiz kid believes there's no difference between a car and a website.


It's typical software development when there's nothing at stake (such as human life). When human life is at stake, the controls on software reviews/changes/updates SHOULD be much tighter, but as there's no governing body to require this, it's on the developers themselves to do it right. Tesla is an example of a company that does not employ those controls in mission/life critical software.


Sorry, typical software development is for a national regulator to find some illegal feature in your website, and then you disable the feature for specific IP ranges or geofenced areas where the regulator's office is? No, I don't think it is.


Typical software development as practiced by Uber, perhaps


Yeah, well, just like half of US sites just flatly block Hetzner IPs (where I happen to have a VPN) because GDPR.


They've been accused of silently pushing updates to fix specific scenarios in order to gaslight the regulators.

Imagine an airbag incorrectly deployed sometimes, and the fix was to use a GPS geofence to disable the airbag entirely on the test track, but only on days when the regulator was trying to reproduce spurious airbag deployments, not on crash test days.
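Part of what makes the hypothetical above plausible is how trivial it would be to implement. A minimal Python sketch of the described anti-pattern — the coordinates, radius, and function names are all made up for illustration, not taken from any real system:

```python
import math

# Hypothetical location of a regulator's test track (illustrative only)
TEST_TRACK = (42.299, -83.699)  # (lat, lon)
GEOFENCE_RADIUS_M = 2000

def haversine_m(a, b):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + \
        math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(h))

def feature_enabled(vehicle_pos):
    """The anti-pattern: silently disable the buggy feature near the test site."""
    return haversine_m(vehicle_pos, TEST_TRACK) > GEOFENCE_RADIUS_M
```

The point is that nothing in the vehicle's visible behavior off the test track would change, which is exactly why regulators freezing updates on their test vehicles (per the article) is the sane countermeasure.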


That sounds like a cartoon villain. Here's an actual example:

Regulators were concerned after a car on non-FSD Autopilot (AKA Auto-Steer + Traffic Aware Cruise Control) hit an emergency vehicle parked halfway into the right lane of a highway due to driver inattention. Tesla quickly pushed an update that uses ML to detect emergency lights and slow to a stop until the driver pushes on the accelerator to indicate it is clear to go.

That's not cheating, that's life-saving technology. No other steer assist technology gets (or sometimes is even capable of getting) updates that fast.
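The stop-and-confirm behavior described above is essentially a small state machine. A purely illustrative Python sketch — the detector signal, accelerator flag, and state names are assumptions for the example, not Tesla's actual code:

```python
from enum import Enum, auto

class State(Enum):
    CRUISING = auto()
    STOPPING = auto()  # slowing to a stop after lights detected
    WAITING = auto()   # stopped, awaiting driver confirmation

def step(state, emergency_lights_detected, accelerator_pressed, speed_mps):
    """One control tick of the hypothetical stop-and-confirm logic."""
    if state is State.CRUISING and emergency_lights_detected:
        return State.STOPPING
    if state is State.STOPPING and speed_mps == 0:
        return State.WAITING
    if state is State.WAITING and accelerator_pressed:
        return State.CRUISING  # driver signalled it is clear to go
    return state
```

The design choice worth noting is that resuming requires an explicit driver action rather than the detector clearing, which keeps the human in the loop.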


> That sounds like a cartoon villain.

Contempt is such an overused tactic, and it never means anything anyway. Plus, it doesn't sound unrealistic to me.


What's to stop them from pushing an update that turns the workaround off because it leads to unexpected deceleration in odd lighting, or when there is an emergency vehicle on the other side of a divided freeway?

What process is used to make such decisions?


Evidence they are doing anything like this? Or are they fixing the "actual" issue?


Per the article, the regulators are sufficiently concerned that they're blocking all updates to their test vehicles.


[flagged]


Imagine a person that is totally ignorant of the fact that major corporations regularly engage in fraud and is willing to give them all a pass.


Imagine a scenario almost identical to a simple mixture of ones that were actually proven to have happened and confessed to by the CEOs of two corporations (VW and Uber).


No, it is not. You sure don't do it in aerospace: each change is verified, and the entire system is validated prior to a release.


> No, that’s typical software development.

It's not typical software development in life-critical systems. If you think it is, you should not be working on life-critical systems.


Releasing software updates is normal in life-critical systems. Can't believe you are arguing differently.


Narrator: (s)he doesn’t


> typical software development.

So if a pharmacy swindles you out of your money or gives you fake drugs, I should reply "that's just a typical drug dealer"?


It’s not typical for safety critical systems. A car isn’t a Web 3.0 app and shouldn’t be updated in the same way.


> No, that’s typical software development.

Cars will never be software, much like pacemakers and ICDs won't ever be software.


It really is insane. It’s one thing to have flaws; it’s quite another to stealthily cover them up like it’s a game of hide-and-seek.


Wut? You want Tesla prosecuted because they are fixing issues over the air?

If the NHTSA thinks there is a safety issue with Tesla Autopilot, it will require Tesla to… fix it. Perhaps remotely.



