Hacker News

Every time you disengage it invites you to leave immediate voice feedback as to why, and presumably they are using all this feedback in conjunction with camera and data feeds from cars that are opted in (which includes all FSD beta cars I believe).

So, they are getting what they need to make it better.



> Every time you disengage it invites you to leave immediate voice feedback as to why

lmao do people do this for free?


You're still driving, so it doesn't take any time. And not only can it lead to bug fixes, it can be cathartic to complain.


Worse, they actually think it matters and that Tesla looks at the reports or uses them for anything.


The problem with Tesla is the lack of LiDAR, not training data.

You need accurate bounding-box detection to determine whether a person on a billboard is real, or whether that dog with a hat should be avoided.

Research has conclusively shown that vision-only systems simply can't match LiDAR at this task.
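To illustrate the billboard point: LiDAR gives you per-point range, so a detection whose returns all lie at nearly the same depth is probably a flat surface rather than a 3D body. Here's a minimal sketch of that heuristic; the function name, the 0.3 m flatness threshold, and the simulated point clouds are all hypothetical, not anything Tesla or a LiDAR vendor actually ships.

```python
import numpy as np

np.random.seed(0)  # make the simulated returns reproducible

def is_flat_surface(depths: np.ndarray, flatness_threshold: float = 0.3) -> bool:
    """Heuristic: if the LiDAR range readings falling inside a detection's
    frustum all lie at nearly the same depth, the 'object' is likely a flat
    surface such as a billboard, not a 3D body. Threshold is an assumption."""
    # Robust depth spread: 5th-to-95th percentile range of the returns.
    spread = np.percentile(depths, 95) - np.percentile(depths, 5)
    return bool(spread < flatness_threshold)

# A billboard: every return comes from roughly the same plane (~12 m away).
billboard = np.random.normal(loc=12.0, scale=0.02, size=200)

# A real pedestrian: returns span the body's depth, and gaps between limbs
# hit the background a meter or so behind.
pedestrian = np.concatenate([
    np.random.normal(12.0, 0.15, 150),   # torso and limbs
    np.random.normal(13.5, 0.10, 50),    # background seen through gaps
])

print(is_flat_surface(billboard))   # True  (flat -> likely a billboard)
print(is_flat_surface(pedestrian))  # False (depth structure -> real object)
```

A camera-only system has to infer that same depth structure indirectly from appearance or motion, which is exactly where the flat-billboard failure mode comes from.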


Don't humans drive with vision only?


We have the world's most advanced supercomputer behind our eyes.

And we also can move our eyes around in three dimensions to infer depth.
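The "moving viewpoints give depth" idea is just triangulation: a point seen from two positions separated by a baseline shifts across the image, and the shift (disparity) is inversely proportional to depth. A toy sketch under a pinhole stereo model; the focal length and the ~6.5 cm baseline are hypothetical round numbers, not measurements.

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Pinhole stereo model: depth Z = f * B / d, where f is focal length
    in pixels, B the baseline between viewpoints in meters, and d the
    pixel disparity. Nearby points shift a lot; distant points barely move."""
    if disparity_px <= 0:
        return float("inf")  # no measurable shift -> effectively at infinity
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 1000 px focal length, 6.5 cm "eye" separation.
print(depth_from_disparity(1000, 0.065, 10.0))  # roughly 6.5 m
print(depth_from_disparity(1000, 0.065, 1.0))   # roughly 65 m
```

The same relation shows why depth from vision degrades with distance: at 65 m the disparity is a single pixel, so small measurement noise produces large depth error, whereas LiDAR's range accuracy is roughly constant with distance.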


You're right that one problem may be that humans don't drive with their heads fixed in place; they move their necks and constantly adjust viewing angles.

w.r.t. the supercomputer point, AI systems have been able to outperform humans at specialized tasks for a while.


Vision feeding into trained general intelligence.


We usually use our ears, too.



