So I can't help but wonder: if the Tesla Autopilot system has forward-looking radar with a 160 m range, why wouldn't it notice it was about to drive into something and apply the brakes?
I've read that these systems work by assigning a probability to each of the sensor inputs and choosing the most likely guess at what's really happening (or something like that). But shouldn't even a low probability of "you're about to crash and die" carry more weight than a high probability of "it looks like the road goes this way"?
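To sketch what I mean: the difference is between picking the most probable hypothesis and picking the action with the lowest expected cost. All the numbers and labels below are made up purely for illustration, not anything from Tesla's actual system:

```python
# Toy expected-cost sketch: a rare-but-catastrophic hypothesis should
# dominate the decision once outcomes are weighted by severity.
# Probabilities and costs are invented for illustration only.

hypotheses = [
    # (description, probability, cost if we ignore it and keep driving)
    ("road curves ahead, keep driving", 0.95, 1),        # minor inconvenience
    ("obstacle ahead, about to crash",  0.05, 100_000),  # catastrophic
]

# Naive rule: act on the single most likely hypothesis.
most_likely = max(hypotheses, key=lambda h: h[1])

# Risk-weighted rule: act on the hypothesis with the largest
# expected cost (probability * severity).
# 0.05 * 100_000 = 5000 swamps 0.95 * 1 = 0.95.
riskiest = max(hypotheses, key=lambda h: h[1] * h[2])

print(most_likely[0])  # -> road curves ahead, keep driving
print(riskiest[0])     # -> obstacle ahead, about to crash
```

Under the first rule the car happily follows the road; under the second, even a 5% chance of a crash justifies braking. That's the asymmetry I'd expect a safety-critical system to encode.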