Sadly, that’s the worst way to actually design the system. I’d rather have two different technologies working together, with different failure modes. Not using radar (especially in cars that are already equipped) might make economic sense to Tesla, but I’d feel safer if visual processing was used WITH radar as opposed to instead of radar.
I also expect an automated system to be better than the poor human in the driver's seat.
You have to eventually decide to trust one or the other, in real time. So having multiple failure modes doesn't solve the problem entirely. This is called 'sensor fusion': you have to fuse information coming from multiple sensors together. There are trade-offs: while you gain different views of the environment from different sensors, the fusion becomes more complicated and has to be sorted out in software, reliably and in real time.
> There are trade-offs: while you gain different views of the environment from different sensors, the fusion becomes more complicated and has to be sorted out in software, reliably and in real time.
If you're against having multiple sensors, though, the rational conclusion would be to have just one sensor; but Tesla would be the first to tell you that one of the advantages their cars have over human drivers is that they have multiple cameras looking at the scene already.
You already have a sensor fusion problem. Certainly, more sensors add some complexity to the problem. However, if you have one sensor that is uncertain about what it is seeing, having multiple other sensors, particularly ones with different modalities that might not struggle in the same circumstances, makes it a lot easier to get to a good answer reliably and in real time. Sure, in unusual circumstances you could have increased confusion, but you're far more likely to have increased clarity.
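To make that concrete, here's a toy sketch (my own illustration, not anyone's production code) of the standard inverse-variance weighting trick behind Kalman-style fusion of two noisy range estimates. The sensor roles and numbers are invented:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Combine two noisy estimates of the same quantity.

    Each estimate is weighted by the inverse of its variance, so the
    less certain sensor contributes less to the fused answer.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Hypothetical scenario: camera is confused by glare (high variance),
# radar is confident.
camera_range, camera_var = 48.0, 25.0   # metres, metres^2
radar_range, radar_var = 52.0, 1.0

print(fuse(camera_range, camera_var, radar_range, radar_var))
# -> (~51.85, ~0.96): the fused estimate leans on radar, and its
#    variance is lower than either sensor's alone.
```

The point of the toy example: the uncertain sensor doesn't add confusion, it just gets down-weighted, and the combined estimate ends up more certain than either sensor on its own.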
This is one side of the argument. The other side is that what matters more than the raw sensor data is constructing an accurate representation of the actual 3D environment. So an argument can be made (which is what this guy and Tesla are gambling on and have designed the company around) that the construction and training of the neural network outweighs the importance of the actual sensor inputs, in the sense that even only two eyes (for example) are enough when combined with the ability of the brain to infer the actual position and significance of real objects for successful navigation. So as a company with limited R&D and processing bandwidth, you might want to devote more resources to machine learning rather than sensor processing. I personally don't know what the answer is, just saying there is this view.
The whole point of the sensor data is to construct an accurate representation of the actual environment, so yes, if you can do that, you don't need any sensors at all. ;-)
Yes, in machine learning, pruning down to higher-signal data is important, but good models are absolutely amazing at extracting meaningful information from noisy and diffuse data; it's highly unusual to want to dismiss a whole domain of sensor data. In the cases where one might do that, it tends to be only AFTER achieving a successful model that you can be confident it's the right choice.
Tesla's goal is self-driving that consumers can afford, and I think in that sense they may well be making the right trade-offs, because a full sensor package would substantially add to the costs of a car. Even if you get it working, most people wouldn't be able to afford it, which means they're no closer to their goal.
However, I think for the rest of the world, the priority is something that is deemed "safe enough", and in that sense it seems very unlikely (more specifically, we're lacking the telltale evidence you'd want) that we're at all close to the point where you wouldn't be safer with a better sensor package. That means they're effectively sacrificing lives (in terms of both risk and time) in order to cut costs. Generally, when companies do that, it ends in lawsuits.
> You have to eventually decide to trust one or the other, in real time.
More or less. You can take that decision on other grounds - e.g. "what would be safest to do if one of them is wrong and I don't know which one?"
The system is not making a choice between two sensors, but determining a way to act given unreliable/contradictory information. If both sensors allow for going to the emergency lane and stopping, maybe that's the best thing to do.
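A toy sketch of that policy, with invented sensor flags and actions, just to illustrate "pick the action that's safe under both readings" rather than "pick the winning sensor":

```python
def plan(camera_clear: bool, radar_clear: bool) -> str:
    if camera_clear and radar_clear:
        return "continue"   # sensors agree: path is clear
    if not camera_clear and not radar_clear:
        return "brake"      # sensors agree: obstacle ahead
    # Sensors disagree: don't guess which one is right; take the
    # action that is acceptable under BOTH hypotheses.
    return "slow and move to emergency lane"

print(plan(camera_clear=True, radar_clear=False))
# -> "slow and move to emergency lane"
```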
It's far from the worst way, because if humans are visually blinded by the sun or snow or rain, they will generally slow down and expect the cars around them to do the same.
Predictability, especially around failure cases, is a very important feature. Most human drivers have no idea about the failure modes of lidar/radar.