> We're approaching the "deadly valley" - automatic driving that's almost good enough that the driver can stop paying attention.
This sounds to me like the "technology right now is crossing a distinct line into doom" fallacy. It seems like the same could have been said when automobiles started being operated by average folks rather than professional chauffeurs, when the synchromesh was invented, automatic transmissions, antilock brakes, cruise control, adaptive cruise control, and so on.
We've seen this happen with airplanes, where autopilots result in human pilots no longer being fully aware of their environment and their context within it. So when the autopilot alarms, the pilot lacks the context of the problem and often makes a naive mistake as a result (a naive mistake that has fatal consequences).
There was a good article discussing this recently on here, a quick search should find it for more details.
So, I agree with the parent that semi-autopilot-like features are probably quite risky when intervention is required. At least with current generations; I think we can also make better designs that address these concerns.
We've seen airplane crashes caused by pilots misunderstanding or ignoring autopilot features. But have we seen air travel become more dangerous on net because of autopilot features? I highly doubt it.