> Humans are bad at reasoning about how failures cascade, so we implement bright line rules about when it's safe to deploy.
I think aggregate human intuition is often undervalued. Every bright-line rule has a cost, and the total cost of adhering to it must be weighed against the occasional cost of failing to adhere.
But benefits don't exist in a vacuum.