I'm talking about iptables rate limiting (on Linux; I assume other operating systems' firewalls can do something similar). Fixing bugs in isolated code is part of the job, but preventing the business and its customers from suffering because of a bug is also part of what we get paid for. If you are still in school or work in a scientific field, perhaps you haven't come across rate limiting?
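If not, here is a minimal sketch of the idea using the iptables "recent" module (port 22, the list name and the thresholds are just illustrative; point it at whatever service you're protecting):

    # Track each new connection to port 22 by source address
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
        -m recent --set --name THROTTLE
    # Drop the 4th and later new connections from one source within 60 seconds
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
        -m recent --update --seconds 60 --hitcount 4 --name THROTTLE -j DROP

It won't fix a timing leak by itself, but it cuts the number of guesses an attacker gets per minute from thousands to a handful.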
Fixing the leak today is admirable but challenging, as the many comments have shown. The problem is that regressions can and will happen at any time. There was a problem on Ubuntu with SSH a couple of years ago: SSH had been fine, then someone made a change (I don't recall the details) that went unnoticed for, I think, two or three years. That change made SSH vulnerable to what I believe were timing attacks, and it could easily have been prevented. This was in SSH. SSH.
No, the change you're thinking of is when Debian's maintainers managed to comment out the "secure" part of its cryptographically secure random number generator, thus ensuring that SSH would only generate keys from a trivially small range of possible values. That change had nothing to do with the fundamental difficulty of generating random numbers.
The fix for not leaking timing from your comparison is trivial. Either double-hash, or use a timing-independent comparison function like the accumulator XOR function upthread.
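To make that concrete, here's a rough sketch of both options in Python (the function names are mine; the accumulator-XOR idea is essentially what the standard library ships as hmac.compare_digest):

    import hashlib
    import hmac

    def xor_accumulator_equal(a: bytes, b: bytes) -> bool:
        # Timing-independent comparison: always walk the full length and
        # OR each byte difference into an accumulator, so run time does
        # not depend on where the first mismatch occurs.
        if len(a) != len(b):
            return False
        acc = 0
        for x, y in zip(a, b):
            acc |= x ^ y
        return acc == 0

    def double_hash_equal(candidate: bytes, expected: bytes) -> bool:
        # Double-hash variant: compare digests rather than the raw values,
        # so even a leaky comparison only reveals bytes of a hash the
        # attacker cannot steer byte by byte.
        return hashlib.sha256(candidate).digest() == hashlib.sha256(expected).digest()

    # The stdlib constant-time comparison most people should reach for:
    assert hmac.compare_digest(b"token", b"token")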
It does nobody any favors to spread drama and FUD over what is in fact a simple and easily fixable problem.