I don't see how that can be true. Lack of precision is one thing; lack of stability is very different.
Instability leads to divergence from the true answer, and I would expect that divergence to be super-linear, which would quickly destroy any meaningful result (=> chaotic behaviour). But I'm not an expert in this.
In practice this doesn't happen, because numerically unstable NNs tend to have bad loss. A simple way to see this: instability means the network is highly sensitive to its inputs, so it gives wildly different outputs for essentially the same input, which is wrong. Likewise, if the weights are such that you get dramatic overflow/underflow that prevents correct predictions, that shows up as high loss, and training will move toward parameters where the rounding errors don't blow up.
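Here's a toy NumPy sketch of that last point (my own illustration, not anything rigorous): a naive float32 sigmoid + cross-entropy, where well-scaled weights give a moderate loss but weights driven into the overflow regime saturate the predictions and pin the loss at a huge value. The inputs, labels, and weight scales are all made up for the demo.

```python
import numpy as np

def sigmoid(z):
    # Naive float32 sigmoid: for large negative z, exp(-z) overflows to inf,
    # which rounds the output to exactly 0.0 (and to exactly 1.0 on the other side).
    with np.errstate(over="ignore"):
        return (1.0 / (1.0 + np.exp(-z.astype(np.float32)))).astype(np.float32)

def bce(p, y, eps=1e-12):
    # Binary cross-entropy; eps only keeps log() finite so the losses compare.
    return float(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

x = np.array([0.5, -1.0, 2.0, -0.5], dtype=np.float32)  # toy one-feature inputs
y = np.array([1.0, 0.0, 1.0, 0.0], dtype=np.float32)    # labels matching sign(x)

# A reasonably scaled weight vs. one blown up into the overflow regime,
# with the wrong sign so the saturated predictions are confidently wrong.
print(bce(sigmoid(1.0 * x), y))    # moderate logits -> loss around 0.35
print(bce(sigmoid(-1e6 * x), y))   # saturated logits -> loss pinned near -log(eps) ~ 27.6
```

Gradient descent on the second network sees an enormous loss (and, without the eps, literal infinities/NaNs), so the training signal itself pushes the parameters back out of the regime where float32 rounding destroys the computation.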