"In order to prove correctness you have to prove that the procedure gives the error bounds you claim"
You don't get to just add your own requirement for correctness and then force people to prove it.
They claim a specific thing - they prove that thing.
That thing suffices to prove that it succeeds at matrix multiplication. You for some reason really just don't like that as far as I can tell, and argue it doesn't suffice for usefulness (which I agree on).
Matrix multiplication, and "essentially any other numerical algorithm", is not defined in terms of the error bounds for correctness. That is just BS. The error bounds depend on implementation factors, and as such, they are totally unrelated to correctness.
Let's take a look: https://en.wikipedia.org/wiki/Matrix_multiplication
I have read the entire definition; nowhere does it refer to error bounds as a requirement for successful matrix multiplication!
The word "error" does not even appear on the page.
Since it's Wikipedia, I also pulled out my college math books. Same thing.
They prove correctness without any reference to error bounds. Those are accepted proofs.
I don't see a single basic proof that has error bounds as part of correctness.
It, again, wouldn't make any sense, because error bounds depend on implementation factors.
So again, you simply can't add your requirement to correctness just because you like it. They still remain where they should be - usefulness for application.
You seem to be confusing the mathematical definition of matrix multiplication with an explicit algorithm to compute it. The wiki page you linked is almost entirely about the abstract mathematical definition and talks about algorithms only for a single paragraph, which is about computational complexity. If you look at the wiki page for a particular algorithm (e.g. Strassen: https://en.wikipedia.org/wiki/Strassen_algorithm) then of course they talk about stability in comparison to the naive algorithm (although the page could be improved quite a bit).
If your college textbooks do not mention error analysis, conditioning and stability then they are not numerical linear algebra books worthy of the name. Check out a reference like Trefethen and Bau's Numerical Linear Algebra for example. This book has a whole part (out of the 7 parts in the book) talking about conditioning and stability, and these ideas are present throughout other parts as well.
Once more the type of analysis I'm talking about is emphatically not implementation dependent. It is a property of the algorithm itself. For an example of the sort of statement I mean check out theorem 3.1 of this paper: https://arxiv.org/abs/math/0603207. If you disagree with me then please indicate what sort of "implementation factors" appear in the statement of the theorem.
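To illustrate the flavor of statement I mean (a sketch of my own, not the paper's actual theorem 3.1): the classic normwise bound for conventional floating-point matrix multiplication has the shape ‖Ĉ − C‖ ≤ n·u·‖A‖·‖B‖, where u is the unit roundoff of the chosen precision. Every quantity in it is determined by the algorithm and the arithmetic model, not by any particular computer:

```python
# Sketch: check the classic normwise error bound for float32 matmul,
#   ||C_hat - C|| <= n * u * ||A|| * ||B||   (Frobenius norms here),
# where u is the unit roundoff. Note what the bound depends on: the
# dimension n and the precision u -- no machine-specific "implementation
# factors" appear.
import numpy as np

rng = np.random.default_rng(0)
n = 64
A = rng.standard_normal((n, n)).astype(np.float32)
B = rng.standard_normal((n, n)).astype(np.float32)

C_hat = (A @ B).astype(np.float64)               # computed in float32
C = A.astype(np.float64) @ B.astype(np.float64)  # near-exact reference

u = np.finfo(np.float32).eps / 2                 # unit roundoff of float32
err = np.linalg.norm(C_hat - C, 'fro')
bound = n * u * np.linalg.norm(A, 'fro') * np.linalg.norm(B, 'fro')
assert err <= bound   # the algorithm-level bound holds on this machine too
```

Running the same script on any IEEE-754 machine gives a (different) nonzero error that still sits under the same bound, which is exactly the sense in which the bound is a property of the algorithm.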
Correctness of a mathematical algorithm is of course defined by whether it meets a mathematical definition. In this case, correctness of matrix multiplication is only defined by whether it meets the mathematical definition of matrix multiplication. That's it.
That's the whole thing.
Correctness of all computer science algorithms is defined by whether they meet a particular algorithmic specification.
That's literally the definition of correctness:
"In theoretical computer science, an algorithm is correct with respect to a specification if it behaves as specified"
The specification is the matrix multiplication definition given right on that Wikipedia page. This algorithm meets it.
It is therefore correct.
(again, "error" does not appear on the page here either).
There is no separate, special definition for "correctness (mathematical)" or "correctness (eigenket)". You really seem to want there to be one, but it ain't there.
You really really really don't want to let this go, but the problem is: nothing, anywhere, agrees with you that correctness and usefulness are the same thing. Nor can you cite any reference, like I just did, to a definition of correctness that requires anything other than meeting the specified mathematical definition. Your papers don't do it, my books don't do it. Nothing does it. Because it's not a thing.
My college textbooks talk about error analysis. I did not claim otherwise. They talk about it in the context of how to make an algorithm useful for a particular purpose, not about correctness. They are not making a ridiculous claim like you are.
I'm not going down this path anymore with you. Believe what you want; the rest of us will continue not to confuse the two, and the sources people look at (textbooks, Wikipedia, etc.) will continue to not lead them astray.
I can only hope that at some point, you too stop trying to do so.
If you look up numerical algorithm in Wikipedia you will find plenty of discussion of error bounds. Of course matrix multiplication performed on exact numbers doesn't have errors, the calculation is exact. Computers and the algorithms they run do not have that luxury when dealing with approximations.
Of course you will, because those are about how they are implemented on computers with limited precision.
Your point is exactly mine - correctness is usually defined on exact numbers, where there are no error bounds.
Usefulness is defined by particular implementation choices and precision choices when implemented on a particular computer.
The entire argument here is (crazily) that you can't prove correctness without error bounds, and that correctness is always context specific.
Of course you can, and of course it's not.
Just like the Wikipedia algorithm shows.
That may or may not make it useful for a particular application.
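To make the exact-versus-floating-point distinction concrete (a sketch of my own, not from either comment above): the same trivial sum, evaluated in exact rational arithmetic, meets its specification exactly; evaluated in floats, it accumulates rounding error, which is where error bounds become relevant:

```python
# Sketch: one computation, two arithmetics. Exact rationals satisfy the
# mathematical specification with zero error; floats only approximate it.
from fractions import Fraction

exact = sum([Fraction(1, 10)] * 10)   # exact arithmetic: 10 * 1/10
approx = sum([0.1] * 10)              # float arithmetic

assert exact == 1        # exactly 1: meets the specification
assert approx != 1.0     # accumulated rounding error in floats
```

The specification ("add ten copies of one tenth") never mentions error; error only shows up when you pick a finite-precision representation, which is an implementation concern.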