Something that was posted on HN once or twice before was EWD 1036: "On the cruelty of really teaching computer science", by Dijkstra.
It makes, in my opinion, a much more lucid argument than this article as to why a firm background in math, particularly theoretical math, is important for programmers of all sorts. It's definitely something I'd read if you're thinking about the role of math in computer science and programming as fields.
Colin, I'm curious about your take on the essay. I have to admit that while I did read it, I did not engage with it deeply because I felt that the author created strawman views, particularly with regards to the people mentioned and the "Lisp school of programming." Since I found the premises so confused, I didn't have the energy to disentangle the real point from them.
(Part of my curiosity on your take comes from the fact that you commented on a similarly themed essay I wrote some time back.)
I've put it on my list of things to write about. It's a long list, but I hope to knock a few things off it over the holiday. Thanks for asking - I'll post here if/when I get something done.
Looks like you re-posted your own old submission. Nothing wrong with it I guess since people are still interested judging from upvotes. But that would be one of the reasons "the quality of HN threads is going down" as many complain. Some older users don't even want to comment anymore, since it's all the same thing re-posted all over again. Well that and bitcoin/NSA "news".
Another problem is that lgamma runs in constant time but is accurate only over a finite range. To extend the range, the polynomial approximation needs more terms, so the “accurate” version no longer runs in constant time. And for enough precision you must switch to the “long” (extended-precision) floating-point type. The complete analysis is complicated and I don’t have enough time to do it.
The recursive function runs in linear time only for small numbers. It uses only integer arithmetic, which is faster. But if the numbers are big enough you must use “long” (arbitrary-precision) integers, and then the running time is probably quadratic.
So it’s not clear which version is better: not for small numbers, not for big numbers, and not asymptotically.
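To make the accuracy side of this trade-off concrete, here is a small Python sketch (Python's `math.lgamma` is assumed as a stand-in for whatever C implementation the thread has in mind). It approximates n! via exp(lgamma(n+1)) and compares against the exact integer result: the lgamma route is fast and exact for small n, but once the factorial no longer fits in a double it can only be approximate.

```python
import math

def factorial_via_lgamma(n):
    # lgamma runs in (roughly) constant time, but it returns a double,
    # so the result is only exact while n! fits comfortably in 53 bits
    # of mantissa and the accumulated rounding error stays below 0.5.
    return round(math.exp(math.lgamma(n + 1)))

# Exact for small n...
for n in range(1, 14):
    assert factorial_via_lgamma(n) == math.factorial(n)

# ...but for larger n the float result drifts from the exact integer,
# which Python's arbitrary-precision math.factorial still computes exactly.
print(factorial_via_lgamma(25))
print(math.factorial(25))
```

The exact cutoff where the two diverge depends on the quality of the platform's lgamma, which is part of why the "complete analysis" above is genuinely messy.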
Typically you wouldn't use lgamma to compute exact values of factorials at all. For a real world example of this function in action, look at John D. Cook's blog:
In general, it's application-dependent. But often, in applications, the factorial enters multiplicatively: it appears in products with other quantities, and the whole thing can easily underflow or overflow.
So what you often really want is the log of the factorial. (If someone gave you the exact integer factorial for free, the first thing you'd do is take its log.) That's why they implemented lgamma(). The error guarantees it gives are in the log domain, which is (often, not always) what you want, anyway.
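A typical instance of this is a binomial coefficient: the factorial ratio overflows a double long before the answer itself gets interesting, but in the log domain everything stays tame. A minimal sketch, again using Python's `math.lgamma` for illustration (the helper name `log_binomial` is my own):

```python
import math

def log_binomial(n, k):
    # log C(n, k) computed entirely in the log domain.
    # The direct route, factorial(n) / (factorial(k) * factorial(n - k))
    # in floating point, would overflow a double for even moderate n.
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

# C(10000, 5000) has thousands of digits; its log is an ordinary float.
print(log_binomial(10000, 5000))

# Cross-check against exact integer arithmetic for a small case.
assert round(math.exp(log_binomial(10, 3))) == math.comb(10, 3)
```

Note the error here is bounded in the log domain, exactly as the comment above describes: you get log C(n, k) to near machine precision, not C(n, k) itself.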
https://news.ycombinator.com/item?id=4796586
https://news.ycombinator.com/item?id=4915328
Also:
Refuting “The Mathematical Hacker”: https://news.ycombinator.com/item?id=4921953