This is something of a pet peeve, so I'll reply. The only stack trace entries TCO will eliminate are not just useless, they actually impede debugging. Say 'rf' is a recursive function. If it dumped core without TCO, you would see something like 'rf rf rf rf ...'. With TCO, you would see 'rf' once, just as if it had dumped core inside a loop. Which is, in effect, what TCO does: it translates recursive functions into functions that use loops, at the AST level. The result is code that's easier to debug (because it's not recursive at the stack-trace level), easier to understand (because it is recursive at the source level), and that performs better (because it's a loop at the machine-code level).
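Concretely, here's a minimal sketch of that translation in Python (the names are made up, and Python itself does not perform this rewrite):

def rf_recursive(n, acc=0):
    # Tail-recursive sum of 1..n: the recursive call is the very last thing done.
    if n == 0:
        return acc
    return rf_recursive(n - 1, acc + n)

def rf_loop(n, acc=0):
    # What a TCO'd rf_recursive amounts to: rebind the arguments and jump back,
    # i.e. a loop that reuses a single frame.
    while n != 0:
        n, acc = n - 1, acc + n
    return acc

assert rf_recursive(10) == rf_loop(10) == 55

The first is what you write, the second is what the machine effectively runs, and the single stack-trace entry reflects the second.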
So yeah, you could make the case that it's not "an accurate stack trace", because it isn't. But you don't want that. You want TCO. Trust me on this.
Preemptive strike for the bikeshedder crowd in the back :) : if you think you can deduce some useful fact from the stack trace of a recursive function, for example where in the loop it failed, you're wrong. Go learn to use a debugger.
Traceback (most recent call last):
  File "raise.py", line 10, in <module>
    foo()
  File "raise.py", line 2, in foo
    bar()
  File "raise.py", line 5, in bar
    baz()
  File "raise.py", line 8, in baz
    raise Exception('OH NOES')
Exception: OH NOES
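(For reference, a raise.py consistent with those line numbers would look roughly like this; the original source isn't shown, so this is a reconstruction:)

def foo():
    bar()                          # line 2: tail call

def bar():
    baz()                          # line 5: tail call

def baz():
    raise Exception('OH NOES')     # line 8

foo()                              # line 10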
If Python did TCO, you'd get:
Traceback (most recent call last):
  File "raise.py", line 8, in baz
    raise Exception('OH NOES')
Exception: OH NOES
Not wanting to lose helpful information like this is a valid cause for concern. I used to think Guido was wrong for not wanting TCO in Python, but I've yet to see anyone give a clear example of something it would let you express more easily than you already can with iterators, loops, yield, etc.
TCO is a fine feature in languages where it's deeply ingrained and part of their natural idioms. Bolting it on twenty years later doesn't seem that helpful to me.
Off the top of my head I can think of a doubtless unsophisticated way to recover the "stack" in this situation: the same way you recover the "stack" in the factorial example, with an accumulator hidden in the interpreter. When foo calls bar(), the interpreter passes an explicit list of predecessors along with the tail call (in this case ["foo()"]); when bar calls baz(), the invocation of baz receives ["bar()", "foo()"], and so on.
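In user-level Python the idea looks something like this (a sketch only; tail_call and trampoline are made-up names, and a real implementation would live inside the interpreter):

def tail_call(fn, *args, callers):
    # Represent "call fn(*args) in tail position, remembering who asked".
    return ("tail_call", fn, args, callers)

def trampoline(fn, *args):
    pending = fn(*args, callers=[])
    while isinstance(pending, tuple) and pending[:1] == ("tail_call",):
        _, fn, args, callers = pending
        pending = fn(*args, callers=callers)   # one frame, reused over and over
    return pending

def foo(callers):
    return tail_call(bar, callers=["foo()"] + callers)

def bar(callers):
    return tail_call(baz, callers=["bar()"] + callers)

def baz(callers):
    # The chain TCO would have thrown away is still available here:
    return "baz reached via %s" % callers

print(trampoline(foo))   # baz reached via ['bar()', 'foo()']

The bookkeeping grows with the length of the call chain, which is where the bounded buffers mentioned below come in.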
Some Schemes (like MIT Scheme) do this with a circular buffer of bounded capacity. Another Scheme interpreter (a little one by Jake Donham) had a switch to turn off TCO to help debugging. Yet another approach is full omniscient debugging (being able to go backward in time).
Yeah. But you could at least decide how much of the stack to keep. That said, I know some smart people (I just can't remember who!) have addressed this issue in a significantly subtler way; I wish I could find the reference.
Parent wasn't suggesting keeping full stack frames, just something like (file, function name, line number) tuples; that's not the end of the world. By making it a fixed-size list, you can ensure memory usage won't blow up.
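A sketch of that in user-level Python (the record_tail_call hook is hypothetical; a TCO-ing interpreter would invoke something like it each time it reuses a frame):

import collections
import sys

TAIL_CALL_HISTORY = collections.deque(maxlen=64)   # fixed size, so memory is bounded

def record_tail_call():
    frame = sys._getframe(1)                        # the frame about to be reused
    TAIL_CALL_HISTORY.append(
        (frame.f_code.co_filename, frame.f_code.co_name, frame.f_lineno))

def dump_history():
    # On an unhandled exception, print the last N eliminated "frames"
    # alongside the real (shortened) traceback.
    for filename, func, lineno in TAIL_CALL_HISTORY:
        print('  (tail call) File "%s", line %d, in %s' % (filename, lineno, func))

Old entries silently fall off the front of the deque, which is exactly the trade-off being proposed: an approximate history with a hard cap on memory.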