
How is this supposed to work given the JVM does not support tail calls?


In general speech, I can get behind the idea that you shouldn't correct someone if you understand what they mean. In programming, however, I think it's important to be pedantic.

You must mean tail call optimization/elimination here. Clearly, the JVM supports tail calls.


Plus, since in many (most?) cases it is relatively simple to perform tail call elimination with iteration instead of reusing the frame, static analysis can classify those cases and hygienically rewrite those functions.

Most attempts I've seen to exploit TCO either aren't optimizations or can be dealt with via another control mechanism. My suspicion is that it is largely a solution in search of a problem for the programmer attempting to employ the technique, much like using AOP to add logging information for every "enter" and "return" from a method/function.

Mind you, I am in no way dismissing tail-call elimination as an important tool. Just that some of its practitioners (such as whichever idiot wrote some of the barely-readable code in old programs of mine!) are a bit zealous in its use.


If you want to be pedantic, the JVM does not support tail (function) _calls_, but jumps, which may be the result of tail call optimization or elimination.

As far as I know, we still cannot efficiently turn mutually recursive calls into a chain of jumps, but I'll be glad to be corrected here.


I think there may be a misconception here: A tail call is NOT a call that doesn't allocate a stack frame. A tail call is a call whose result is directly the return value of a function. This is why we refer to the stack-frame-eliminating optimization as "tail call elimination". We are taking a jump-to-subroutine (call) instruction and replacing it with a normal jump instruction, thus eliminating a "call" from our program.
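To make the distinction concrete, here is a minimal Java sketch (the names `f`, `f2`, and `g` are hypothetical, chosen just for illustration):

```java
class TailDemo {
    static int g(int x) { return x * 2; }

    // Tail call: the call to g is the entire return expression,
    // so f's frame could in principle be reused for g.
    static int f(int x) { return g(x + 1); }

    // NOT a tail call: after g returns we still add 1,
    // so f2's frame must survive the call.
    static int f2(int x) { return g(x) + 1; }
}
```

Both methods call `g`, but only `f` does so in tail position; whether that call's frame is eliminated is a separate question from whether the call exists.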

The JVM most certainly does support general tail calls through its invoke instruction, which supports calls to arbitrary methods of arbitrary objects (which, of course, includes tail calls).

Some compilers also support the optimization of _some_ recursive tail calls (usually recursive tail calls to final or local methods) into loops using the goto instruction, which supports jumps within the current method (essentially optimizing the tail-recursive method into a non-recursive method containing a loop).
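As a sketch of what that optimization amounts to (method names hypothetical): a tail-recursive method, and the loop form a compiler can rewrite it into, where the recursive call becomes a backward jump within the same frame:

```java
class Factorial {
    // Tail-recursive: the recursive call is the whole return expression.
    static long factRec(long n, long acc) {
        if (n <= 1) return acc;
        return factRec(n - 1, acc * n);  // call in tail position
    }

    // What tail-call elimination effectively produces: the call is replaced
    // by a jump back to the top of the method, i.e. a loop reusing one frame.
    static long factLoop(long n) {
        long acc = 1;
        while (n > 1) {
            acc *= n;
            n -= 1;
        }
        return acc;
    }
}
```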


Thanks for the clarification.

If I read you correctly, you are saying the JVM 'supports tail calls' because it can call other methods. In that case, the phrase 'supports tail calls' is about as meaningful as 'supports function calls'.

In any case, I don't think this is a question of optimization. I don't see how, for example, an F# program written in CPS can ever run on the JVM.


Scheme implementations will compile any tail calls into jumps, even if they're mutually recursive. This is easy to understand by doing CPS transformation by hand on such calls.


I think everyone knows what he meant. In programming, everything supports everything anyhow, as long as it's Turing complete.


This isn't really fair: some implementations really do rule out certain optimizations. Turing-Completeness only refers to what is computable, not what actual algorithms and optimization techniques are possible.


And to be more specific, Turing completeness refers to what is computable with an unbounded amount of memory and time. This is why we have complexity theory layered on top of computability theory. Some things are theoretically computable but practically infeasible. Compiler optimizations can help greatly in some of those cases.


Maybe someone who knows Scala or Clojure internals can answer this question.


Scala doesn't optimize general tail calls, either (only direct tail recursion of final methods). If you want to optimize tail calls on the JVM, you must use trampolining, which adds some overhead to every call. So most languages choose fast calls without tail call elimination over slower calls with tail call elimination (i.e., speed over correctness).
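A minimal trampoline sketch in Java, assuming nothing beyond `java.util.function.Supplier` (the names `Bounce`, `done`, `more`, and `run` are illustrative, not from any library). Each "tail call" returns a thunk instead of calling directly, and a driver loop bounces until a result appears, so even deep mutual recursion runs in constant stack space:

```java
import java.util.function.Supplier;

final class Trampoline {
    // A step either holds a final result or the next thunk to evaluate.
    static final class Bounce<T> {
        final T result;
        final Supplier<Bounce<T>> next;
        Bounce(T result, Supplier<Bounce<T>> next) { this.result = result; this.next = next; }
        static <T> Bounce<T> done(T v) { return new Bounce<>(v, null); }
        static <T> Bounce<T> more(Supplier<Bounce<T>> s) { return new Bounce<>(null, s); }
    }

    // The driver loop: this is the "return, check, call" overhead on every step.
    static <T> T run(Bounce<T> b) {
        while (b.next != null) b = b.next.get();
        return b.result;
    }

    // Mutually recursive even/odd: direct recursion this deep would overflow
    // the stack, but trampolined it never grows the stack.
    static Bounce<Boolean> isEven(int n) {
        return n == 0 ? Bounce.done(true) : Bounce.more(() -> isOdd(n - 1));
    }
    static Bounce<Boolean> isOdd(int n) {
        return n == 0 ? Bounce.done(false) : Bounce.more(() -> isEven(n - 1));
    }
}
```

`Trampoline.run(Trampoline.isEven(100000))` completes without a `StackOverflowError`, at the cost of allocating a thunk per step.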


Support for tail calls pretty much depends on the actual runtime implementation.

If it is important to you (it certainly is to me), use an implementation which supports proper tail calls.

I'm doing it and I have never looked back.


Out of curiosity, which implementation are you using?



This is very cool. Thank you!


Someone has made TCO possible in Clojure already: https://github.com/cjfrisz/clojure-tco


That's very cool. But note that as far as I can tell it only handles mutual recursion, not arbitrary tail calls (e.g. calls to a function argument).


Clojure has a loop/recur construct which makes the tail recursion explicit, so it doesn't need to be done by the JVM. In other words, regular recursive calls should not be used for loops if you want performance.


Scala has a similar explicit @tailrec annotation for its compiler, too:

"A method annotation which verifies that the method will be compiled with tail call optimization. If it is present, the compiler will issue an error if the method cannot be optimized into a loop."

http://www.scala-lang.org/api/current/index.html#scala.annot...


I think tail recursion can be handled automatically by the compiler without any support from the JVM (Scala does it).

The issue is different for general tail call elimination.


Can't you lambda-lift & fully CPS-convert the code somewhere throughout compilation, then use a trampoline at runtime?


I'd say that's a huge performance hit (return, check, call instead of jump). Plus, your JIT might not like it.



