sabauma's comments

The cost of the Connect plans is what confuses me. For what you get, $5-$8 per month seems oddly expensive. I don't really use any of the features in the full Connect plan. The cloud syncing is convenient, and I'd be willing to throw some money their way for it, but $5/month is a lot for what amounts to 8GB of storage.


I've replaced my Kindle with a Remarkable 2 and am generally pleased with the overall experience. The larger screen is nice, as is the ability to read/annotate academic papers on the same device.

There are a bunch of little things that would improve the epub experience, like better navigation and bookmarks (I never really made use of these on my Kindle, though). The biggest missing feature IMO for a Kindle replacement is the lack of built-in lighting.


I find rmapi + fzf to be the easiest way to send ebooks in my Calibre library to my rM2.

It's probably possible to create a Calibre add-on to do this, but rmapi + fzf already makes syncing pretty easy.


This is indeed a consequence of tracing. The problem is that traces are associated with loops in the program, and since the map function contains only one loop, all traces for map are associated with the loop in its implementation.

When you manually write a loop, there is only one 'body', so a tracing JIT turns it into a single trace or a small number of traces.

For the loop inside of map, you need to produce side exits and new traces for each function passed in. The more times map is used, the slower it gets. This is made worse by the fact that traces out of side exits tend to not be optimized nearly as well.
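
To make that concrete, here's a sketch of a hand-rolled map in Python, imagining a tracing JIT like PyPy's underneath (illustrative only, not any specific VM's behavior):

    # A user-level map: every caller funnels through this single loop, so a
    # tracing JIT hangs all of its map-related traces off this one loop header.
    def my_map(f, xs):
        out = []
        for x in xs:
            out.append(f(x))    # the trace must guard on the identity of f
        return out

    my_map(lambda x: x + 1, range(1000))   # the first trace specializes on this f
    my_map(lambda x: x * 2, range(1000))   # guard failure -> side exit -> bridge trace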

Pycket has this same issue with its builtin (or hand-written) looping _functions_ like map. Fortunately, Racket provides many useful looping macros, which allow Pycket to generate a unique trace for each loop, ameliorating this problem somewhat.


> For the loop inside of map, you need to produce side exits and new traces for each function passed in.

Is it true to say that if two functions have the same body and arglist, and capture no variables, then you should be able to reuse the same traces?

This doesn’t happen very often in real code, but it’s useful to understand the problem.


In theory, if you call map again with an identical function, it should still fall inside the original trace. Tracing JITs effectively inline all function calls that happen inside the trace, so if two functions contain the same body they would be indistinguishable.

But there are all sorts of reasons why this might not be happening in that bug report I linked to. It might be due to the definition of the map and reduce functions (luafun adds lots of features to them, so it's not just a simple, straightforward loop), or it could be due to something about LuaJIT (it is a complex piece of software, after all).

The only way to know for sure would be to examine the traces with "luajit -jdump".


I think in the case of a JITed function in a dynamic language, whether the body is the "same" or not depends on how the interface used by the function is monomorphized, which in turn depends on the trace.


I haven't checked in on the development of Pycket in a while, but much of the recent work has gone into supporting linklets. Last I checked, the performance story for the Scheme and Shootout benchmarks used in the paper hadn't changed much.

There are some benchmarks which evaluate Pycket's performance when applied to gradual typing [1]. Pycket does a pretty good job of reducing the overheads of gradual typing (at least for Racket's implementation strategy), though it by no means eliminates the performance costs.

[1] https://dl.acm.org/citation.cfm?id=3133878


There has been some discussion of Meltdown and Spectre on the Mill forums:

https://millcomputing.com/topic/meltdown-and-spectre/


The REPL essentially operates at the global scope, which is represented as a dictionary. Variables local to a function are not stored in a dictionary, however:

    def main():
        a = 1
        d = locals()          # snapshot of the local scope as a dict
        print d
        d['b'] = 123          # writing to the snapshot...
        print b               # ...does not create a local variable b
        print d
        print d is locals()

    main()
Which prints

    {'a': 1}
    Traceback (most recent call last):
      File "test.py", line 23, in <module>
        main()
      File "test.py", line 19, in main
        print b
    NameError: global name 'b' is not defined


That's because the compiler doesn't know the locals() dictionary will be modified at runtime, so it treats b as a global variable (which happens to be undefined).
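
You can see the compiler's decision directly with the standard dis module (a quick sketch):

    import dis

    def main():
        a = 1
        d = locals()
        print b      # never assigned in main, so compiled as a global load

    dis.dis(main)    # the disassembly shows LOAD_GLOBAL for b,
                     # but STORE_FAST for the locals a and d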

To show that not even changes to existing local variables work, try

  def main():
      a = 1
      d = locals()
      print d
      b = None
      d['b'] = 123
      print b
      print d
      print locals()
      print d is locals()
which prints

  {'a': 1}
  None
  {'a': 1, 'b': 123}
  {'a': 1, 'b': None, 'd': {...}}
  True
I.e., changes to the dictionary returned by locals() are ignored, and calling locals() again overwrites any values you stored in it.
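
For contrast, at module scope locals() and globals() are the same dictionary, so writes through it do take effect. A minimal sketch (CPython behavior):

    d = locals()              # at module scope this is the real namespace
    d['b'] = 123
    print b                   # 123: the write created a module-level variable
    print d is globals()      # True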


Chez is pretty tough to compete with in terms of Scheme performance. On the benchmarks presented in the paper, Chez is on average faster than all the systems presented, even Pycket post-warmup. It's not the fastest on every benchmark, but it gives consistently good performance without a huge warmup time, which is Pycket's biggest weakness. With Racket planning to switch over to Chez, it may be difficult to justify Pycket's existence, especially if some of Chez's known performance sore spots receive attention.


Is Chez AOT compiled or JIT based? If it is AOT, would the REPL still be interpreted, with only finished programs AOT compiled?


Chez is an AOT compiler. I am not sure how the REPL operates, but I believe it just compiles expressions on the fly before executing them. I suppose you could characterize that as JIT compilation, but the optimizer does not make use of any runtime profiling information. Racket's "JIT" is similar in that code is generated for a function when it is first called, but the optimizer runs over the program's bytecode before anything is executed.
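
A rough sketch of that compile-then-execute model, in Python rather than Scheme (purely illustrative, nothing to do with Chez's internals):

    env = {}
    while True:
        try:
            src = raw_input('>> ')
        except EOFError:
            break
        # each form is compiled to bytecode immediately before execution;
        # no runtime profiling information feeds back into the compiler
        code = compile(src, '<repl>', 'single')
        exec code in env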


In Common Lisp it's fairly common for eval to compile the code and then execute it. For example, SBCL's REPL usually compiles the entered expression and then executes it, although recent versions also provide an interpreted mode.


> I suppose you could characterize that as JIT compilation, but the optimizer does not make use of any runtime profiling information.

That is JIT compilation. AFAIK James Gosling started using the phrase in the 1990s to make it sound novel, ignoring the fact that this is how Lisp and Smalltalk systems have always worked. This is really confusing today because a lot of people have started using "JIT" as a synonym for dynamic recompilation techniques like tracing.


Ah. So if Racket uses the Chez compiler, it could use it for the REPL as well.


Probably because many tail-recursive functions _rely_ on tail-call elimination working reliably. Without an unbounded call stack, disabling tail-call elimination will likely just cause those programs to crash.


The notion that not having proper tail calls aids debugging always seemed like a post-hoc justification. The stack trace of an iterative function will lack exactly the same intermediate evaluation frames as a tail-recursive implementation.


The thing is, tail calls aren't _just_ about emulating iteration via recursion:

  def foo():
      raise ValueError

  def bar():
      return foo()

  bar()
With TCO, the stack trace would contain only the top level and `foo`, as `bar`'s frame would be overwritten by `foo`'s. This example is simple, but `bar` could be a 50-line if-else chain of tail calls, and when debugging you won't necessarily know which branch was taken.


> The thing is, tail calls aren't _just_ about emulating iteration via recursion:

I completely agree, but there is also no need to perform TCO to make code like this safely runnable. TCO only becomes necessary/useful when implementing an iterative process where we can't statically know that the call stack won't be exhausted. That said, TCO is usually an all-or-nothing transformation, and it would be difficult to reliably exempt trivial tail calls like the one in your example.

A reasonable compromise might be for the Python VM to implement a TAIL_CALL bytecode op and require the programmer to decorate functions which rely on TCO. This wouldn't be any more onerous than manually trampolining large portions of code, which is the current method of getting around the lack of TCO.


A decorator that enables TCO makes sense to me. Kind of like the Numba project, it'd be a specialized JIT compiler invoked only on some functions.

What's stopping that from being a 3rd-party library like Numba?


You can find simple decorators which try to provide space-efficient tail recursion. Usually they work by trampolining the function; I've seen one example where a decorator rewrites the bytecode to perform a goto in the case of self recursion. The problem is that all of these solutions are rather limited, easy to break, or carry a pretty high runtime overhead. The general solution would be for a decorator to rewrite all CALL opcodes in tail position into TAIL_CALL opcodes, but such an opcode currently does not exist. The actual implementation of a TAIL_CALL opcode would be almost identical to the CALL opcode, so adding it would probably be straightforward, but I'm speculating here.
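
For illustration, here's a minimal trampolining decorator of the kind those libraries provide (my own sketch, not any particular library; it only handles self tail calls):

    import functools

    class _Bounce(object):
        def __init__(self, args, kwargs):
            self.args, self.kwargs = args, kwargs

    def trampoline(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            if wrapper._running:              # recursive call from inside f:
                return _Bounce(args, kwargs)  # return a marker instead of recursing
            wrapper._running = True
            try:
                result = f(*args, **kwargs)
                while isinstance(result, _Bounce):   # re-enter f in a loop,
                    result = f(*result.args, **result.kwargs)  # keeping the stack flat
                return result
            finally:
                wrapper._running = False
        wrapper._running = False
        return wrapper

    @trampoline
    def countdown(n):
        if n == 0:
            return 'done'
        return countdown(n - 1)   # a genuine tail call, so the marker trick works

    print countdown(100000)       # completes without exhausting the call stack

The marker object and isinstance check on every call are exactly the kind of runtime overhead mentioned above, and a recursive call that isn't actually in tail position silently receives a _Bounce object as its result, which is how these decorators end up "easy to break".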

