
As far as readability goes, this might as well have been written in machine language.


Except that a machine language program would be thousands if not millions of times as long.

Readability is relative. Java is unreadable to my grandmother. Turkish is unreadable to me. In other words, the opinion of someone who hasn't learned a language regarding its readability is irrelevant.

It is true that the way you read a language like this is different from the way you read Java. You don't expect to grok a line of code atomically, then move to the next line. It's more like reading math: you work with each line, understanding what makes it tick. No doubt that doesn't appeal to everyone. But the APL/K/J people do amazing things.

The ultimate answer as to whether or not this is just line noise? http://kx.com/Customers/end-user-customers.php


> Readability is relative. Java is unreadable to my grandmother. Turkish is unreadable to me. In other words, the opinion of someone who hasn't learned a language regarding its readability is irrelevant.

What a gem!

I should start a collection of those; every now and then you get these things here that you can just frame and look at three times a day.


I'm just glad somebody got my point. It seems so obvious, yet it must not be, because the contrary is much more common. It especially bears repeating in the context of a largely undiscovered gem like K.


> Except that a machine language program would be thousands if not millions of times as long.

Oi, I seriously doubt that. Seriously, millions? It's all math. Take a couple of assembler instructions per operation. I bet there are 256-byte assembler demos out there...


You're probably right. I was thinking maybe millions of 0s and 1s (if those count as tokens at the lowest level) but that's getting silly.


Breaking it into many lines and renaming the variables would go a long way toward making this code look more like C.
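For instance (a rough Python stand-in rather than K, and the computation and names are invented, not from the article):

```python
# A deliberately dense one-liner: root-mean-square of a list.
rms = lambda x: (sum(v * v for v in x) / len(x)) ** 0.5

# The same computation, broken into lines with descriptive names --
# the kind of transformation suggested above.
def root_mean_square(values):
    squares = [v * v for v in values]
    mean_of_squares = sum(squares) / len(squares)
    return mean_of_squares ** 0.5

print(rms([3.0, 4.0]))                # both print the same value
print(root_mean_square([3.0, 4.0]))
```

Nothing about the semantics changes; only the reader's entry points do.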


For a transformation in the opposite direction, here's a first-draft interpreter for J (a related language) in J-ish C.

http://www.jsoftware.com/jwiki/Essays/Incunabulum

It segfaults because it's written in rather non-portable C that assumes sizeof(function pointer) == sizeof(int), among other things. It's an interesting piece of code, though.
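That assumption is easy to check on your own machine; here's a quick Python probe via ctypes (nothing to do with the interpreter itself, just the sizes involved):

```python
import ctypes

# The old assumption: a (function) pointer fits in an int.
# On typical 64-bit platforms today pointers are 8 bytes and ints are 4,
# so stuffing a pointer into an int truncates it -- hence the segfault.
int_size = ctypes.sizeof(ctypes.c_int)
ptr_size = ctypes.sizeof(ctypes.c_void_p)
print(int_size, ptr_size)  # e.g. 4 8 on a typical 64-bit system
```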

I don't completely understand it - I'm confused by the concept of rank in J, and haven't worked with the language enough to get over that hump. (IIRC, K avoids "rank" entirely, and just uses either an array or an array-of-arrays for multiple dimensions.) I've only been a tourist in the APL family of languages, but what I've seen has left me quite impressed.
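For whatever it's worth, here's roughly how I'd gloss rank in Python terms. This is a loose sketch of the idea, not real J semantics (the helper and its treatment of nested lists are my own approximation): a verb of rank r applies to the rank-r cells of its argument, and the interpreter maps it over everything above that.

```python
def apply_at_rank(f, x, rank, depth=None):
    """Loose sketch of J's rank concept: apply f to each sub-array
    whose nesting depth ("rank") equals `rank`, mapping over the rest."""
    def nesting(a):
        return 1 + nesting(a[0]) if isinstance(a, list) else 0
    if depth is None:
        depth = nesting(x)
    if depth <= rank:
        return f(x)  # x is a cell of the requested rank: apply f directly
    return [apply_at_rank(f, row, rank, depth - 1) for row in x]

table = [[1, 2, 3], [4, 5, 6]]                    # a rank-2 array (2x3)
print(apply_at_rank(sum, table, 1))               # per row:  [6, 15]
print(apply_at_rank(lambda v: v + 10, table, 0))  # per atom: [[11, 12, 13], [14, 15, 16]]
```

The same function changes meaning purely by the rank you apply it at, which is most of the trick.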


Please take this not as an insult but as an invitation to offer help.

Rank shouldn't be something to get confused about; if it is you're probably overthinking it.

What did you find confusing about it?


I found the way that the linked interpreter above implements dimensions rather confusing, mostly.

Thanks for the offer, but I didn't get that far because I decided I was better off focusing on Erlang for the time being. I'm very curious about that language family; I'm just trying not to spread myself too thin. When I get back to it, I'll try working through the J labs - it's probably a much better way to pick it up than untangling a semi-obfuscated interpreter.


Yeah, I've not studied that interpreter, so if that's your specific interest, no luck, sadly; rank is an easy idea, but it wouldn't surprise me if there's some trick to it in an implementation that short.

I think it'd be easier to untangle if you already had a working knowledge of the language, but given the line-noisy look of J, I can't blame you for trying the shortcut.

The interactive J labs are very good. You might also find the "J for C Programmers" discussion of rank particularly helpful if you're looking at learning by understanding an implementation.

I figure the array languages will make a minor comeback soon, what with GPUs / Larrabee / etc. showing up on the horizon and more versatile input devices appearing (making it easier to go back to funky symbols instead of line noise).


> I figure the array languages will make a minor comeback soon, what with GPUs / Larrabee / etc. showing up on the horizon

What specifically do you see as the fit between GPUs and array languages?


The fit, IMHO, is that at least if you stick to numerics, you'd have to try pretty hard not to be writing code in a way that could be easily translated into something runnable on a GPU (or Larrabee).

This isn't really a new idea: http://portal.acm.org/citation.cfm?id=579.357248&coll=GU...

...(and GPUs / Larrabee etc. aren't solely vector processors, but the idea is apparent).

Most of the bulk numeric operations in an array language map pretty nicely onto the data-parallel approach you need to take advantage of a GPU or Larrabee (if it ever shows up); in particular, take a look through this:

http://www.ncsa.illinois.edu/~gshi/LRBni_cheatsheet.pdf

...and see how much more straightforward it'd be to take advantage of (compared to SSE and so on). Your interpreter has to be a little more sophisticated (work has to be kept in units of 512 bytes), but it seems much more tractable than previously.

Since this isn't a new idea, there's history to learn from: it was previously the case that you'd get a speedup from offloading work to the vector units, but not really a cost-proportionate one. Now, if you look at the performance differential between CPUs and GPUs and their relative costs, it starts making sense again.
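To make the fit concrete, here's a toy Python sketch (made-up chunk size, threads standing in for GPU lanes): a whole-array operation carries no prescribed loop order, so an interpreter is free to split it into independent chunks and run them in any order, or in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4  # made-up chunk size, standing in for a hardware vector width

def vector_add(a, b):
    """Whole-array add, processed chunk by chunk. Each chunk is
    independent, so the chunks could run on separate lanes/cores."""
    chunks = [(a[i:i + CHUNK], b[i:i + CHUNK]) for i in range(0, len(a), CHUNK)]
    with ThreadPoolExecutor() as pool:
        parts = pool.map(lambda ab: [x + y for x, y in zip(*ab)], chunks)
    return [v for part in parts for v in part]

print(vector_add(list(range(10)), list(range(10))))  # [0, 2, 4, ..., 18]
```

A scalar-loop language has to prove this independence; an array language gets it for free from the primitive itself.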


You say that like it's a good thing.



