Except that a machine language program would be thousands if not millions of times as long.
Readability is relative. Java is unreadable to my grandmother. Turkish is unreadable to me. In other words, the opinion of someone who hasn't learned a language regarding its readability is irrelevant.
It is true that the way you read a language like this is different from the way you read Java. You don't expect to grok a line of code atomically, then move to the next line. It's more like reading math: you work with each line, understanding what makes it tick. No doubt that doesn't appeal to everyone. But the APL/K/J people do amazing things.
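To give a flavor of that line-at-a-time unpacking (a toy example of my own, not anything from the thread): a K expression like `+/2*!10` reads right to left as "enumerate 0..9, double each, fold addition over the result." Spelled out in Python, the same computation becomes three visible stages:

```python
# Unpacking the K one-liner "+/2*!10" stage by stage (my own toy example):
#   !10  -> enumerate 0 1 2 ... 9
#   2*   -> double each element
#   +/   -> fold addition over the result
values = list(range(10))            # !10
doubled = [2 * x for x in values]   # 2*
total = sum(doubled)                # +/
print(total)  # 90
```

The point is that one dense line carries what would otherwise be a loop or two, which is why you sit with each line instead of skimming.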
> Readability is relative. Java is unreadable to my grandmother. Turkish is unreadable to me. In other words, the opinion of someone who hasn't learned a language regarding its readability is irrelevant.
What a gem!
I should start a collection of those; every now and then you get these things you can just frame and look at three times a day here.
I'm just glad somebody got my point. It seems so obvious, yet it must not be, because the contrary is much more common. It especially bears repeating in the context of a largely undiscovered gem like K.
> Except that a machine language program would be thousands if not millions of times as long.
Oi, I seriously doubt that. Seriously, millions? It's all math. Take a couple of assembler instructions per operation. I bet there are 256-byte assembler demos out there...
It segfaults, because it's written in rather non-portable C that assumes sizeof(function pointer) == sizeof(int), among other things. It's an interesting piece of code, though.
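For anyone wondering why that assumption bites on modern machines, here's a quick sketch (an illustration via ctypes, not the interpreter's actual code): on typical 64-bit platforms `int` is 4 bytes while pointers are 8, so stuffing a pointer into an `int` silently truncates the address, and calling through the mangled "pointer" is a likely segfault.

```python
# Why sizeof(pointer) == sizeof(int) is non-portable (illustration only).
import ctypes

int_size = ctypes.sizeof(ctypes.c_int)     # typically 4 bytes
ptr_size = ctypes.sizeof(ctypes.c_void_p)  # typically 8 bytes on 64-bit systems

# Storing a 64-bit address in a 32-bit int keeps only the low bits;
# the truncated value no longer points anywhere valid.
addr = 0x00007F1234567890                  # a plausible 64-bit address
truncated = addr & ((1 << (int_size * 8)) - 1)
print(int_size, ptr_size, hex(truncated))
```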
I don't completely understand it - I'm confused by the concept of rank in J, and haven't worked with the language enough to get over that hump. (IIRC, K avoids "rank" entirely, and just uses either an array or an array-of-arrays for multiple dimensions.) I've only been a tourist in the APL family of languages, but what I've seen has left me quite impressed.
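As I loosely understand the distinction (toy sketch, not either language's internals): J keeps one flat buffer plus a shape, and a verb's "rank" picks which cells of that shape it operates on; K just nests lists, so "per row" falls out of the data structure itself.

```python
# J-style: a flat buffer plus a shape; "rank" selects the cells a verb sees.
# K-style: plain nested lists, no separate shape or rank concept.
flat = [1, 2, 3, 4, 5, 6]
shape = (2, 3)

# The same data reconstituted as a K-style array-of-arrays:
rows = [flat[i * shape[1]:(i + 1) * shape[1]] for i in range(shape[0])]
# rows == [[1, 2, 3], [4, 5, 6]]

# Summing per row (roughly what J does when you apply sum at rank 1)
# versus summing the whole thing:
row_sums = [sum(r) for r in rows]   # [6, 15]
grand_total = sum(flat)             # 21
print(rows, row_sums, grand_total)
```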
Mostly, I found the way the linked interpreter above implements dimensions rather confusing.
Thanks for the offer, but I didn't get that far because I decided I was better off focusing on Erlang for the time being. I'm very curious about that language family, I'm just trying to not spread myself too thin. When I get back to it, I'll try working through the J labs - it's probably a much better way to pick it up than untangling a semi-obfuscated interpreter.
Yeah, I've not studied that interpreter, so if that's your specific interest, no luck, sadly. Rank is an easy idea, but it wouldn't surprise me if there's some trick to it in an implementation that short.
I think it'd be easier to untangle if you already had a working knowledge of the language, but given the line-noisy look of J, I can't blame you for trying the shortcut.
The interactive J labs are very good. You might also find the "J for C Programmers" discussion of rank particularly helpful if you're looking to learn by understanding an implementation.
I figure the array languages will make a minor comeback soon, what with GPUs / Larrabee / etc. showing up on the horizon, and with more versatile input devices appearing (making it easier to go back to funky symbols instead of line noise).
The fit, imho, is that if you stick to numerics, you'd have to try pretty hard not to write code that could easily be translated into something runnable on a GPU (or Larrabee).
...(and GPUs / Larrabee etc. aren't solely vector processors, but the idea is apparent).
Most of the bulk numeric operations in an array language map pretty nicely to the data-parallel approach you need to take advantage of a GPU or Larrabee (if it ever ships); in particular, take a look through this:
...and see how much more straightforward it'd be to take advantage of, compared to SSE and so on. Your interpreter has to be a little more sophisticated (work has to be kept in units of 512 bytes), but it seems much more tractable than before.
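A toy sketch of why the mapping is so direct (my own illustration, not any interpreter's code): in an element-wise array primitive, every output element depends only on its own inputs, so the implicit loop is embarrassingly parallel, and a data-parallel backend can hand each index to its own thread.

```python
# An element-wise array op is embarrassingly parallel: each output
# element depends only on its own inputs. (Toy sketch; a real backend
# would batch work to the hardware's vector width.)
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]

# The serial loop an interpreter might run on a CPU:
out = [x + y for x, y in zip(a, b)]

# The same work expressed as independent per-index tasks -- the shape
# a GPU-style backend wants, one "thread" per element:
def kernel(i):
    return a[i] + b[i]

out_parallel = [kernel(i) for i in range(len(a))]
assert out == out_parallel
print(out)
```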
Since this isn't a new idea, there's history to learn from: it was previously the case that you'd get a speedup from offloading work to the vector units, but not really a cost-proportionate one. Now, though, if you look at the performance differential between CPUs and GPUs and their relative costs, it starts making sense again.