Async IO isn't chosen because a language isn't fast enough; it's chosen because IO isn't fast enough. Async IO would be useful even if you handwrote everything in the best, fastest assembler possible, because in the time it takes to go out and read some data (even on an SSD) you can compute many, many more things. The purpose of async IO is to permit the interleaving of fast and slow activities. If I were still doing scientific computing (it's been a while), I'd want async IO (some implementation of it) even if I had the best GPU for my number-crunching bits. Loading the model takes a lot of time, and there are other activities the program can engage in during that delay.
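To make that concrete, here's a minimal sketch of the idea in Python's asyncio; `load_model` and `crunch_numbers` are hypothetical stand-ins, with a sleep standing in for the actual disk read:

```python
import asyncio
import time

async def load_model(path):
    # Hypothetical slow load; real code might use asyncio.to_thread
    # to run the blocking file read off the event loop.
    await asyncio.sleep(2)          # stand-in for disk latency
    return f"model from {path}"

async def crunch_numbers(chunks):
    total = 0
    for _ in range(chunks):
        total += sum(n * n for n in range(100_000))
        await asyncio.sleep(0)      # yield so the pending load can progress
    return total

async def main():
    start = time.perf_counter()
    # Start the slow load and compute while it's in flight.
    model, total = await asyncio.gather(load_model("model.bin"),
                                        crunch_numbers(50))
    print(model, total, f"{time.perf_counter() - start:.1f}s elapsed")

asyncio.run(main())
```

The wall-clock time comes out near max(load time, compute time) rather than their sum, which is the whole point of the interleaving.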
CL's lack of a standard way to do async IO (in this particular instance) means that even if you have a very fast program, you end up with either a bottleneck waiting for IO or a recreation of the async IO model from other languages (or their standard libraries, or their de facto standard libraries) that you are now responsible for.
> The purpose of async IO is to permit the interleaving of fast and slow activities.
You absolutely do not need async IO for that. You can do that with a synchronous programming model just fine, using threads and letting the OS do its thing.
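A minimal sketch of the same interleaving with a plain OS thread, assuming the same hypothetical model load (blocking IO, including the `time.sleep` here, releases CPython's GIL, so the main thread keeps computing while the loader waits):

```python
import threading
import time

def load_model(path, out):
    # Blocking read; the OS parks this thread while the IO is pending.
    time.sleep(2)                   # stand-in for disk latency
    out["model"] = f"model from {path}"

result = {}
loader = threading.Thread(target=load_model, args=("model.bin", result))
loader.start()

# The slow load is in flight; crunch numbers on the main thread meanwhile.
total = sum(n * n for n in range(5_000_000))

loader.join()                       # blocks only if the load isn't done yet
print(result["model"], total)
```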
Async is an absolutely horrible programming model. There are really only two advantages it brings: it lets you avoid the memory and context-switching overhead of threads if you need to handle a very large number of requests concurrently, and it lets you do concurrency without worrying about synchronizing access to shared resources (but there are much better ways to do that).
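To illustrate that second advantage: in a single-threaded event loop a coroutine can't be preempted between awaits, so a read-modify-write on shared state needs no lock, whereas the threaded version does. A sketch, not a benchmark:

```python
import asyncio
import threading

counter = 0

async def bump_async():
    global counter
    # No await between the read and the write, so no other coroutine
    # can interleave here; no lock needed.
    counter += 1

lock = threading.Lock()

def bump_threaded():
    global counter
    # A thread can be preempted mid-update, so shared state needs a lock.
    with lock:
        counter += 1
```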
> You absolutely do not need async IO for that. You can do that with a synchronous programming model just fine, using threads and letting the OS do its thing.
"just fine" is highly relative here and is doing most of the work in your statement. Do you have any conditions under which this is "just fine"? Any more detail? Or else it's just a personal preference.
A preference? The term was mostly meant to convey that it can be done without problems, contradicting the assertion that async IO "permits" the interleaving of fast and slow activities.
Synchronous models have been used for that far longer and more often than asynchronous ones. CGI scripts, Perl, Java servlets all did synchronous IO while also interleaving slow and fast activities for decades before Node made async IO fashionable.
And in my second paragraph, I specifically mentioned the conditions where async has advantages.
> Synchronous models have been used for that far longer and more often than asynchronous ones. CGI scripts, Perl, Java servlets all did synchronous IO while also interleaving slow and fast activities for decades before Node made async IO fashionable.
Node didn't make async IO fashionable, as much as you seem to want to beat the drum of "new/hype" vs "old/Lindy/mature". Synchronous IO was slow, memory-inefficient (to spawn multiple threads), and didn't scale well. Slow enough that the C10K [1] problem was framed to capture the issues. Event loops in net servers were mostly popularized by nginx [2] to explicitly solve the C10K problem, which is what started driving folks to use event-loop async programming. Moreover, event loops had been popular for years in GUI programming before multiple cores simply because CPU-level parallelism just _was not possible_ (well, except for ILP, which is a bit different). For example, Tcl/Tk had an event loop driving GUI display logic for ages [3], which really is the same problem. Instead of waiting on NIC events, we were waiting on keyboard/mouse events.
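For reference, the event-loop shape being described is roughly this (a toy sketch using Python's selectors module, not how nginx is actually structured internally):

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server):
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(1024)
    if data:
        conn.sendall(data)   # fine for a toy echo; real servers buffer writes
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 8080))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    # One thread blocks until the OS says a socket is ready, then runs
    # the matching callback. Swap the sockets for keyboard/mouse events
    # and the same loop shape drives a GUI.
    for key, _ in sel.select():
        key.data(key.fileobj)
```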
Just because it's old doesn't mean it's good. There are lots of old bad things and lots of old good things, just like there are lots of new bad things and lots of new good things.
As far as I can tell, Node made it fashionable to use the async programming model for the entire application in server code. I'm sure it was done before, but not nearly as commonly, especially not to the point that people, like the commenter I originally replied to, consider it a prerequisite to "permit the interleaving of fast and slow activities", even for scientific computing.
> Synchronous IO was slow,
Not fundamentally, though; it was slow because of implementation details like bad thread-scheduling algorithms.
> memory-inefficient (to spawn multiple threads), and didn't scale well
Yes, and I explicitly acknowledged that in my original comment. But it's efficient and scalable enough for most applications (which don't have millions of users) and is still very widely used, and people have found ways to make it more memory efficient as well.
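One of those ways, sketched: keep the synchronous, blocking style but cap the thread count with a fixed-size pool instead of spawning a thread per request (the URL here is just a placeholder):

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

def fetch(url):
    # Plain blocking IO; no callbacks, no awaits.
    with urllib.request.urlopen(url) as resp:
        return url, len(resp.read())

urls = ["https://example.com/"] * 100   # placeholder workload

# 16 threads' worth of stacks instead of 100, same synchronous code.
with ThreadPoolExecutor(max_workers=16) as pool:
    for url, size in pool.map(fetch, urls):
        print(url, size)
```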
> Event loops in net servers were mostly popularized by nginx [2] to explicitly solve the C10K problem, which is what started driving folks to use event-loop async programming.
So nginx came out in 2004 and Node in 2009. Did the five years in between really see a lot of people writing asynchronous application code? In what language? It's quite possible that was a development I missed at the time.
> Moreover, event loops had been popular for years in GUI programming before multiple cores simply because CPU-level parallelism just _was not possible_ (well, except for ILP, which is a bit different). For example, Tcl/Tk had an event loop driving GUI display logic for ages [3], which really is the same problem. Instead of waiting on NIC events, we were waiting on keyboard/mouse events.
Yes, event loops are the standard in GUI programming almost everywhere. Java Swing did that too - but with an otherwise entirely synchronous multithreaded programming model.
But the reason for that is most definitely not that "CPU-level parallelism just was not possible", nor has it anything to do with "waiting on events" necessitating an event loop as your entire programming model. The reason is that multithreaded GUI toolkits have been repeatedly tried (all the way back to Xerox PARC, long before multicore hardware) and found to lead to insurmountable issues with deadlocks: https://web.archive.org/web/20160402195655/https://community...
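That Swing-style split can be sketched with Tkinter (Python's Tcl/Tk binding, fittingly): the event loop owns the widgets, an ordinary thread does the blocking work, and a queue hands the result back, much like SwingUtilities.invokeLater:

```python
import queue
import threading
import time
import tkinter as tk

root = tk.Tk()
label = tk.Label(root, text="loading...")
label.pack(padx=40, pady=20)

results = queue.Queue()

def slow_fetch():
    time.sleep(2)                  # stand-in for blocking IO
    results.put("done")            # never touch widgets from this thread

def poll():
    # Runs on the event-loop thread, so updating the widget is safe.
    try:
        label.config(text=results.get_nowait())
    except queue.Empty:
        root.after(100, poll)      # check again on the next tick

threading.Thread(target=slow_fetch, daemon=True).start()
root.after(100, poll)
root.mainloop()
```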