Sorry, but you’ve been fooled by the web and functional programmers’ propaganda against caring about performance.
If that were true, then there wouldn’t be database clients that are 100% faster than competitors’ clients, and web servers that are several hundred percent faster than slower ones, don’t you think?
“IO, therefore slow” is a “myth”. IO is slow, but the 1,000-millisecond request times from the server you’re sitting right beside are not slow because of IO.
When experts say “IO is slow”, they’re not saying that you should therefore disregard all semblance of performance for anything touching IO. They’re saying that you should batch your IO, so that instead of doing hundreds of IO calls, you do one.
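As a rough illustration of what that batching looks like (a minimal sketch, not any real client’s code: the server address and payloads below are made up, and the point is the syscall count, not the protocol), here it is in Go. Writing each small message straight to the socket costs one write() syscall per message; buffering in userspace and flushing turns hundreds of syscalls into a handful.

```go
// A minimal sketch, not any real client's code: the server address and
// payloads are hypothetical. The point is the syscall count, not the protocol.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"net"
)

// sendUnbatched issues one write() syscall per message.
func sendUnbatched(conn net.Conn, msgs [][]byte) error {
	for _, m := range msgs {
		if _, err := conn.Write(m); err != nil {
			return err
		}
	}
	return nil
}

// sendBatched accumulates messages in a userspace buffer and flushes them
// out in a handful of large write() syscalls instead of hundreds of small ones.
func sendBatched(conn net.Conn, msgs [][]byte) error {
	w := bufio.NewWriterSize(conn, 64*1024)
	for _, m := range msgs {
		if _, err := w.Write(m); err != nil {
			return err
		}
	}
	return w.Flush()
}

func main() {
	conn, err := net.Dial("tcp", "127.0.0.1:9000") // hypothetical server
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	msgs := make([][]byte, 500)
	for i := range msgs {
		msgs[i] = bytes.Repeat([]byte("x"), 64) // many small payloads
	}
	if err := sendBatched(conn, msgs); err != nil {
		fmt.Println("send failed:", err)
	}
}
```

That’s the sense in which “batch your IO” is performance advice, not permission to ignore the client’s own overhead.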
Also, what the author seems to mean is that this is faster than specific Python, JS, and Go clients.
Anyway. At the IO boundary there are a lot of syscalls (polls and reads especially, plus the allocations behind them), and how you manage those syscalls can be a major performance sticking point. Serialization and deserialization have their own pitfalls as well (do you read raw offsets directly, or hop through vtable lookups one field at a time?). And how you choose to move memory around when dealing with the messages impacts performance greatly (e.g., creating and destroying lots of objects in hot loops rather than reusing them).
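To make the object-reuse point concrete, here is a hypothetical Go read loop (the length-prefixed framing, the Message type, and the handler are invented for the sketch, not taken from any real client): one version allocates a fresh buffer and a fresh object per message, the other reuses them so the steady state allocates nothing per frame.

```go
// A hypothetical sketch: the length-prefixed framing, Message type, and handler
// are invented here, not taken from any real client. The contrast is a fresh
// allocation per message versus reusing one buffer and one object.
package main

import (
	"bufio"
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// Message stands in for whatever a client decodes off the wire.
type Message struct {
	Payload []byte
}

// readLoopNaive allocates a new buffer and a new Message for every frame,
// which keeps the allocator and garbage collector busy in the hot path.
func readLoopNaive(r *bufio.Reader, handle func(*Message)) error {
	for {
		var n uint32
		if err := binary.Read(r, binary.BigEndian, &n); err != nil {
			return err
		}
		buf := make([]byte, n) // new allocation per message
		if _, err := io.ReadFull(r, buf); err != nil {
			return err
		}
		handle(&Message{Payload: buf}) // new object per message
	}
}

// readLoopReusing grows one buffer as needed and reuses it, along with a single
// Message value, so the steady state allocates nothing per frame.
func readLoopReusing(r *bufio.Reader, handle func(*Message)) error {
	var buf []byte
	var msg Message
	for {
		var n uint32
		if err := binary.Read(r, binary.BigEndian, &n); err != nil {
			return err
		}
		if cap(buf) < int(n) {
			buf = make([]byte, n)
		}
		buf = buf[:n]
		if _, err := io.ReadFull(r, buf); err != nil {
			return err
		}
		msg.Payload = buf
		handle(&msg) // the handler must not retain msg beyond the call
	}
}

func main() {
	// Feed one framed message from memory so the loop runs without a server.
	var frame bytes.Buffer
	payload := []byte("hello")
	binary.Write(&frame, binary.BigEndian, uint32(len(payload)))
	frame.Write(payload)

	err := readLoopReusing(bufio.NewReader(&frame), func(m *Message) {
		fmt.Printf("got %d bytes\n", len(m.Payload))
	})
	if err != io.EOF {
		fmt.Println("read loop error:", err)
	}
}
```

Whether a real client can reuse buffers this way depends on whether its handlers retain references to the payload, but that is exactly the kind of hot-loop decision that separates fast clients from slow ones.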
The slow bit can definitely be the client.