Hacker News

To set up a web server we need to install many different components: a database, a server, maybe a cache, a load balancer, FastCGI handlers, and so on. And every part must be connected somehow. But each connection introduces significant latency and limits the performance of the overall system. So if there is only one physical machine, why bother with all that? Let's join all the components into one process. A simple in-process call is the best communication channel available: it is fast and simple.
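The commenter's claim that an in-process call beats any inter-process channel can be sanity-checked with a small sketch. This is a hypothetical micro-benchmark (the echo `handler` and the loop counts are illustrative, not from the thread): the same trivial service is invoked first as a plain function call, then through a localhost TCP socket, which is roughly what separate server components would use on one machine.

```python
# Hypothetical micro-benchmark: in-process call vs. localhost socket round-trip.
import socket
import threading
import time

def handler(payload: bytes) -> bytes:
    return payload  # trivial "echo" service standing in for a real component

N = 10_000

# 1) In-process: a plain function call.
start = time.perf_counter()
for _ in range(N):
    handler(b"ping")
in_process = time.perf_counter() - start

# 2) The same service behind a localhost TCP socket (one round-trip per call).
server = socket.socket()
server.bind(("127.0.0.1", 0))  # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()
    with conn:
        while data := conn.recv(4):
            conn.sendall(handler(data))

threading.Thread(target=serve, daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
start = time.perf_counter()
for _ in range(N):
    client.sendall(b"ping")
    client.recv(4)
over_socket = time.perf_counter() - start
client.close()

print(f"in-process: {in_process:.4f}s, over socket: {over_socket:.4f}s")
```

On typical hardware the socket path is orders of magnitude slower per call, which is the latency the comment is talking about, though as the replies note this says nothing about whether that latency dominates a real workload.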

So, what happens when you need to scale it to more than one machine?



Depends on what you're scaling. There are actually no hard borders in this approach. If your app is overloaded with template parsing, you can move that to another machine; if the problem is in the "database" part, the same solutions apply as for NoSQL. There is no limit on what to scale or how. You can keep everything in one process except the thing that actually needs scaling, and that is more efficient than splitting everything into databases, CGI handlers, and balancers up front.


What makes you think that IPC is the dominating cost? It's usually network or disk bandwidth, not IPC. And once you scale past one machine, you can't avoid network chatter by definition.


All of this is not only about IPC. Removing some IPC gives a small latency speed-up: a nice bonus, nothing more.



