Hacker News

The insane obsession with scaling is killing this industry. So much effort is being wasted trying to use NoSQL or K8s at companies that have DAU counts in the low hundreds. Absolutely asinine.


But is their business model targeting hundreds of DAUs, or is it targeting millions of DAUs? It doesn't make sense to architect for an amount of usage that is too small to sustain the business. If the business model requires millions of users to be successful, you should build for millions of users, even if you only have hundreds at the present moment.


The reality is that companies at an early stage don't have validation of their idea yet. That is priority #1. The scaling part comes only after you know your idea has merit. It's also "the easy part". With the right allocation of resources and talent, most scaling challenges can be solved. The same cannot be said of creating valuable / interesting products.

The right approach is to optimize for speed and flexibility. Make it as easy as possible to validate your idea. Make it easy to tear down and rebuild in a "scalable" way if you're lucky enough to make it past step #1.


If you build for millions of users the way most devs do, your business will flounder before you finish.

Facebook, Twitter, and Google all started simple and scaled only when it became necessary. If you want to do what they're doing, start by doing what they did. Don't skip to the microservices.


We don't actually disagree, but I believe Twitter is actually one of the cautionary tales. Their product went through a long period of stagnation because they were too busy toiling away at scaling. The same thing seemed to happen to GitHub, from my perspective.


This is exactly right. Prove your business with the bare minimum tools required and then when you have enough paying customers go out and hire lots of engineers to figure out how to make it scale.


Someone once told me it's more important to design things to scale 1 or 2 orders of magnitude and be easy to replace than to design for further out: what you think you'll need 2 or 3 orders of magnitude from now is rarely what you need when you get there.


Yes, but it's also easy to get bogged down reimplementing, right when you have more important things to focus on to keep up with growth. Designing things to be easily replaced is easier said than done.


But when you do have proof of growth it’s much easier to finance more developers to work on the problem.


The point is you end up having to replace it anyway. Or you get bogged down maintaining and integrating with an overly complex system.


No, you should use proper abstractions to allow you to "easily" replace components down the road. There's no point in planning, designing, or building your application for a million users because by the time you get there: 1 - the business needs will be radically different from what you expected initially, and 2 - you'll have dozens more engineers who will probably do the actual work. It's exceedingly unlikely that the architecture decisions that you made will be relevant at that time.

Also, the overwhelming majority of companies that claim that they need to scale into the millions of DAU will never make it there.


I find trying to decide on "proper abstractions" can also cause the same issue as trying to design for scale: you have to make complex design decisions based on how the future might turn out (which components might you need to replace? what might their future interface requirements be?).

I think it might be more efficient to do whatever is fastest/easiest based on what you know now, and always plan on refactoring when you know more. So you end up trying to write less, simpler code, knowing you're going to tear it apart soon.

Which I think still fits with your overall point.


Obsessing about scaling is what makes this industry. How else would we keep up software engineering demand?



