Hacker News

   Surely that's worth a few hours of your time. At the very
   least it's worth more than arguing on the internet about 
   how you don't need to know it.
That's a great point.

Regarding your interview question, the kind of optimization you're getting into isn't something 99% of 'programmers' will ever touch.



I suspect you can spin the demographics in lots of directions. But I'd hope that a far higher percentage of startup technical founders (the audience to which the comment was directed) would need to do non-trivial performance estimation in the course of building their products.


I think the point is that they don't.

Name any of the sites you visit for which this knowledge was critical to deploying the service/product they're offering. I would imagine the number is pretty close to zero.


Google. Amazon. Bing. Twitter. Facebook.

When you start hitting massive amounts of traffic, these kinds of optimizations can save you a TON of money in equipment and bandwidth.


Facebook was implemented in PHP, and still is (though now compiled, whatever). Google was, at one time, implemented in Python. Twitter was built on Rails, and probably even ran on MRI. Maybe they switched to Java, who knows. Hardly hardcore-optimizable languages.

Not to mention that premature optimization may be bad, but even when you need it, it's better to just write a few algorithms and test them against real data.


The point isn't so much to think about optimizing assembler, but rather making higher-level algorithmic choices that mesh with your platform. Even if you don't sift through the Ruby bytecode, you can still apply some knowledge about what kind of data structure will be better in which kind of situation.

For example, posed with the question presented here, if you realize the structure will not be as fast as you might hope, maybe you can choose to build a data structure with much less depth (maybe you can fit it all in a hash table?) and achieve huge performance wins without once touching assembler. (With a 27-deep tree, you need to make 27 reads from memory to find your data; reducing the depth reduces the required reads.)

(I am not a computer scientist, and I am not an expert in algorithms, but hopefully I'm not too far off here.)
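To make the depth point concrete, here's a toy sketch in Python (the 27-level chain and the key names are made up for illustration; real tree nodes would carry more data): each level of nesting is one more dereference, while a flat hash table answers in a single lookup.

```python
import timeit

DEPTH = 27

# Build a 27-deep chain of single-key dicts; finding the payload
# means following 27 references, one per level.
deep = "payload"
for _ in range(DEPTH):
    deep = {"child": deep}

def deep_lookup(node):
    # Walk all 27 levels to reach the payload.
    for _ in range(DEPTH):
        node = node["child"]
    return node

# The flat alternative: one hash lookup, one dereference.
flat = {"key": "payload"}

t_deep = timeit.timeit(lambda: deep_lookup(deep), number=100_000)
t_flat = timeit.timeit(lambda: flat["key"], number=100_000)
print(f"27-deep chain: {t_deep:.4f}s  flat dict: {t_flat:.4f}s")
```

The absolute numbers will vary by machine and interpreter; the point is only that the deep walk does strictly more work per lookup.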


As far as I know, Google was never implemented in Python. However Python is one of the three major languages at Google. (Mostly used for non-performance critical scripting stuff. If they care about performance it is in C++ or Java.)

The others are correct.


Last I checked, PHP, Python and Ruby were implemented on computers with DRAM memories. All that analysis applies to them too. "Following a reference" is a memory load (or several) in all languages.
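A quick way to see that the same memory-system analysis leaks through to high-level languages is to touch the same data in sequential versus shuffled order. Both loops below do identical arithmetic; only the access pattern differs, and on most machines the shuffled walk is slower even under CPython (the effect is smaller than in C, and this micro-benchmark is only a sketch).

```python
import random
import timeit

# Same data, same total work; only the memory-access order changes.
N = 1_000_000
data = list(range(N))

sequential = list(range(N))
scattered = sequential[:]
random.shuffle(scattered)

def gather(order):
    total = 0
    for i in order:
        total += data[i]
    return total

t_seq = timeit.timeit(lambda: gather(sequential), number=3)
t_rnd = timeit.timeit(lambda: gather(scattered), number=3)
print(f"sequential: {t_seq:.3f}s  shuffled: {t_rnd:.3f}s")
```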


You seriously aren't willing to admit that your question has zero relevance to someone working in PHP?

The entire point of coding in a language like PHP is that you don't think about the low level stuff. And even if you did take the time to truly understand the source of PHP and made some fabulous optimization, they could change implementation details in a future release and break what you did.

Bottom line is that your interview question has almost no correlation with whether someone would be a good PHP developer. You might as well ask for the capital of Macedonia; it's just as good a screen for potential applicants.


More likely, for people on that path, there are bigger wins from paying down technical debt that exists at a higher level than from trying to optimize your code's memory access patterns.


I agree completely. Frontend optimizations can probably give a lot more in the short term than these kinds of backend optimizations.

Unfortunately, even when you're paying down technical debt, if you're trying to squeeze more cycles out of low-level code, a lot of the time the issues are at the algorithm level. When refactoring there, these kinds of optimizations should always be in the back of your head, even if it's just to explain why moving a for loop up five lines gave the function a 2% performance boost.


That depends. Money spent on hardware buys you linear performance improvements. Algorithmic improvements can buy you superlinear, sometimes exponential, gains.

There is most certainly a point where money spent on hardware gives a better return, but all the hardware in the world couldn't net you acceptable runtimes using bubblesort on colossal datasets.
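The bubblesort point can be put in back-of-envelope numbers (a rough sketch; constant factors are ignored): faster hardware divides the running time by a constant, while switching from roughly n^2 comparisons to roughly n*log2(n) divides the work by a factor that keeps growing with n.

```python
import math

# Approximate comparison counts, ignoring constants.
def compares_bubble(n):
    return n * n

def compares_merge(n):
    return int(n * math.log2(n))

# The ratio between the two grows without bound as n grows,
# which is why no fixed hardware speedup can keep up.
for n in (10**3, 10**6, 10**9):
    ratio = compares_bubble(n) / compares_merge(n)
    print(f"n={n:>10}: bubble/merge work ratio ~ {ratio:,.0f}x")
```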


Again, it's a very specialized gig. Each of those companies probably has a small handful of people who work on optimization and could answer the question we're talking about here. But the vast majority of employees in these companies work in higher level languages and never have to think about the specifics of memory management at all.



