
But that still presupposes a whole lot of knowledge about pianos that most people just wouldn't be expected to have. From what percentage of households have them to how often they need tuning to how long it takes to tune one. Unless you're a serious piano nut, you're at best throwing out random numbers, and likely not only don't know the answers, but don't even know what questions to ask. What about peak demand? Do lots of parents also sign their kids up for piano lessons at the start of each school year? Do pianos get played more often during the winter months?

It's like the Drake equation - it doesn't matter what factors you choose (you can always come up with more) or what values you assign them (you can't know what most of them would be anyway). It's meaningless as a whole. And also irrelevant to software development or most anything else; at best it would be only very loosely relevant to managing a piano tuning company.



That's exactly what this tests for.

I have no idea what those values are! But I can still make guesses and show that I can identify a bunch of variables that affect the final value. And since I am multiplying data that may be off by a few orders of magnitude, the errors multiply as well: I am aware of how erroneous input data affects the results too.
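A toy illustration of how those multiplicative errors compound (all numbers below are mine, chosen purely for illustration):

```python
import math

# Four made-up factors, each guessed wrong by a factor of 2 in some
# direction. In the worst case (all errors in the same direction) the
# product is off by 2**4 = 16x; mixed directions partially cancel.
true_factors = [100, 10, 0.5, 4]
guessed_factors = [200, 5, 1.0, 8]  # each off by 2x, mixed directions

true_product = math.prod(true_factors)        # 2000.0
guessed_product = math.prod(guessed_factors)  # 8000.0
print(guessed_product / true_product)         # 4.0 here; up to 16x worst case
```

So an estimate built from several 2x-ish guesses is best read as an order of magnitude, not a number.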

Is it relevant to software development as a whole? It definitely is.

You've got to provision web workers for a new service being developed: you can only (guess)timate the number of expected users, guess how much CPU time each request will take, figure out the cost per unit, look at bandwidth requirements, and make your pick. To improve on your guess, you then actually go and measure request times (p50, p99), decide how many users your team can cost-effectively sustain, measure network usage, etc.
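A back-of-envelope version of that provisioning math might look like this (every input below is a placeholder guess of mine, to be replaced by real measurements later):

```python
import math

# All inputs are rough guesses, to be refined with real measurements
# (p50/p99 request times, observed traffic, etc.).
expected_users = 10_000            # guesstimated user count
requests_per_user_per_day = 50     # guess
peak_factor = 3                    # peak vs. average traffic, guess
cpu_seconds_per_request = 0.05     # guess until you measure p50/p99
target_utilization = 0.7           # leave headroom per worker

avg_rps = expected_users * requests_per_user_per_day / 86_400
peak_rps = avg_rps * peak_factor
cores_needed = peak_rps * cpu_seconds_per_request
workers = math.ceil(cores_needed / target_utilization)
print(workers)  # 2 with these guesses
```

The structure is identical to the piano-tuner question: multiply a chain of guessed factors, then go measure the ones that matter most.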

Or you need to decide what type of DB to use. Depending on access patterns (read-heavy, write-heavy, time series...), data size and scalability demands, and the team's familiarity with different tech, you make a guess at what should work: you can only measure after you've exposed the system to real, live load.

Now, I still think it's a bad interview question: people can be perfectly comfortable answering the questions above, yet not see that they are, in the abstract, exactly the same as estimating the number of piano tuners in NYC. It tests for that generic, general ability to estimate anything and to break down any problem.

Yet some people get flabbergasted (as you seem to be?), because they think too much about the specifics of each of the numbers they use and how wrong they might be. Basically, they have a hard time letting go of the inaccuracies :)

As such, it's a mixture of tests: it supposedly only tests your problem-breakdown skills, but in reality it also tests your confidence in working with inaccurate numbers and in tackling a problem despite lacking domain knowledge.

Which makes it a bad interview question unless both of those are things you want to learn about someone.


> It tests for that generic and general ability to estimate anything, and to break down any problem.

Ability to BS and multiply, more or less. Feels like a waste of a question even for the one asking it!

> Yet some people get flabbergasted . . . since they think too much about specifics of each of the numbers they use and how wrong they might be. Basically, they have a hard time of letting go of the inaccuracies :)

Having a hard time letting go of inaccuracies seems justified if you're in a more technical profession. Too often there are good engineering reasons for not wanting to excuse loose reasoning with bad guesses. On the software-dev side, there are also too many cases where those kinds of fudged numbers are inevitably used against us.


Being able to tell when a highly accurate approach is required and when it is not is as much a mark of a technical profession as simply being accurate is.


I have no problem with the question being abstract; my main gripe lies with the first part of your explanation: compounded noise is still noise. Why go through all the mental gymnastics if you yourself are aware that the result can be way off? Your next-in-chain-of-command will treat it as hard fact and make a decision or commit to something based on it, and then you have all these piano tuners loitering around with nothing to do.

It's similar to estimating novel work.


The thing is that it gives you questions to ask. I am sure NYC's population is a web search away. Or tuning-frequency recommendations for pianos. Or how long tuning a piano really takes. Some are certainly harder (how common are piano-owning households?), but maybe someone has already looked into it.

It shows that one can break a problem down into more manageable pieces. Being more manageable means you can get a better estimate by delving deeper.
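Strung together, that breakdown is the classic Fermi sketch. Every number below is my own placeholder guess, each easily off by a factor of a few:

```python
# Hedged sketch of the piano-tuner estimate; every input is a guess
# that could be refined by the web searches mentioned above.
nyc_population = 8_000_000         # roughly right, a search away
people_per_household = 2.5         # guess
piano_household_fraction = 0.05    # guess: 1 in 20 households
tunings_per_piano_per_year = 1     # guess: tuned roughly annually
tunings_per_tuner_per_day = 4      # guess: ~2h each incl. travel
working_days_per_year = 250

households = nyc_population / people_per_household
pianos = households * piano_household_fraction
tunings_needed = pianos * tunings_per_piano_per_year
tunings_per_tuner = tunings_per_tuner_per_day * working_days_per_year
tuners = tunings_needed / tunings_per_tuner
print(round(tuners))  # 160 with these guesses
```

The point isn't the 160; it's that each line is now a separate, researchable sub-question.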

The question is not about finding the right answer, but about employing the right methodology.

And if you've got the next-in-the-chain taking estimates as hard boundaries or facts, you need to be explicit that they are not, and hopefully teach them to be a bit more suspicious of everything.

Even then, it's OK if someone else takes your ballpark and runs with it, as long as you are willing to take responsibility. If you are higher in job title, be explicit that this is what you are asking them to do, and that you will take the responsibility! Invite them to improve upon it as they learn more, though!



