> It was a preventable accident.

Was it really preventable? How should we know when to make a quick decision and when to A/B test?

[Clarification: it seems to me that it's better to decisively get things done]

Considering that the most widely known A/B tests are based on 41 shades of blue and pixel-perfection, I'm not sure that this is as obvious as the article claims.



> the most widely known A/B tests are based on 41 shades of blue and pixel-perfection

This is not true among people who actually A/B test for a living. (The 41-shades-of-blue story is a test cherry-picked to suggest that testing is not material. The only reason the world knows about that test, as opposed to the others conducted by Google/MSN, is that someone who felt they didn't fit into a culture of testing called it out as the reductio ad absurdum of that culture.)

Without saying exactly what it was that the client didn't know at the time, suffice it to say that if you got five A/B testing practitioners in a room and asked them for the top five things to try on that client's site, every last one would have listed the problematic area as something to test. I mean, it wasn't the H1 on the front page, but it could have been.

This is similar to "How do we make our pages load faster?" Are there large amounts of subjectivity and risk involved here? Yes. Trying to outguess your favorite SQL query optimizer sometimes feels like reading chicken entrails. But, if you're not using gzip yet, then you should turn on gzip, because gzip always wins.
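To make the "gzip always wins" point concrete, here is a minimal sketch using Python's standard-library gzip module on a hypothetical, repetitive HTML payload (the sample markup is invented; real pages compress comparably well because HTML is highly redundant):

```python
import gzip

# Hypothetical sample standing in for a typical page's markup.
html = ("<div class='row'><span class='cell'>value</span></div>\n" * 200).encode("utf-8")

compressed = gzip.compress(html)
ratio = len(compressed) / len(html)
print(f"original: {len(html)} bytes, gzipped: {len(compressed)} bytes, ratio: {ratio:.2%}")
```

On repetitive markup like this, the gzipped payload is a small fraction of the original, which is why turning on gzip at the server is the rare optimization with essentially no downside.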


I accept that you are an expert and that there is value in this expertise, but I remain unconvinced that it's so obvious where to execute A/B tests (in contrast to page load time, where bottlenecks can be measurably identified).
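For what it's worth, choosing *where* to test is the judgment call being debated above, but judging the *outcome* of a test is as measurable as a load-time profile. A minimal sketch of the standard two-proportion z-test, with invented conversion numbers for illustration:

```python
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic comparing conversion rates of control (A) and variant (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis that A and B are equal.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 200/10,000 conversions for A vs. 260/10,000 for B.
z = two_proportion_z(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 5% level
```

The subjective part is picking what to change; once the variant is live, the decision rule itself is mechanical.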



