
Using hashmaps by default usually isn't about raw performance for some small N, but about ensuring your algorithm doesn't blow up if someone takes it out of context some day and dramatically increases the N (this is especially fun if it is called from a loop that also scales with input size in some way, which is very common). Yes, you trade some raw performance at small N, but that is very rarely relevant or even measurable, and you get scalability in return.

Now what this article is showing is something else: contrary to the claims in the spec, the benchmarked std::unordered_map implementation does not actually have amortized O(1) insertion time, and you can beat it with an O(N log N) sort if you want to deduplicate integers, at any input size. That is interesting, but very specific, and not an observation that would carry over to other algorithms, or to other data types for that matter. For example, it would be interesting to see what the graph would look like for types with a more complicated/expensive comparison.


