
Isn't the bottleneck usually the transfer of data between memory and the CPU, rather than the CPU itself?

e.g. http://www.pytables.org/docs/CISE-12-2-ScientificPro.pdf



The most common bottleneck in my work experience has been disk I/O when the data set wouldn't fit into memory, and every professional data set I've worked with exceeds 1TB. There may be other bottlenecks, but disk seeks and (at one place) their iSCSI-over-gigabit-Ethernet nonsense dominated the performance challenges.


Usually yes, but that's why you take advantage of specialized CPU instructions for bulk loading and operating on data. From the article, that's part of the optimization these folks are exploiting (see the comment mentioning SSE instructions).



