
I've had very good results simulating an in-memory database with conventional RDBMSs by simply running the databases with a high ratio of physical memory to data: on the order of 128GB of memory for a 500GB database.

The advantages are that COTS applications work as-is, you don't need to wait for researchers to dream up something new, you don't need to rewrite an application to use the latest fashion in database technology, and, most importantly, poor design and poor coding can be papered over with fast logical I/Os served from memory.

In conventional databases configured with high memory-to-data ratios, we don't see the traditional I/O bottlenecks on database reads. Hence I've made a simple decision: it is cheaper to add memory (even expensive memory) than it is to build a disk I/O subsystem that can handle the equivalent I/O.
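The "cheaper to add memory" argument can be put as a back-of-envelope calculation. This is only a sketch: the workload rate, per-disk IOPS, and prices below are illustrative assumptions I'm supplying, not figures from the comment, and they vary widely by hardware and year.

```python
# Back-of-envelope: serving a read workload from RAM vs. building a disk
# I/O subsystem with equivalent random-read throughput.
# All numbers are assumed for illustration, not measured.

REQUIRED_READS_PER_SEC = 50_000   # assumed workload: logical reads/sec
IOPS_PER_DISK = 200               # assumed random-read IOPS of one spinning disk
GB_RAM_ADDED = 128                # RAM added so the hot working set stays cached
PRICE_PER_DISK = 150              # assumed $/disk (spindle + enclosure slot)
PRICE_PER_GB_RAM = 10             # assumed $/GB of server memory

# Disks needed to match the read rate purely with random disk I/O:
disks_needed = REQUIRED_READS_PER_SEC // IOPS_PER_DISK

disk_cost = disks_needed * PRICE_PER_DISK
ram_cost = GB_RAM_ADDED * PRICE_PER_GB_RAM

print(f"disks needed: {disks_needed}")          # 50_000 / 200 = 250 spindles
print(f"disk subsystem cost: ${disk_cost}")
print(f"extra RAM cost:      ${ram_cost}")
```

Even with generous error bars on every assumed number, the gap between hundreds of spindles and one memory upgrade is the point of the trade-off.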

Transaction writes are still a potential I/O issue, though.
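The reason writes stay disk-bound is that a durable commit must reach stable storage, typically by fsync-ing a log file, no matter how much RAM caches the reads. A minimal sketch of that cost difference, using a throwaway temp file rather than any real database's log:

```python
# Sketch: buffered writes land in the OS page cache (memory speed), while a
# durable commit must fsync to disk. The file here is a stand-in for a
# transaction log, not any particular database's implementation.
import os
import tempfile
import time

fd, path = tempfile.mkstemp(prefix="wal-demo-")
try:
    start = time.perf_counter()
    for _ in range(100):
        os.write(fd, b"commit record\n")   # lands in the page cache: fast
    buffered = time.perf_counter() - start

    start = time.perf_counter()
    for _ in range(100):
        os.write(fd, b"commit record\n")
        os.fsync(fd)                       # force to stable storage: the commit cost
    durable = time.perf_counter() - start

    print(f"100 buffered writes: {buffered:.4f}s")
    print(f"100 fsync'd writes:  {durable:.4f}s")
finally:
    os.close(fd)
    os.remove(path)
```

This is also why write-heavy workloads still care about the log device (and settings that relax per-commit flushing) even when the read side is entirely cached.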



