Slightly OT, but the I/O inflation caused by raising read-ahead to 2048 KB seems to be pretty huge. For a DB server that might only need a few KB per read, I can see why it caused an issue.
Does anybody know why Ubuntu did that? I would have expected that in the age of SSDs, read-ahead becomes less useful.
I don't think he says anywhere in the article that the Ubuntu default was 2048 KB RA. On my 14.04 LTS installation all block devices are set to 128 KB by default. Maybe they were experimenting with it after the upgrade.
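If you want to check your own defaults, a quick way (no root needed) is to read the sysfs value for each device. A sketch, assuming a Linux box; note that `blockdev --getra` reports the same setting but in 512-byte sectors, so 256 sectors = 128 KB:

```shell
# Print the current read-ahead, in KB, for every block device via sysfs.
for dev in /sys/block/*/queue/read_ahead_kb; do
    name=${dev#/sys/block/}; name=${name%/queue/read_ahead_kb}
    printf '%s: %s KB\n' "$name" "$(cat "$dev")"
done

# blockdev --getra uses 512-byte sectors; convert sectors -> KB:
ra_sectors=256
echo "read-ahead: $((ra_sectors * 512 / 1024)) KB"
```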
In 2014 this should all really be auto-tuned: how hard is it for a user-space tool to figure out that a disk is an SSD, run a seek-latency and read-speed test, and conclude it will be better off without RA? (Thinking about it, though, RA brings data into RAM, which is still orders of magnitude faster than reading from an SSD, so maybe there is _some_ benefit as long as you don't inflate the I/O overhead too much.)
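For the detection half, you don't even need a latency test: the kernel already exposes a rotational flag per device (0 = SSD/NVMe, 1 = spinning disk). A minimal sketch of such a tool; the suggested RA values are just illustrative, not a recommendation:

```shell
# Sketch: per-device read-ahead policy based on the kernel's rotational flag.
for dev in /sys/block/*; do
    name=${dev#/sys/block/}
    if [ "$(cat "$dev/queue/rotational")" -eq 0 ]; then
        echo "$name: SSD -> small or zero read-ahead is likely enough"
        # e.g. blockdev --setra 0 "/dev/$name"   # needs root; left commented
    else
        echo "$name: rotational -> keep the 128 KB default (256 sectors)"
    fi
done
```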