May 05, 2009, 07:31 (0 Talkback[s])
(Other stories by Paul Rubens)
Re-Imagining Linux Platforms to Meet the Needs of Cloud Service Providers
"To get data on or off a disk, a drive head has to swing into
the correct position and wait for the right part of the disk to
come around. This can take hundredths of a second -- many times
longer than it takes to access data stored in RAM, for example. As
a result, a disk I/O subsystem can be a huge data bottleneck, and
significant improvements in the overall performance of the server
can be achieved if the effects of this bottleneck can be reduced.
"To understand how this might be done, let's take a look at how
data is moved to and from a disk.
"Put simply, the I/O subsystem holds a queue of requests to read data from, or write data to, the disk. To help speed things up, it usually merges read or write requests together if they are close to each other in the queue and involve the same area of the disk.
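As an illustrative sketch (not actual kernel code), the merging step can be modeled as coalescing requests that perform the same operation on contiguous sectors; the request format and function name here are assumptions for the example:

```python
def merge_requests(queue):
    """Merge queued requests that share an operation and touch
    adjacent sectors, as an I/O scheduler coalesces contiguous work.

    Each request is a tuple (op, start_sector, num_sectors).
    """
    merged = []
    for op, start, count in sorted(queue, key=lambda r: (r[0], r[1])):
        if merged:
            last_op, last_start, last_count = merged[-1]
            # Same operation and contiguous on disk: extend the previous request.
            if op == last_op and last_start + last_count == start:
                merged[-1] = (last_op, last_start, last_count + count)
                continue
        merged.append((op, start, count))
    return merged

queue = [("read", 100, 8), ("read", 108, 8), ("write", 500, 4), ("read", 116, 8)]
print(merge_requests(queue))
# Three contiguous reads at sectors 100, 108, 116 coalesce into one 24-sector read.
```

Fewer, larger requests mean fewer head movements, which is exactly why the real scheduler does this.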
"Read requests are generally given higher priority than write
requests because a given process will have to wait for the read
data it has requested before it can continue, while it won't have
to wait for the result of a write.
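That prioritization can be sketched as a two-class priority queue in which reads always dispatch before writes, while requests of the same class keep their arrival order; the class and method names are illustrative, not the kernel's:

```python
import heapq

READ_PRIORITY, WRITE_PRIORITY = 0, 1  # lower value dispatches first

class DispatchQueue:
    """Toy dispatch queue that services reads before writes, since a
    process blocks on read data but not on a completed write."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # preserves FIFO order within each priority class

    def submit(self, op, sector):
        priority = READ_PRIORITY if op == "read" else WRITE_PRIORITY
        heapq.heappush(self._heap, (priority, self._seq, op, sector))
        self._seq += 1

    def dispatch(self):
        _, _, op, sector = heapq.heappop(self._heap)
        return op, sector

q = DispatchQueue()
q.submit("write", 500)
q.submit("read", 100)
q.submit("read", 200)
order = [q.dispatch() for _ in range(3)]
print(order)  # both reads dispatch before the earlier-submitted write
```

A real scheduler also bounds how long writes can be starved, but the basic ordering idea is the same.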
"The subsystem will also usually detect when data is being read
sequentially from the disk and use a technique called "read-ahead"
to read and cache the disk block following the one it has been
asked to read. This can reduce seek time during sequential reads,
although it does nothing to speed up reads to other random parts of
the disk, and it is switched off when the subsystem detects random
(i.e., non-sequential) reads."
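The detection logic described above can be sketched as follows: prefetch the next block only while requests arrive for consecutive block numbers, and stop as soon as the pattern turns random. The cache structure and counters here are assumptions made for the example:

```python
class ReadAheadCache:
    """Toy model of sequential-read detection with read-ahead."""

    def __init__(self, disk):
        self.disk = disk        # maps block number -> data
        self.cache = {}         # blocks prefetched ahead of demand
        self.last_block = None
        self.disk_reads = 0
        self.cache_hits = 0

    def _read_disk(self, block):
        self.disk_reads += 1
        return self.disk[block]

    def read(self, block):
        sequential = self.last_block is not None and block == self.last_block + 1
        self.last_block = block
        if block in self.cache:
            self.cache_hits += 1          # served from cache: no seek wait
            data = self.cache.pop(block)
        else:
            data = self._read_disk(block)
        # Sequential pattern detected: prefetch the following block.
        # Random access leaves `sequential` false, so read-ahead stays off.
        if sequential and block + 1 in self.disk and block + 1 not in self.cache:
            self.cache[block + 1] = self._read_disk(block + 1)
        return data

disk = {n: f"block-{n}" for n in range(10)}
c = ReadAheadCache(disk)
for n in (0, 1, 2, 3):
    c.read(n)
print(c.cache_hits, c.disk_reads)  # blocks 2 and 3 come from the cache
```

In this trace the sequential scan of blocks 0 through 3 turns two of the four requests into cache hits, which is the saving the article describes.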