Linux Today: Linux News On Internet Time.

Speculating on page faults

Jan 21, 2010, 18:32
(Other stories by Jonathan Corbet)

"Improving the performance of the kernel is generally a good thing to do; that is why many of our best developers have put considerable amounts of time into optimization work. One area which has recently seen some attention is the handling of soft page faults. As the course of this work shows, though, performance problems are not always where one thinks they are; sometimes it's necessary to take a step back and reevaluate the situation, possibly dumping a lot of code in the process.

"Page faults can be quite expensive, especially those which must be resolved by reading data from disk. On a typical system, though, there are a lot of page faults which do not require I/O. A page fault happens because a specific process does not have a valid page table entry for the needed page, but that page might already be in the page cache; in that case, handling the fault is just a matter of fixing the page table entry and increasing the page's reference count. This happens often with shared pages or those brought in via the readahead mechanism. Faults for new anonymous pages (application data and stack space, mostly) can instead be handled by allocating a zero-filled page. In either case, the fault is quickly taken care of with no recourse to backing store."

