readahead: sequential mmap readahead
Auto-detect sequential mmap reads and do readahead for them.

The sequential mmap readahead will be triggered when
- sync readahead: it's a major fault and (prev_offset == offset-1);
- async readahead: minor fault on a PG_readahead page with valid readahead state.

The benefits of doing readahead instead of read-around:
- less I/O wait thanks to async readahead
- double real I/O size and no more cache hits

The single stream case is improved a little.  For 100,000 sequential mmap
reads:

                                      user      system    cpu       total
(1-1)  plain -mm, 128KB readaround:   3.224     2.554     48.40%    11.838
(1-2)  plain -mm, 256KB readaround:   3.170     2.392     46.20%    11.976
(2)  patched -mm, 128KB readahead:    3.117     2.448     47.33%    11.607

The patched kernel (2) has the smallest total time, since it has no cache hit
overheads and less I/O block time (thanks to async readahead).  Here the I/O
size makes little difference, since there is only a single stream.

Note that (1-1)'s real I/O size is 64KB and (1-2)'s real I/O size is 128KB,
since half of the read-around pages will be readahead cache hits.

This is going to make _real_ differences for _concurrent_ IO streams.

Cc: Nick Piggin <npiggin@suse.de>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Cc: Ying Han <yinghan@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
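For context, a rough user-space sketch of the kind of workload measured above: a file is read front to back through mmap, one touch per page, so every access is either a major fault or lands on a PG_readahead page. The default path "testfile" and the one-byte-per-page access pattern are illustrative assumptions; the actual benchmark program is not part of this commit.

/*
 * Illustrative only, not part of the commit.  Maps a file read-only and
 * touches one byte per page, front to back, mimicking a sequential mmap
 * read workload.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "testfile";	/* assumed test file */
	long page = sysconf(_SC_PAGESIZE);
	struct stat st;
	unsigned char *map;
	volatile unsigned char sink = 0;	/* keeps the loop from being optimized away */
	off_t off;
	int fd = open(path, O_RDONLY);

	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(path);
		return 1;
	}

	map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Each page is touched exactly once, in ascending order. */
	for (off = 0; off < st.st_size; off += page)
		sink += map[off];

	munmap(map, st.st_size);
	close(fd);
	return 0;
}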
parent ef00e08e26
commit 70ac23cfa3
@@ -1472,7 +1472,8 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
 	if (VM_RandomReadHint(vma))
 		return;
 
-	if (VM_SequentialReadHint(vma)) {
+	if (VM_SequentialReadHint(vma) ||
+			offset - 1 == (ra->prev_pos >> PAGE_CACHE_SHIFT)) {
 		page_cache_sync_readahead(mapping, ra, file, offset, 1);
 		return;
 	}
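To make the new condition in the hunk above concrete, here is a small stand-alone sketch (not kernel code) of the sequential-fault test: the faulting page index is compared against the page index derived from ra->prev_pos, the byte position of the previous access. PAGE_CACHE_SHIFT is assumed to be 12 (4KB pages) for the example.

/*
 * Stand-alone illustration of the sequential-fault test, not kernel code.
 * PAGE_CACHE_SHIFT of 12 (4KB pages) is an assumption for the example.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_CACHE_SHIFT 12	/* assumed 4KB page size */

/*
 * Mirrors the added condition: the fault is treated as sequential when the
 * faulting page (offset) immediately follows the page that contains the
 * previously accessed byte (prev_pos).
 */
static bool is_sequential_fault(unsigned long long prev_pos, unsigned long offset)
{
	return offset - 1 == (prev_pos >> PAGE_CACHE_SHIFT);
}

int main(void)
{
	/* last access fell in page 9, now faulting page 10: sequential */
	printf("%d\n", is_sequential_fault(9ULL << PAGE_CACHE_SHIFT, 10));
	/* last access fell in page 3, now faulting page 10: not sequential */
	printf("%d\n", is_sequential_fault(3ULL << PAGE_CACHE_SHIFT, 10));
	return 0;
}

When this test matches, the read-around window is skipped and page_cache_sync_readahead() is called instead, which records readahead state so that later minor faults on PG_readahead pages can continue asynchronously, as described in the commit message.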