Btrfs: Don't try to compress pages past i_size
The compression code had some checks to make sure we were only compressing bytes inside of i_size, but it wasn't catching every case. To make things worse, some incorrect math about the number of bytes remaining would make it try to compress more pages than the file really had.

The fix used here is to fall back to the non-compression code in this case, which does all the proper cleanup of delalloc and other accounting.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
commit f03d9301f1
parent 811449496b
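To see why the new check is needed, here is a minimal user-space sketch (not kernel code) of the failure mode. The variable names mirror the kernel's compress_file_range(), but the concrete offsets are illustrative assumptions: if the file has been truncated so that i_size lands at or before the start of the delalloc range, the unsigned subtraction actual_end - start wraps around, which is the kind of incorrect byte math the commit message describes.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t start = 131072;  /* delalloc range begins at 128 KiB */
        uint64_t end   = 262143;  /* ...and ends just below 256 KiB */
        uint64_t isize = 4096;    /* file was truncated down to 4 KiB */

        /* clamp the end of the range to i_size, as the kernel code does */
        uint64_t actual_end = isize < end + 1 ? isize : end + 1;

        /* the new check: the whole range lies past i_size, so bail out */
        if (actual_end <= start) {
                printf("bailing to uncompressed cleanup\n");
                return 0;
        }

        /* without the check, this u64 subtraction would wrap around */
        uint64_t total_compressed = actual_end - start;
        printf("compressing %llu bytes\n",
               (unsigned long long)total_compressed);
        return 0;
}

Run as-is, this prints "bailing to uncompressed cleanup"; remove the check and total_compressed becomes an enormous wrapped value, driving the compressor far past the pages the file actually has.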
fs/btrfs/inode.c

@@ -360,6 +360,19 @@ again:
 	nr_pages = (end >> PAGE_CACHE_SHIFT) - (start >> PAGE_CACHE_SHIFT) + 1;
 	nr_pages = min(nr_pages, (128 * 1024UL) / PAGE_CACHE_SIZE);
 
+	/*
+	 * we don't want to send crud past the end of i_size through
+	 * compression, that's just a waste of CPU time. So, if the
+	 * end of the file is before the start of our current
+	 * requested range of bytes, we bail out to the uncompressed
+	 * cleanup code that can deal with all of this.
+	 *
+	 * It isn't really the fastest way to fix things, but this is a
+	 * very uncommon corner.
+	 */
+	if (actual_end <= start)
+		goto cleanup_and_bail_uncompressed;
+
 	total_compressed = actual_end - start;
 
 	/* we want to make sure that amount of ram required to uncompress
@@ -504,6 +517,7 @@ again:
 			goto again;
 		}
 	} else {
+cleanup_and_bail_uncompressed:
 		/*
 		 * No compression, but we still need to write the pages in
 		 * the file we've been given so far. redirty the locked
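As a side note on the clamp in the first hunk's context lines, here is a hedged sketch, assuming 4 KiB pages: the byte range is converted to an inclusive page count, then capped so a single compression pass never covers more than 128 KiB (32 pages). PAGE_SHIFT and PAGE_SIZE_BYTES below are stand-ins for the kernel's PAGE_CACHE_SHIFT and PAGE_CACHE_SIZE.

#include <stdio.h>

#define PAGE_SHIFT      12                    /* assume 4096-byte pages */
#define PAGE_SIZE_BYTES (1UL << PAGE_SHIFT)

int main(void)
{
        unsigned long start = 0;
        unsigned long end   = 200 * 1024 - 1;  /* a 200 KiB range */

        /* same math as the diff: inclusive page count of [start, end] */
        unsigned long in_range  = (end >> PAGE_SHIFT) - (start >> PAGE_SHIFT) + 1;
        unsigned long max_pages = (128 * 1024UL) / PAGE_SIZE_BYTES;
        unsigned long per_pass  = in_range > max_pages ? max_pages : in_range;

        printf("pages in range: %lu, pages per pass: %lu\n", in_range, per_pass);
        return 0;
}

With these numbers it prints "pages in range: 50, pages per pass: 32".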