block: don't check for BIO_MAX_PAGES in blk_bio_segment_split()
blk_bio_segment_split() makes sure bios have no more than BIO_MAX_PAGES
entries in the bi_io_vec. This was done because bio_clone_bioset() (when
given a mempool bioset) could not handle larger io_vecs.

No driver uses bio_clone_bioset() any more; they all use bio_clone_fast()
if anything, and bio_clone_fast() doesn't clone the bi_io_vec.

The main user of bio_clone_bioset() at this level is bounce.c, and bouncing
now happens before blk_bio_segment_split(), so that is not of concern.

So remove the big helpful comment and the code.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
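Below is a minimal userspace sketch of the contrast the message draws: a
bioset-style clone copies the vector array (and therefore needs a cap such
as BIO_MAX_PAGES at allocation time), while a fast-style clone only shares
a pointer to the source's array, so no cap is needed when cloning. The
struct and function names here (struct io, clone_deep(), clone_fast(),
MAX_VECS) are illustrative stand-ins, not the kernel's real bio API.

/*
 * Simplified userspace sketch (not kernel code) of the two cloning
 * strategies contrasted in the commit message.  All names below are
 * hypothetical stand-ins for struct bio, bio_clone_bioset() and
 * bio_clone_fast().
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_VECS 256			/* stand-in for BIO_MAX_PAGES */

struct vec { void *page; unsigned len, off; };
struct io  { struct vec *vecs; unsigned nr_vecs; };

/*
 * "bioset"-style clone: allocates and copies the vector array, so the
 * source must not exceed the allocator's MAX_VECS limit.  This is the
 * limit the removed check in blk_bio_segment_split() used to enforce.
 */
static struct io *clone_deep(const struct io *src)
{
	struct io *c;

	if (src->nr_vecs > MAX_VECS)
		return NULL;
	c = malloc(sizeof(*c));
	if (!c)
		return NULL;
	c->vecs = calloc(src->nr_vecs, sizeof(*c->vecs));
	if (!c->vecs) {
		free(c);
		return NULL;
	}
	memcpy(c->vecs, src->vecs, src->nr_vecs * sizeof(*c->vecs));
	c->nr_vecs = src->nr_vecs;
	return c;
}

/*
 * "fast"-style clone: shares the source's vector array instead of
 * copying it, so no size limit is needed at clone time.
 */
static struct io *clone_fast(const struct io *src)
{
	struct io *c = malloc(sizeof(*c));

	if (!c)
		return NULL;
	c->vecs = src->vecs;		/* borrowed, not copied */
	c->nr_vecs = src->nr_vecs;
	return c;
}

int main(void)
{
	struct vec v[4] = { 0 };
	struct io src = { .vecs = v, .nr_vecs = 4 };
	struct io *a = clone_deep(&src);
	struct io *b = clone_fast(&src);

	if (!a || !b)
		return 1;
	printf("deep clone copies %u vecs; fast clone shares them: %s\n",
	       a->nr_vecs, b->vecs == src.vecs ? "yes" : "no");
	free(a->vecs);
	free(a);
	free(b);			/* b->vecs is shared, not freed */
	return 0;
}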
@@ -108,24 +108,8 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	bool do_split = true;
 	struct bio *new = NULL;
 	const unsigned max_sectors = get_max_io_size(q, bio);
-	unsigned bvecs = 0;
 
 	bio_for_each_segment(bv, bio, iter) {
-		/*
-		 * With arbitrary bio size, the incoming bio may be very
-		 * big. We have to split the bio into small bios so that
-		 * each holds at most BIO_MAX_PAGES bvecs because
-		 * bio_clone_bioset() can fail to allocate big bvecs.
-		 *
-		 * Those drivers which will need to use bio_clone_bioset()
-		 * should tell us in some way.  For now, impose the
-		 * BIO_MAX_PAGES limit on all queues.
-		 *
-		 * TODO: handle users of bio_clone_bioset() differently.
-		 */
-		if (bvecs++ >= BIO_MAX_PAGES)
-			goto split;
-
 		/*
 		 * If the queue doesn't support SG gaps and adding this
 		 * offset would create a gap, disallow it.