mmu_gather: fix over-eager tlb_flush_mmu_free() calling
Dave Hansen reports that commit fb7332a9fe ("mmu_gather: move minimal
range calculations into generic code") caused a performance problem:
"tlb_finish_mmu() goes up about 9x in the profiles (~0.4%->3.6%) and
tlb_flush_mmu_free() takes about 3.1% of CPU time with the patch
applied, but does not show up at all on the commit before"
and the reason is that Will moved the test for whether we need to flush
from tlb_flush_mmu() into tlb_flush_mmu_tlbonly(). But that meant that
tlb_flush_mmu_free() basically lost that check.
Move it back into tlb_flush_mmu() where it belongs, so that it covers
both tlb_flush_mmu_tlbonly() _and_ tlb_flush_mmu_free().
Reported-and-tested-by: Dave Hansen <dave@sr71.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit f045bbb9fa
parent cf3c0a1579
@@ -235,9 +235,6 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long
 
 static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
-	if (!tlb->end)
-		return;
-
 	tlb_flush(tlb);
 	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
@@ -259,6 +256,9 @@ static void tlb_flush_mmu_free(struct mmu_gather *tlb)
 
 void tlb_flush_mmu(struct mmu_gather *tlb)
 {
+	if (!tlb->end)
+		return;
+
 	tlb_flush_mmu_tlbonly(tlb);
 	tlb_flush_mmu_free(tlb);
 }
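Below is a minimal, self-contained C sketch of the control flow after this change. It is not the kernel source: the struct and the helpers are simplified stand-ins, kept only to show why a single "nothing gathered" check in tlb_flush_mmu() now guards both the TLB flush and the page freeing.

/*
 * Sketch only: simplified stand-ins for the kernel's mmu_gather
 * machinery, illustrating the early-return placement after the fix.
 */
#include <stdio.h>

struct mmu_gather_sketch {
	unsigned long start;
	unsigned long end;	/* end == 0 means nothing was gathered */
	int nr_pages;		/* pages queued for freeing (simplified) */
};

static void tlb_flush_mmu_tlbonly(struct mmu_gather_sketch *tlb)
{
	/* No per-callee check needed: the caller already filtered. */
	printf("flush TLB range [%lx, %lx)\n", tlb->start, tlb->end);
}

static void tlb_flush_mmu_free(struct mmu_gather_sketch *tlb)
{
	printf("free %d gathered pages\n", tlb->nr_pages);
	tlb->nr_pages = 0;
}

void tlb_flush_mmu(struct mmu_gather_sketch *tlb)
{
	/*
	 * One early return covers both callees, so an empty gather no
	 * longer pays for tlb_flush_mmu_free() either.
	 */
	if (!tlb->end)
		return;

	tlb_flush_mmu_tlbonly(tlb);
	tlb_flush_mmu_free(tlb);
}

int main(void)
{
	struct mmu_gather_sketch empty = { 0, 0, 0 };
	struct mmu_gather_sketch full  = { 0x1000, 0x5000, 4 };

	tlb_flush_mmu(&empty);	/* prints nothing: early return */
	tlb_flush_mmu(&full);	/* flushes, then frees */
	return 0;
}

With the check inside tlb_flush_mmu_tlbonly(), as before this commit, the "empty" case would still have fallen through into the free path, which is the overhead Dave Hansen measured.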