riscv: mm: synchronize MMU after pte change
Because RISC-V compliant implementations can cache invalid entries in the TLB, an SFENCE.VMA is necessary after changes to the page table. This patch adds an SFENCE.VMA for the vmalloc_fault path.

Signed-off-by: ShihPo Hung <shihpo.hung@sifive.com>
[paul.walmsley@sifive.com: reversed tab->whitespace conversion, wrapped comment lines]
Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: linux-riscv@lists.infradead.org
Cc: stable@vger.kernel.org
commit bf587caae3
parent c35f1b87fc
@@ -29,6 +29,7 @@
 
 #include <asm/pgalloc.h>
 #include <asm/ptrace.h>
+#include <asm/tlbflush.h>
 
 /*
  * This routine handles page faults. It determines the address and the
@@ -278,6 +279,18 @@ vmalloc_fault:
 		pte_k = pte_offset_kernel(pmd_k, addr);
 		if (!pte_present(*pte_k))
 			goto no_context;
+
+		/*
+		 * The kernel assumes that TLBs don't cache invalid
+		 * entries, but in RISC-V, SFENCE.VMA specifies an
+		 * ordering constraint, not a cache flush; it is
+		 * necessary even after writing invalid entries.
+		 * Relying on flush_tlb_fix_spurious_fault would
+		 * suffice, but the extra traps reduce
+		 * performance. So, eagerly SFENCE.VMA.
+		 */
+		local_flush_tlb_page(addr);
+
 		return;
 	}
 }
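For reference, a minimal sketch of what the local_flush_tlb_page() call added above amounts to, assuming the helper declared by the newly included asm/tlbflush.h header (in upstream RISC-V this lives in arch/riscv/include/asm/tlbflush.h): it issues a single address-specific SFENCE.VMA on the local hart, which provides the ordering the comment in the hunk describes.

/* Sketch: per-address TLB flush on the local hart. The "r" (addr)
 * operand passes the faulting virtual address to SFENCE.VMA so only
 * translations for that address are affected; the "memory" clobber
 * keeps the compiler from reordering page-table writes past it. */
static inline void local_flush_tlb_page(unsigned long addr)
{
	__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
}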