linux/arch/arm64/include/asm/page.h

/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Based on arch/arm/include/asm/page.h
*
* Copyright (C) 1995-2003 Russell King
* Copyright (C) 2012 ARM Ltd.
*/
#ifndef __ASM_PAGE_H
#define __ASM_PAGE_H
#include <asm/page-def.h>
#ifndef __ASSEMBLY__
#include <linux/personality.h> /* for READ_IMPLIES_EXEC */
#include <linux/types.h> /* for gfp_t */
#include <asm/pgtable-types.h>
struct page;
struct vm_area_struct;
extern void copy_page(void *to, const void *from);
extern void clear_page(void *to);
void copy_user_highpage(struct page *to, struct page *from,
			unsigned long vaddr, struct vm_area_struct *vma);
#define __HAVE_ARCH_COPY_USER_HIGHPAGE
void copy_highpage(struct page *to, struct page *from);
#define __HAVE_ARCH_COPY_HIGHPAGE
struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
						unsigned long vaddr);
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
void tag_clear_highpage(struct page *to);
#define __HAVE_ARCH_TAG_CLEAR_HIGHPAGE
#define clear_user_page(page, vaddr, pg) clear_page(page)
#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
typedef struct page *pgtable_t;
Partially revert "arm64/mm: drop HAVE_ARCH_PFN_VALID"

This partially reverts commit 16c9afc776608324ca71c0bc354987bab532f51d.

Alex Bee reports a regression in 5.14 on their RK3328 SoC when
configuring the PL330 DMA controller:

 | ------------[ cut here ]------------
 | WARNING: CPU: 2 PID: 373 at kernel/dma/mapping.c:235 dma_map_resource+0x68/0xc0
 | Modules linked in: spi_rockchip(+) fuse
 | CPU: 2 PID: 373 Comm: systemd-udevd Not tainted 5.14.0-rc7 #1
 | Hardware name: Pine64 Rock64 (DT)
 | pstate: 80000005 (Nzcv daif -PAN -UAO -TCO BTYPE=--)
 | pc : dma_map_resource+0x68/0xc0
 | lr : pl330_prep_slave_fifo+0x78/0xd0

This appears to be because dma_map_resource() is being called for a
physical address which does not correspond to a memory address yet does
have a valid 'struct page' due to the way in which the vmemmap is
constructed.

Prior to 16c9afc77660 ("arm64/mm: drop HAVE_ARCH_PFN_VALID"), the arm64
implementation of pfn_valid() called memblock_is_memory() to return
'false' for such regions and the DMA mapping request would proceed.
However, now that we are using the generic implementation where only
the presence of the memory map entry is considered, we return 'true'
and erroneously fail with DMA_MAPPING_ERROR because we identify the
region as DRAM.

Although fixing this in the DMA mapping code is arguably the right fix,
it is a risky, cross-architecture change at this stage in the cycle. So
just revert arm64 back to its old pfn_valid() implementation for v5.14.
The change to the generic pfn_valid() code is preserved from the
original patch, so as to avoid impacting other architectures.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Reported-by: Alex Bee <knaerzche@gmail.com>
Link: https://lore.kernel.org/r/d3a3c828-b777-faf8-e901-904995688437@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-08-25 11:10:07 +01:00
int pfn_valid(unsigned long pfn);
int pfn_is_map_memory(unsigned long pfn);
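The arm64-specific pfn_valid() that the revert above restores ties validity to memblock rather than to the mere presence of a memory map entry. The following is a minimal sketch of that idea, illustrative only: it assumes the simplified behaviour described in the commit message and omits the sparsemem section checks performed by the real implementation in arch/arm64/mm/init.c.

/* Illustrative sketch only; not the actual kernel code. */
#include <linux/memblock.h>
#include <linux/pfn.h>

int pfn_valid(unsigned long pfn)
{
	phys_addr_t addr = PFN_PHYS(pfn);

	/* Reject pfns whose upper bits would be lost by PFN_PHYS(). */
	if (PHYS_PFN(addr) != pfn)
		return 0;

	/*
	 * Unlike the generic helper, which only checks for a memory map
	 * entry, require the range to be memblock memory. MMIO regions
	 * such as the PL330 FIFO above then report invalid, and
	 * dma_map_resource() no longer mistakes them for DRAM.
	 */
	return memblock_is_memory(addr);
}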
#include <asm/memory.h>
#endif /* !__ASSEMBLY__ */
arm64: mte: Add PROT_MTE support to mmap() and mprotect()

To enable tagging on a memory range, the user must explicitly opt in
via a new PROT_MTE flag passed to mmap() or mprotect(). Since this is a
new memory type in the AttrIndx field of a pte, simplify the or'ing of
these bits over the protection_map[] attributes by making MT_NORMAL
index 0.

There are two conditions for arch_vm_get_page_prot() to return the
MT_NORMAL_TAGGED memory type: (1) the user requested it via PROT_MTE,
registered as VM_MTE in the vm_flags, and (2) the vma supports MTE,
decided during the mmap() call (only) and registered as VM_MTE_ALLOWED.

arch_calc_vm_prot_bits() is responsible for registering the user
request as VM_MTE. The newly introduced arch_calc_vm_flag_bits() sets
VM_MTE_ALLOWED if the mapping is MAP_ANONYMOUS. An MTE-capable
filesystem (RAM-based) may be able to set VM_MTE_ALLOWED during its
mmap() file ops call.

In addition, update VM_DATA_DEFAULT_FLAGS to allow mprotect(PROT_MTE)
on stack or brk area.

The Linux mmap() syscall currently ignores unknown PROT_* flags. In the
presence of MTE, an mmap(PROT_MTE) on a file which does not support MTE
will not report an error and the memory will not be mapped as Normal
Tagged. For consistency, mprotect(PROT_MTE) will not report an error
either if the memory range does not support MTE. Two subsequent patches
in the series will propose tightening of this behaviour.

Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
2019-11-27 10:00:27 +00:00
#define VM_DATA_DEFAULT_FLAGS (VM_DATA_FLAGS_TSK_EXEC | VM_MTE_ALLOWED)
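The PROT_MTE opt-in described in the commit message above is driven from userspace. The following is a minimal, hedged usage sketch: it assumes an arm64 kernel with MTE support, and defines PROT_MTE locally (0x20, the arm64 value) only in case the installed uapi headers do not provide it.

#include <stdlib.h>
#include <sys/mman.h>

#ifndef PROT_MTE
#define PROT_MTE 0x20	/* arm64 uapi value; defined here as a fallback */
#endif

int main(void)
{
	/* Anonymous mappings carry VM_MTE_ALLOWED, so PROT_MTE is honoured. */
	void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return EXIT_FAILURE;

	/* PROT_MTE can also be added later with mprotect() on such a range. */
	if (mprotect(p, 4096, PROT_READ | PROT_WRITE | PROT_MTE))
		return EXIT_FAILURE;

	munmap(p, 4096);
	return EXIT_SUCCESS;
}

Because VM_DATA_DEFAULT_FLAGS above includes VM_MTE_ALLOWED, the same mprotect(PROT_MTE) call is also accepted on the stack and brk areas.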
#include <asm-generic/getorder.h>
#endif