Fix:
../arch/x86/include/asm/proto.h:14:30: warning: ‘struct task_struct’ declared \
inside parameter list will not be visible outside of this definition or declaration
long do_arch_prctl_64(struct task_struct *task, int option, unsigned long arg2);
^~~~~~~~~~~
.../arch/x86/include/asm/proto.h:40:34: warning: ‘struct task_struct’ declared \
inside parameter list will not be visible outside of this definition or declaration
long do_arch_prctl_common(struct task_struct *task, int option,
^~~~~~~~~~~
if linux/sched.h hasn't been included previously. This fixes a build error
when this header is used outside of the kernel tree.
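The fix amounts to adding a forward declaration so the prototypes are
well-formed on their own; a minimal sketch of the idea (not necessarily
the exact patch):

  /* asm/proto.h */
  struct task_struct;	/* forward declaration, <linux/sched.h> not required */

  long do_arch_prctl_64(struct task_struct *task, int option, unsigned long arg2);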
[ bp: Massage commit message. ]
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/b76b4be3-cf66-f6b2-9a6c-3e7ef54f9845@web.de
Kernel mode NEON can be used in task or softirq context, but only in
a non-nesting manner, i.e., softirq context is only permitted if the
interrupt was not taken at a point where the kernel was using the NEON
in task context.
This means all users of kernel mode NEON have to be aware of this
limitation, and either need to provide scalar fallbacks that may be much
slower (up to 20x for AES instructions) and potentially less safe, or
use an asynchronous interface that defers processing to a later time
when the NEON is guaranteed to be available.
Given that grabbing and releasing the NEON is cheap, we can relax this
restriction, by increasing the granularity of kernel mode NEON code, and
always disabling softirq processing while the NEON is being used in task
context.
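With this change, claiming the NEON in task context also holds off
softirq processing for the duration of the section, so a softirq can
never find the NEON already in use. A minimal usage sketch (the softirq
handling happens inside the begin/end helpers; the comments describe
the intent, not the exact implementation):

  #include <asm/neon.h>

  static void neon_section_example(void (*neon_op)(void))
  {
      kernel_neon_begin();	/* also keeps softirqs from running here */
      neon_op();		/* NEON-using work, kept reasonably short */
      kernel_neon_end();	/* softirq processing may resume */
  }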
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210302090118.30666-4-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The AArch64 asm syntax has this slightly tedious property that the names
used in mnemonics to refer to registers depend on whether the opcode in
question targets the entire 64 bits (xN) or only the least significant
8, 16 or 32 bits (wN). When writing parameterized code such as macros,
this can be annoying, as macro arguments don't lend themselves to
indexed lookups, and so generating a reference to wN in a macro that
receives xN as an argument is problematic.
For instance, an upcoming patch that modifies the implementation of the
cond_yield macro to be able to refer to 32-bit registers would need to
modify invocations such as
cond_yield 3f, x8
to
cond_yield 3f, 8
so that the second argument can be token-pasted after x or w to emit the
correct register reference. Unfortunately, this interferes with the
self-documenting nature of the first example, where the second argument is
obviously a register, whereas in the second example, one would need to
go and look at the code to find out what '8' means.
So let's fix this by defining wxN aliases for all xN registers, which
resolve to the 32-bit alias of each respective 64-bit register. This
allows the macro implementation to paste the xN reference after a w to
obtain the correct register name.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210302090118.30666-3-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The users of the conditional NEON yield macros have all been switched to
the simplified cond_yield macro, and so the NEON-specific ones can be
removed.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210302090118.30666-2-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Trigger a warning if any of the unwinder tests fail. This should help
prevent test failures from being quietly ignored when panic_on_warn is
enabled.
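A minimal sketch of the idea (the actual test code and message differ):

  WARN(test_failed, "unwind test %s failed\n", test_name);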
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Handle the case of "unwind state reliable but addr is 0" like the other
error cases in this function and trigger output of the failing
stacktrace to aid debugging.
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Add arch_ prefix to all atomic operations, and define ARCH_ATOMIC.
This enables KASAN instrumentation for all atomic operations on s390.
This is the s390 variant of commit 8bf705d130 ("locking/atomic/x86:
Switch atomic.h to use atomic-instrumented.h").
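With ARCH_ATOMIC defined, the architecture supplies arch_atomic_*()
primitives and the generic atomic-instrumented.h wrappers add the
KASAN/KCSAN checks on top, roughly along these lines (a simplified
sketch, not the verbatim generated header):

  static __always_inline void atomic_add(int i, atomic_t *v)
  {
      instrument_atomic_read_write(v, sizeof(*v));
      arch_atomic_add(i, v);
  }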
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
s390 is the only architecture in the kernel which makes use of gcc's
atomic builtin functions. Even though I don't see any technical
problem with that right now, remove this code and open-code
compare-and-swap loops again, like every other architecture does.
We can switch to a generic implementation once other architectures do
the same.
See also https://lwn.net/Articles/586838/ for further details.
This basically reverts commit f318a1229b ("s390/cmpxchg: use
compiler builtins").
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
s390 is the only architecture in the kernel which makes use of gcc's
atomic builtin functions. Even though I don't see any technical
problem with that right now, remove this code and open-code
compare-and-swap loops again, like every other architecture does.
We can switch to a generic implementation once other architectures do
the same.
See also https://lwn.net/Articles/586838/ for further details.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Use the R, T, and S constraints instead of the Q constraint in atomic
inline assemblies wherever possible. This allows the compiler to
generate better code (~2kb smaller code size).
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Move all remaining inline assemblies from atomic.h to
atomic_ops.h. That way all atomic inline assemblies are
contained within only a single header file.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
The bitops code was optimized to generate test-under-mask instructions
with the __bitops_byte() helper. However, that was many years ago, and
in the meantime a lot of new instructions have been introduced.
Changing the code so that it always operates on longs nowadays even
generates shorter code (~20kb smaller, defconfig, gcc 10, march=zE12).
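The word-based addressing that replaces the byte-based helper looks
roughly like this (a simplified, non-atomic sketch; the real code feeds
the computed mask into an atomic OR/AND/XOR primitive):

  static inline void set_bit_example(unsigned long nr,
                                     volatile unsigned long *ptr)
  {
      unsigned long *addr = (unsigned long *)ptr + nr / BITS_PER_LONG;
      unsigned long mask = 1UL << (nr & (BITS_PER_LONG - 1));

      *addr |= mask;
  }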
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Add conditional trap handlers, similar to conditional system calls
(COND_SYSCALL), to reduce the number of ifdefs.
Trap handlers which may or may not exist depending on config options
are supposed to have a COND_TRAP entry, which redirects non-existent
trap handlers to default_trap_handler() at link time.
This makes it possible to get rid of the secure execution trap handlers
for the !PGSTE case.
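A minimal sketch of what a COND_TRAP entry could look like, assuming a
COND_SYSCALL-like weak-symbol fallback (the actual macro and handler
signature may differ):

  #define COND_TRAP(name)						\
      void __weak name(struct pt_regs *regs)			\
      {								\
          default_trap_handler(regs);				\
      }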
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Currently zpci_configure_device() can be called on a zPCI function in
two completely different states: either the underlying zPCI function
has already been configured by the platform and we only do the scanning
to make it usable by Linux drivers, or the underlying function is in
Standby and we first issue an SCLP call to get it configured. This
makes zpci_configure_device() harder to reason about. Since calling
zpci_configure_device() on a function in Standby only happens in
enable_slot(), simply pull out the SCLP call and the setting of
zdev->state, so that zpci_configure_device() is called under the same
circumstances as in the event handling code.
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Now that the zbus can be created without being scanned we can go one
step further and make registering a device to a zbus independent from
scanning it. This way the zbus handling becomes much more natural in
that functions can be registered on the zbus to be scanned later, more
closely resembling the handling of both real PCI hardware and other
virtual PCI busses like Hyper-V's virtual PCI bus (see for example
drivers/pci/controller/pci-hyperv.c:create_root_hv_pci_bus()).
Having zbus registration separate from scanning allows us to return
fully initialized but still disabled zdevs from zpci_create_device(),
which can then be configured just as we would configure a zdev from
Standby (minus the SCLP Configure already done by the platform). There
is still the exception that a PCI function with non-zero devfn can be
plugged before its PCI bus, which depends on the function with devfn
zero, is created. In this case the zdev returned from
zpci_create_device() is still missing its bus, hotplug slot, and
resources, which need to be created later, but at least it doesn't wait
in the enabled state and can otherwise be treated as initialized.
With this we also separate the initial PCI scan using CLP List PCI
Functions into two phases. In the CLP loop's callback we only register
each function with a virtual zbus, creating the latter as needed. Then,
after we have built this virtual PCI topology based on our list of
zbusses, we can make use of the common code functionality to scan each
complete zbus as a separate child bus.
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Acked-by: Pierre Morel <pmorel@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
In a later change we will first collect all PCI functions from the CLP
List PCI Functions call and register them with the relevant zbus,
creating the latter as needed. Only after we've created our virtual bus
structure will we scan all zbusses, iterating over the zbus list. Since
scanning is relatively slow, a spinlock is a bad fit for protecting the
loop over the devices on the zbus. Furthermore, when probing devices on
the bus we need to use pci_lock_rescan_remove() as devices are added to
the PCI subsystem, and that is a mutex which can't be taken nested
inside a spinlock-protected section. Note that contention on this lock
should be very low either way, as zbusses are only added/removed
concurrently on hotplug events.
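To illustrate the constraint (the structure and lock names below are
assumptions made for this sketch, not the actual zbus code):

  static LIST_HEAD(zbus_list);
  static DEFINE_MUTEX(zbus_list_lock);	/* a spinlock would not work here */

  struct zbus_example {
      struct list_head bus_next;
      struct pci_bus *bus;
  };

  static void zbus_scan_all_example(void)
  {
      struct zbus_example *zbus;

      mutex_lock(&zbus_list_lock);
      list_for_each_entry(zbus, &zbus_list, bus_next) {
          pci_lock_rescan_remove();	/* takes a mutex, may sleep */
          pci_scan_child_bus(zbus->bus);
          pci_unlock_rescan_remove();
      }
      mutex_unlock(&zbus_list_lock);
  }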
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
In the existing code the creation of the PCI bus and the scanning of
function zero all happens in zpci_scan_bus(). This in turn requires
functions to be enabled and their resources to be available before the
PCI bus is even created.
This means that functions are enabled long before they are actually
made available to the common PCI subsystem; worse, functions with a
non-zero devfn which appear before the function with devfn zero can
wait arbitrarily long in this enabled but not scanned state.
Fix this by separating the creation of the PCI bus from scanning it,
and only prepare functions (that is, enable them and set up their MMIO
bus resources) just before they are scanned. As they may be scanned
multiple times, track in the zdev whether the resources have already
been created.
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Acked-by: Pierre Morel <pmorel@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Pull setting the maximum bus speed and multifunction attribute into
zpci_bus_scan() in preparation for handling bus creation separately
from scanning the bus.
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Acked-by: Pierre Morel <pmorel@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
To match zpci_bus_scan_device() and the PCI common code terminology,
and to remove some code duplication, we pull the multiple uses of
pci_scan_single_device() into a function. For now this has the side
effect of adding each device to the PCI bus separately, and locking and
unlocking the rescan/remove lock for each of them instead of just once
per bus. This is clearly less efficient, but provides a correct
intermediate behavior until a follow-on change does both the adding and
scanning only once per bus.
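Conceptually the helper looks something like this (the name and error
handling are assumptions; the per-device locking is the intermediate
behavior described above):

  static int zpci_bus_scan_device_example(struct pci_bus *bus,
                                          unsigned int devfn)
  {
      struct pci_dev *pdev;

      pci_lock_rescan_remove();
      pdev = pci_scan_single_device(bus, devfn);
      if (pdev)
          pci_bus_add_device(pdev);
      pci_unlock_rescan_remove();

      return pdev ? 0 : -ENODEV;
  }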
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Acked-by: Pierre Morel <pmorel@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Convert the program check table to C, which allows us to get rid of yet
another assembler file and also enables proper type checking for the
table.
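In C the table becomes an array of typed handler pointers, roughly like
this (the entry numbers and handler names are placeholders):

  typedef void (*pgm_check_func)(struct pt_regs *regs);

  static const pgm_check_func pgm_check_table_example[128] = {
      [0x01] = default_trap_handler,	/* placeholder entries */
      [0x05] = default_trap_handler,
  };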
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Use DECLARE_WAIT_QUEUE_HEAD to declare and statically initialize the
wait_queue_head_t.
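That is, a single line of the form (the name is illustrative):

  static DECLARE_WAIT_QUEUE_HEAD(example_wait_queue);

instead of declaring the wait_queue_head_t separately and initializing
it at runtime with init_waitqueue_head().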
Signed-off-by: Vineeth Vijayan <vneethv@linux.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Return a negative error code from the error handling path instead of 0,
as is done elsewhere in this function.
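The general shape of such a fix, with hypothetical names for
illustration only:

      rc = -ENOMEM;
      kobj = kobject_create_and_add("example", NULL);
      if (!kobj)
          goto out;	/* previously fell through and returned 0 */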
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Baisong Zhong <zhongbaisong@huawei.com>
Fixes: 37564ed834 ("s390/uv: add prot virt guest/host indication files")
Link: https://lore.kernel.org/r/2f7d62a4-3e75-b2b4-951b-75ef8ef59d16@huawei.com
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
* fixes:
s390/entry: save the caller of psw_idle
s390/entry: avoid setting up backchain in ext|io handlers
s390/setup: use memblock_free_late() to free old stack
s390/irq: fix reading of ext_params2 field from lowcore
s390/unwind: add machine check handler stack
s390/cpcmd: fix inline assembly register clobbering
MAINTAINERS: add backups for s390 vfio drivers
s390/vdso: fix initializing and updating of vdso_data
s390/vdso: fix tod_steering_delta type
s390/vdso: copy tod_steering_delta value to vdso_data page
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Currently psw_idle does not allocate a stack frame and does not save
its r14 and r15 into the save area. Even though this is valid from a
call-ABI point of view, because psw_idle does not explicitly make any
calls, in reality psw_idle is an entry point for a controlled
transition into serving interrupts. So, in practice, psw_idle's stack
frame is analyzed during stack unwinding. Depending on build options,
the r14 slot in the save area of psw_idle might contain either a value
saved by a previous sibling call or complete garbage.
[task 0000038000003c28] do_ext_irq+0xd6/0x160
[task 0000038000003c78] ext_int_handler+0xba/0xe8
[task *0000038000003dd8] psw_idle_exit+0x0/0x8 <-- pt_regs
([task 0000038000003dd8] 0x0)
[task 0000038000003e10] default_idle_call+0x42/0x148
[task 0000038000003e30] do_idle+0xce/0x160
[task 0000038000003e70] cpu_startup_entry+0x36/0x40
[task 0000038000003ea0] arch_call_rest_init+0x76/0x80
So, to make the stacktrace nicer and actually point to the real caller
of psw_idle in this frequently occurring case, make psw_idle save its
r14.
[task 0000038000003c28] do_ext_irq+0xd6/0x160
[task 0000038000003c78] ext_int_handler+0xba/0xe8
[task *0000038000003dd8] psw_idle_exit+0x0/0x6 <-- pt_regs
([task 0000038000003dd8] arch_cpu_idle+0x3c/0xd0)
[task 0000038000003e10] default_idle_call+0x42/0x148
[task 0000038000003e30] do_idle+0xce/0x160
[task 0000038000003e70] cpu_startup_entry+0x36/0x40
[task 0000038000003ea0] arch_call_rest_init+0x76/0x80
Reviewed-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Currently, when an interrupt arrives while the CPU is in kernel
context, the INT_HANDLER macro (used for ext_int_handler and
io_int_handler) allocates a new stack frame and pt_regs on the kernel
stack and sets up the backchain to jump over the pt_regs to the frame
which has been interrupted. This is not ideal for two reasons:
1. It hides the fact that the kernel stack contains an interrupt frame,
and hence breaks arch_stack_walk_reliable(), which needs to know that
to guarantee "reliability" and checks that there are no pt_regs on the
way.
2. It breaks the backchain unwinder logic, which assumes that the next
stack frame after an interrupt frame is reliable, while it is not.
In some cases (when r14 contains garbage) this leads to early
termination of unwinding with an error, instead of marking the frame as
unreliable and continuing.
To address this, only set the backchain to 0.
Fixes: 56e62a7370 ("s390: convert to generic entry")
Reviewed-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
The test in ovl_dentry_version_inc() was outdated and did not cover the
case where the readdir cache is used on a non-merge dir that has an
origin xattr, indicating that it may contain leftover whiteouts.
To make the code more robust, use the same helper ovl_dir_is_real() to
determine both whether the readdir cache should be used and whether it
should be invalidated.
Fixes: b79e05aaa1 ("ovl: no direct iteration for dir with origin xattr")
Link: https://lore.kernel.org/linux-unionfs/CAOQ4uxht70nODhNHNwGFMSqDyOKLXOKrY0H6g849os4BQ7cokA@mail.gmail.com/
Cc: Chris Murphy <lists@colorremedies.com>
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Commit 146d62e5a5 ("ovl: detect overlapping layers") made sure we don't
have overlapping layers, but it also broke the arguably valid use case of
mount -olowerdir=/,upperdir=/subdir,..
where upperdir overlaps lowerdir on the same filesystem. This has been
causing regressions.
Revert the check, but only for the specific case where upperdir and/or
workdir are subdirectories of lowerdir. Any other overlap case (e.g.
lowerdir is a subdirectory of upperdir, etc.) is crazy, so leave the
check in place for those.
Overlaps are detected at lookup time too, so reverting the mount time check
should be safe.
Fixes: 146d62e5a5 ("ovl: detect overlapping layers")
Cc: <stable@vger.kernel.org> # v5.2
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
This was missed when adding the option.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Reviewed-by: Vivek Goyal <vgoyal@redhat.com>
Fixes: 2d2f2d7322 ("ovl: user xattr")
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
A typo was found by the codespell tool:
$ codespell ./fs/overlayfs/
./fs/overlayfs/util.c:217: dependig ==> depending
Fix it.
Signed-off-by: Xiong Zhenwu <xiong.zhenwu@zte.com.cn>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
In ovl_xattr_set() we have already copied the attr of the real inode,
so there is no need to copy it again in ovl_posix_acl_xattr_set().
Signed-off-by: Chengguang Xu <cgxu519@mykernel.net>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
There are some places in ovl_fill_super() that should return -EINVAL
instead of -ENOMEM.
[Amir] Consistently set the error code before checking the error
condition.
Signed-off-by: Chengguang Xu <cgxu519@mykernel.net>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Smatch complains that ovl_override_creds() doesn't have a matching
revert_creds() if the dentry is disconnected. Fix this by moving the
ovl_override_creds() call until after the disconnected check.
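The reordering, sketched with a simplified body (the copy-up work
itself is elided; the helper names follow the overlayfs code, the rest
is illustrative):

  static int ovl_copy_up_example(struct dentry *dentry, bool disconnected)
  {
      const struct cred *old_cred;
      int err;

      /* check first, so every later exit path reverts the creds */
      if (WARN_ON(disconnected && d_is_dir(dentry)))
          return -EIO;

      old_cred = ovl_override_creds(dentry->d_sb);
      err = 0;	/* ... actual copy-up work elided ... */
      revert_creds(old_cred);

      return err;
  }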
Fixes: aa3ff3c152 ("ovl: copy up of disconnected dentries")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Since commit 6815f479ca ("ovl: use only uppermetacopy state in
ovl_lookup()"), overlayfs doesn't put the temporary dentry when there
is a metacopy error, which leads to dentry leaks when shutting down the
related superblock:
overlayfs: refusing to follow metacopy origin for (/file0)
...
BUG: Dentry (____ptrval____){i=3f33,n=file3} still in use (1) [unmount of overlay overlay]
...
WARNING: CPU: 1 PID: 432 at umount_check.cold+0x107/0x14d
CPU: 1 PID: 432 Comm: unmount-overlay Not tainted 5.12.0-rc5 #1
...
RIP: 0010:umount_check.cold+0x107/0x14d
...
Call Trace:
d_walk+0x28c/0x950
? dentry_lru_isolate+0x2b0/0x2b0
? __kasan_slab_free+0x12/0x20
do_one_tree+0x33/0x60
shrink_dcache_for_umount+0x78/0x1d0
generic_shutdown_super+0x70/0x440
kill_anon_super+0x3e/0x70
deactivate_locked_super+0xc4/0x160
deactivate_super+0xfa/0x140
cleanup_mnt+0x22e/0x370
__cleanup_mnt+0x1a/0x30
task_work_run+0x139/0x210
do_exit+0xb0c/0x2820
? __kasan_check_read+0x1d/0x30
? find_held_lock+0x35/0x160
? lock_release+0x1b6/0x660
? mm_update_next_owner+0xa20/0xa20
? reacquire_held_locks+0x3f0/0x3f0
? __sanitizer_cov_trace_const_cmp4+0x22/0x30
do_group_exit+0x135/0x380
__do_sys_exit_group.isra.0+0x20/0x20
__x64_sys_exit_group+0x3c/0x50
do_syscall_64+0x45/0x70
entry_SYSCALL_64_after_hwframe+0x44/0xae
...
VFS: Busy inodes after unmount of overlay. Self-destruct in 5 seconds. Have a nice day...
This fix has been tested with a syzkaller reproducer.
Cc: Amir Goldstein <amir73il@gmail.com>
Cc: <stable@vger.kernel.org> # v5.8+
Reported-by: syzbot <syzkaller@googlegroups.com>
Fixes: 6815f479ca ("ovl: use only uppermetacopy state in ovl_lookup()")
Signed-off-by: Mickaël Salaün <mic@linux.microsoft.com>
Link: https://lore.kernel.org/r/20210329164907.2133175-1-mic@digikod.net
Reviewed-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Commit a888db3101 ("ovl: fix regression with re-formatted lower
squashfs") attempted to fix a regression with existing setups that
use a practice that we are trying to discourage.
The discouraged practice was described this way in the commit message:
"To avoid the reported regression while still allowing the new features
with single lower squashfs, do not allow decoding origin with lower null
uuid unless user opted-in to one of the new features that require
following the lower inode of non-dir upper (index, xino, metacopy)."
The three mentioned features are disabled by default in Kconfig, so
it was assumed that if they are enabled, the user opted-in for them.
Apparently, distros started to configure CONFIG_OVERLAY_FS_XINO_AUTO=y
some time ago, so users upgrading their kernels can still be affected
by said regression even though they never opted-in for any new feature.
To fix this, treat "xino=on" as "user opted-in", but not "xino=auto".
Since we are changing the behavior of "xino=auto" to no longer follow
the lower origin with null uuid, take this one step further and disable
xino in that corner case. To be consistent, disable xino also in the
cases of a lower fs without file handle support and an upper fs without
xattr support.
Update documentation w.r.t. the new "xino=auto" behavior and fix the
outdated bits of documentation regarding "xino" and regarding offline
modifications to lower layers.
Link: https://lore.kernel.org/linux-unionfs/b36a429d7c563730c28d763d4d57a6fc30508a4f.1615216996.git.kevin@kevinlocke.name/
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
So far we only checked that sb is not read-only.
Suggested-by: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Filesystems can implement their own flush method that releases
resources or manipulates caches. Currently, if one of these filesystems
is used with overlayfs, the flush method is not called.
[Amir: fix fd leak in ovl_flush()]
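The essence of the change is to forward ->flush() to the underlying
real file when it implements it; a simplified sketch (the real
ovl_flush() also handles the fd lifetime that the fixup note above
refers to):

  static int ovl_flush(struct file *file, fl_owner_t id)
  {
      struct fd real;
      int err;

      err = ovl_real_fdget(file, &real);
      if (err)
          return err;

      if (real.file->f_op->flush)
          err = real.file->f_op->flush(real.file, id);
      fdput(real);

      return err;
  }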
Signed-off-by: Sargun Dhillon <sargun@sargun.me>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Add a debug printk to dump the GPIO configuration stored in EEPROM
during probe.
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Use the new GPIO valid-mask feature to inform gpiolib which pins are
available for use instead of handling that in a request callback.
This also allows user space to figure out which pins are available
through the chardev interface without having to request each pin in
turn.
Note that the return value when requesting an unavailable pin will now
be -EINVAL instead of -ENODEV.
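A minimal sketch of the mechanism, with driver-specific names made up
for illustration (the real driver derives the available pins from its
EEPROM configuration):

  struct example_priv {
      unsigned long available_pins;	/* bitmap of usable GPIO lines */
  };

  static int example_init_valid_mask(struct gpio_chip *gc,
                                     unsigned long *valid_mask,
                                     unsigned int ngpios)
  {
      struct example_priv *priv = gpiochip_get_data(gc);

      bitmap_zero(valid_mask, ngpios);
      bitmap_copy(valid_mask, &priv->available_pins, ngpios);
      return 0;
  }

  /* in probe(): gc->init_valid_mask = example_init_valid_mask; */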
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
When the superuser flushes the entire cache, the mmap_read_lock() is not
taken, but mmap_read_unlock() is called. Add the missing
mmap_read_lock() call.
Fixes: cd2567b685 ("m68k: call find_vma with the mmap_sem held in sys_cacheflush()")
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Link: https://lore.kernel.org/r/20210407200032.764445-1-Liam.Howlett@Oracle.com
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>