59298997df
The check_object_size() helper under CONFIG_HARDENED_USERCOPY is designed
to skip any checks where the length is known at compile time as a
reasonable heuristic to avoid "likely known-good" cases. However, it can
only do this when the copy_*_user() helpers are, themselves, inline too.
Using find_vmap_area() requires taking a spinlock. The
check_object_size() helper can call find_vmap_area() when the destination
is in vmap memory. If show_regs() is called in interrupt context, it will
attempt a call to copy_from_user_nmi(), which may call check_object_size()
and then find_vmap_area(). If something in normal context happens to be
in the middle of calling find_vmap_area() (with the spinlock held), the
interrupt handler will hang forever.
The copy_from_user_nmi() call is actually being called with a fixed-size
length, so check_object_size() should never have been called in the first
place. Given the narrow constraints, just replace the
__copy_from_user_inatomic() call with an open-coded version that calls
only into the sanitizers and not check_object_size(), followed by a call
to raw_copy_from_user().
[akpm@linux-foundation.org: no instrument_copy_from_user() in my tree...]
Link: https://lkml.kernel.org/r/20220919201648.2250764-1-keescook@chromium.org
Link: https://lore.kernel.org/all/CAOUHufaPshtKrTWOz7T7QFYUNVGFm0JBjvM700Nhf9qEL9b3EQ@mail.gmail.com
Fixes: 0aef499f31 ("mm/usercopy: Detect vmalloc overruns")
Signed-off-by: Kees Cook <keescook@chromium.org>
Reported-by: Yu Zhao <yuzhao@google.com>
Reported-by: Florian Lehner <dev@der-flo.net>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Florian Lehner <dev@der-flo.net>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
53 lines
1.4 KiB
C
/*
 * User address space access functions.
 *
 * For licencing details see kernel-base/COPYING
 */

#include <linux/uaccess.h>
#include <linux/export.h>

#include <asm/tlbflush.h>

/**
 * copy_from_user_nmi - NMI safe copy from user
 * @to: Pointer to the destination buffer
 * @from: Pointer to a user space address of the current task
 * @n: Number of bytes to copy
 *
 * Returns: The number of not copied bytes. 0 is success, i.e. all bytes copied
 *
 * Contrary to other copy_from_user() variants this function can be called
 * from NMI context. Despite the name it is not restricted to be called
 * from NMI context. It is safe to be called from any other context as
 * well. It disables pagefaults across the copy which means a fault will
 * abort the copy.
 *
 * For NMI context invocations this relies on the nested NMI work to allow
 * atomic faults from the NMI path; the nested NMI paths are careful to
 * preserve CR2.
 */
unsigned long
copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
{
	unsigned long ret;

	if (!__access_ok(from, n))
		return n;

	if (!nmi_uaccess_okay())
		return n;

	/*
	 * Even though this function is typically called from NMI/IRQ context
	 * disable pagefaults so that its behaviour is consistent even when
	 * called from other contexts.
	 */
	pagefault_disable();
	ret = raw_copy_from_user(to, from, n);
	pagefault_enable();

	return ret;
}
EXPORT_SYMBOL_GPL(copy_from_user_nmi);