mm: SLAB hardened usercopy support

Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
SLAB allocator to catch any copies that may span objects.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Valdis Kletnieks <valdis.kletnieks@vt.edu>
commit 04385fc5e8 (parent 97433ea4fd)
Author: Kees Cook
Date:   2016-06-23 15:20:59 -07:00

 2 files changed, 31 insertions(+), 0 deletions(-)
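
As context for the diff below, here is a minimal, self-contained userspace
sketch of the span check the new SLAB hook performs. The mock_cache type,
its single field, and the sample sizes are hypothetical stand-ins for
struct kmem_cache and real slab layout; this is not code from the commit.

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the one kmem_cache field the check needs. */
struct mock_cache {
	size_t object_size;	/* usable bytes per object */
};

/*
 * Mirrors the check in the hook: the copy must start inside the object
 * and must not run past its end. Returns true when the copy is allowed.
 */
static bool copy_within_object(const struct mock_cache *cachep,
			       size_t offset, size_t n)
{
	return offset <= cachep->object_size &&
	       n <= cachep->object_size - offset;
}

int main(void)
{
	struct mock_cache cache = { .object_size = 128 };

	/* 64 bytes at offset 32 fit entirely inside a 128-byte object. */
	printf("in-bounds copy: %s\n",
	       copy_within_object(&cache, 32, 64) ? "allowed" : "rejected");

	/* 128 bytes at offset 32 would spill into the next object. */
	printf("spanning copy:  %s\n",
	       copy_within_object(&cache, 32, 128) ? "allowed" : "rejected");

	return 0;
}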

@@ -1758,6 +1758,7 @@ choice
 
 config SLAB
 	bool "SLAB"
+	select HAVE_HARDENED_USERCOPY_ALLOCATOR
 	help
 	  The regular slab allocator that is established and known to work
 	  well in all environments. It organizes cache hot objects in

@@ -4477,6 +4477,36 @@ static int __init slab_proc_init(void)
 module_init(slab_proc_init);
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
+/*
+ * Rejects objects that are incorrectly sized.
+ *
+ * Returns NULL if check passes, otherwise const char * to name of cache
+ * to indicate an error.
+ */
+const char *__check_heap_object(const void *ptr, unsigned long n,
+				struct page *page)
+{
+	struct kmem_cache *cachep;
+	unsigned int objnr;
+	unsigned long offset;
+
+	/* Find and validate object. */
+	cachep = page->slab_cache;
+	objnr = obj_to_index(cachep, page, (void *)ptr);
+	BUG_ON(objnr >= cachep->num);
+
+	/* Find offset within object. */
+	offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep);
+
+	/* Allow address range falling entirely within object size. */
+	if (offset <= cachep->object_size && n <= cachep->object_size - offset)
+		return NULL;
+
+	return cachep->name;
+}
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 /**
  * ksize - get the actual amount of memory allocated for a given object
  * @objp: Pointer to the object
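
The hook above only reports; the decision to stop a bad copy is made by the
common hardened-usercopy checker that consumes its return value (NULL means
the copy stays within one object, a non-NULL string names the offending
cache). The sketch below is a hedged, userspace-only illustration of that
caller-side contract; the helper names, the 128-byte cutoff, and the message
text are invented for the example and are not the series' actual code.

#include <stdio.h>
#include <stdlib.h>

/*
 * Hypothetical stand-in for the per-allocator hook: NULL means the copy
 * stays within a single object, a non-NULL string names the offending
 * cache. Here anything larger than 128 bytes is treated as spanning.
 */
static const char *mock_check_heap_object(const void *ptr, unsigned long n)
{
	(void)ptr;
	return n <= 128 ? NULL : "kmalloc-128";
}

/*
 * Caller-side contract: let a NULL result through, otherwise report the
 * cache by name and stop. The kernel would BUG() where this aborts.
 */
static void check_object_size(const void *ptr, unsigned long n)
{
	const char *err = mock_check_heap_object(ptr, n);

	if (err) {
		fprintf(stderr,
			"usercopy: copy spans objects (%s, %lu bytes)\n",
			err, n);
		abort();
	}
}

int main(void)
{
	char buf[256];

	check_object_size(buf, 64);	/* allowed */
	puts("64-byte copy allowed");

	check_object_size(buf, 200);	/* reported, then aborts */
	return 0;
}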