linux/kernel/bpf
Tatsuhiko Yasumatsu c4eb1f4032 bpf: Fix integer overflow involving bucket_size
In __htab_map_lookup_and_delete_batch(), hash buckets are iterated
over to count the number of elements in each bucket (bucket_size).
If bucket_size is large enough, the multiplication used to calculate
the kvmalloc() size can overflow, resulting in an out-of-bounds write,
as reported by KASAN (an arithmetic sketch follows the report):

  [...]
  [  104.986052] BUG: KASAN: vmalloc-out-of-bounds in __htab_map_lookup_and_delete_batch+0x5ce/0xb60
  [  104.986489] Write of size 4194224 at addr ffffc9010503be70 by task crash/112
  [  104.986889]
  [  104.987193] CPU: 0 PID: 112 Comm: crash Not tainted 5.14.0-rc4 #13
  [  104.987552] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
  [  104.988104] Call Trace:
  [  104.988410]  dump_stack_lvl+0x34/0x44
  [  104.988706]  print_address_description.constprop.0+0x21/0x140
  [  104.988991]  ? __htab_map_lookup_and_delete_batch+0x5ce/0xb60
  [  104.989327]  ? __htab_map_lookup_and_delete_batch+0x5ce/0xb60
  [  104.989622]  kasan_report.cold+0x7f/0x11b
  [  104.989881]  ? __htab_map_lookup_and_delete_batch+0x5ce/0xb60
  [  104.990239]  kasan_check_range+0x17c/0x1e0
  [  104.990467]  memcpy+0x39/0x60
  [  104.990670]  __htab_map_lookup_and_delete_batch+0x5ce/0xb60
  [  104.990982]  ? __wake_up_common+0x4d/0x230
  [  104.991256]  ? htab_of_map_free+0x130/0x130
  [  104.991541]  bpf_map_do_batch+0x1fb/0x220
  [...]
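
The wrap is easiest to see with concrete numbers. The sketch below is
purely illustrative (the sizes and types are examples, not the exact
kernel code):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          /* Illustrative sizes: 4096-byte values and ~1M colliding
           * elements collected into a single bucket.
           */
          uint32_t value_size  = 4096;
          uint32_t bucket_size = 1U << 20;

          /* The 32-bit product wraps to 0, so the allocation is tiny... */
          uint32_t alloc_size = value_size * bucket_size;

          /* ...while the batch copy still writes the full, unwrapped
           * amount.
           */
          uint64_t copy_size = (uint64_t)value_size * bucket_size;

          printf("alloc_size=%u copy_size=%llu\n",
                 alloc_size, (unsigned long long)copy_size);
          return 0;
  }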

In the hashtable, elements whose keys have the same jhash() value are
placed into the same bucket. By putting a large number of elements
into a single bucket, bucket_size can be made large enough to trigger
the integer overflow.

Triggering the overflow is possible both for callers with
CAP_SYS_ADMIN and for callers without it.

A caller with CAP_SYS_ADMIN can trivially reach this overflow by
enabling BPF_F_ZERO_SEED. Since this flag sets the random seed passed
to jhash() to 0, the caller can easily prepare keys that hash to the
same value and thus put all the elements into the same bucket.
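
For reference, a minimal sketch of creating such a map through the raw
bpf(2) syscall (key/value sizes are illustrative, error handling
omitted):

  #include <linux/bpf.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* Create a BPF_MAP_TYPE_HASH map whose jhash() seed is fixed to 0.
   * BPF_F_ZERO_SEED requires CAP_SYS_ADMIN.
   */
  static int create_zero_seed_htab(void)
  {
          union bpf_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.map_type    = BPF_MAP_TYPE_HASH;
          attr.key_size    = 4;
          attr.value_size  = 4096;
          attr.max_entries = 1 << 20;
          attr.map_flags   = BPF_F_ZERO_SEED;   /* jhash() seed = 0 */

          return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
  }

With the seed known to be zero, keys that collide on jhash() can be
precomputed offline and then inserted to inflate a single bucket.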

If the caller does not have CAP_SYS_ADMIN, BPF_F_ZERO_SEED cannot be
used. However, it is still technically possible to trigger the
overflow by guessing the random 32-bit seed passed to jhash() and
repeating the attempt. In that case the probability of success per
attempt is low, and the attack would take a very long time.

Fix the integer overflow by calling kvmalloc_array() instead of
kvmalloc() to allocate memory.
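
The pattern of the fix, sketched below (variable names and GFP flags
are illustrative, not the exact diff): kvmalloc_array() performs the
multiplication with an explicit overflow check and returns NULL if the
product would wrap, instead of silently under-sizing the buffer.

  /* Before: the multiplication can wrap, under-sizing the buffer. */
  values = kvmalloc(value_size * bucket_size, GFP_USER | __GFP_NOWARN);

  /* After: count and element size are passed separately, and the
   * allocation fails cleanly on overflow.
   */
  values = kvmalloc_array(bucket_size, value_size,
                          GFP_USER | __GFP_NOWARN);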

Fixes: 057996380a ("bpf: Add batch ops to all htab bpf map")
Signed-off-by: Tatsuhiko Yasumatsu <th.yasumatsu@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210806150419.109658-1-th.yasumatsu@gmail.com
2021-08-07 01:39:22 +02:00
preload libbpf: Move BPF_SEQ_PRINTF and BPF_SNPRINTF to bpf_helpers.h 2021-05-26 10:45:41 -07:00
arraymap.c bpf: Add batched ops support for percpu array 2021-04-28 01:17:45 +02:00
bpf_inode_storage.c bpf: Fix spelling mistakes 2021-05-24 21:13:05 -07:00
bpf_iter.c bpf: Prepare bpf syscall to be used from kernel and user space. 2021-05-19 00:33:40 +02:00
bpf_local_storage.c bpf: Prevent deadlock from recursive bpf_task_storage_[get|delete] 2021-02-26 11:51:48 -08:00
bpf_lru_list.c bpf_lru_list: Read double-checked variable once without lock 2021-02-10 15:54:26 -08:00
bpf_lru_list.h bpf: Fix a typo "inacitve" -> "inactive" 2020-04-06 21:54:10 +02:00
bpf_lsm.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next 2021-06-17 11:54:56 -07:00
bpf_struct_ops_types.h bpf: tcp: Support tcp_congestion_ops in bpf 2020-01-09 08:46:18 -08:00
bpf_struct_ops.c bpf: Fix fexit trampoline. 2021-03-18 00:22:51 +01:00
bpf_task_storage.c bpf: Make symbol 'bpf_task_storage_busy' static 2021-03-16 12:24:20 -07:00
btf.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next 2021-06-17 11:54:56 -07:00
cgroup.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next 2021-02-16 13:14:06 -08:00
core.c bpf: Introduce BPF nospec instruction for mitigating Spectre v4 2021-07-29 00:20:56 +02:00
cpumap.c xdp: Add proper __rcu annotations to redirect map entries 2021-06-24 19:41:15 +02:00
devmap.c bpf, devmap: Convert remaining READ_ONCE() to rcu_dereference_check() 2021-07-01 09:28:38 +02:00
disasm.c bpf: Introduce BPF nospec instruction for mitigating Spectre v4 2021-07-29 00:20:56 +02:00
disasm.h treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 295 2019-06-05 17:36:38 +02:00
dispatcher.c bpf: Remove bpf_image tree 2020-03-13 12:49:52 -07:00
hashtab.c bpf: Fix integer overflow involving bucket_size 2021-08-07 01:39:22 +02:00
helpers.c bpf: Allow RCU-protected lookups to happen from bh context 2021-06-24 19:41:15 +02:00
inode.c bpf: Fix regression on BPF_OBJ_GET with non-O_RDWR flags 2021-06-22 14:57:43 +02:00
Kconfig bpf: Fix BPF_JIT kconfig symbol dependency 2021-05-20 23:48:37 +02:00
local_storage.c bpf: Fix NULL pointer dereference in bpf_get_local_storage() helper 2021-03-25 18:31:36 -07:00
lpm_trie.c bpf: Allow RCU-protected lookups to happen from bh context 2021-06-24 19:41:15 +02:00
Makefile bpf: Enable task local storage for tracing programs 2021-02-26 11:51:47 -08:00
map_in_map.c bpf: Relax max_entries check for most of the inner map types 2020-08-28 15:41:30 +02:00
map_in_map.h bpf: Add map_meta_equal map ops 2020-08-28 15:41:30 +02:00
map_iter.c bpf: Implement link_query callbacks in map element iterators 2020-08-21 14:01:39 -07:00
net_namespace.c bpf: Add support for forced LINK_DETACH command 2020-08-01 20:38:28 -07:00
offload.c bpf, offload: Replace bitwise AND by logical AND in bpf_prog_offload_info_fill 2020-02-17 16:53:49 +01:00
percpu_freelist.c bpf: Use raw_spin_trylock() for pcpu_freelist_push/pop in NMI 2020-10-06 00:04:11 +02:00
percpu_freelist.h bpf: Use raw_spin_trylock() for pcpu_freelist_push/pop in NMI 2020-10-06 00:04:11 +02:00
prog_iter.c bpf: Refactor bpf_iter_reg to have separate seq_info member 2020-07-25 20:16:32 -07:00
queue_stack_maps.c bpf: Eliminate rlimit-based memory accounting for queue_stack_maps maps 2020-12-02 18:32:46 -08:00
reuseport_array.c bpf: Fix spelling mistakes 2021-05-24 21:13:05 -07:00
ringbuf.c bpf: Fix false positive kmemleak report in bpf_ringbuf_area_alloc() 2021-06-28 15:57:46 +02:00
stackmap.c bpf: Refcount task stack in bpf_get_task_stack 2021-04-01 13:58:07 -07:00
syscall.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next 2021-06-17 11:54:56 -07:00
sysfs_btf.c bpf: Load and verify kernel module BTFs 2020-11-10 15:25:53 -08:00
task_iter.c bpf: Introduce task_vma bpf_iter 2021-02-12 12:56:53 -08:00
tnum.c bpf, tnums: Provably sound, faster, and more precise algorithm for tnum_mul 2021-06-01 13:34:15 +02:00
trampoline.c bpf: Fix spelling mistakes 2021-05-24 21:13:05 -07:00
verifier.c bpf: Fix leakage due to insufficient speculative store bypass mitigation 2021-07-29 00:27:52 +02:00