Commit Graph

798453 Commits

Author SHA1 Message Date
Martin KaFai Lau
f0187f0b17 bpf: libbpf: Refactor and bug fix on the bpf_func_info loading logic
This patch refactors and fixes a bug in libbpf's bpf_func_info loading
logic.  Both the bug fix and the refactoring target the same
commit 2993e0515b ("tools/bpf: add support to read .BTF.ext sections")
which is in the bpf-next branch.

1) In bpf_load_program_xattr(), it should retry when errno == E2BIG
   regardless of log_buf and log_buf_sz.  This patch fixes it (see the
   sketch after this list).

2) btf_ext__reloc_init() and btf_ext__reloc() are essentially
   the same except btf_ext__reloc_init() always has insns_cnt == 0.
   Hence, btf_ext__reloc_init() is removed.

   btf_ext__reloc() is also renamed to btf_ext__reloc_func_info()
   to get ready for the line_info support in the next patch.

3) Consolidate func_info section logic from "btf_ext_parse_hdr()",
   "btf_ext_validate_func_info()" and "btf_ext__new()" to
   a new function "btf_ext_copy_func_info()" such that similar
   logic can be reused by the later libbpf's line_info patch.

4) The next line_info patch will store line_info_cnt instead of
   line_info_len in the bpf_program because the kernel is taking
   line_info_cnt also.  It will save a few "len" to "cnt" conversions
   and will also save some function args.

   Hence, this patch also makes bpf_program store func_info_cnt
   instead of func_info_len.

5) btf_ext depends on btf.  e.g. the func_info's type_id
   in ".BTF.ext" is not useful when ".BTF" is absent.
   This patch only initializes the obj->btf_ext pointer after
   it has successfully initialized the obj->btf pointer.

   This avoids having to check "obj->btf && obj->btf_ext"
   together whenever accessing ".BTF.ext".  Checking "obj->btf_ext"
   alone will do.

6) Move "struct btf_sec_func_info" from btf.h to btf.c.
   There is no external usage outside btf.c.
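
For item 1, a minimal sketch of the intended control flow (hypothetical
helper names; not libbpf's exact code):

    fd = sys_bpf(BPF_PROG_LOAD, &attr, sizeof(attr));
    if (fd < 0 && errno == E2BIG && attr.func_info) {
            /* The kernel rejected our func_info_rec_size.  Retry
             * with the probed size, regardless of whether log_buf
             * and log_buf_sz were supplied. */
            attr.func_info_rec_size = probe_kernel_finfo_rec_size();
            fd = sys_bpf(BPF_PROG_LOAD, &attr, sizeof(attr));
    }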

Fixes: 2993e0515b ("tools/bpf: add support to read .BTF.ext sections")
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-09 13:54:38 -08:00
Martin KaFai Lau
4d6304c763 bpf: Add unit tests for bpf_line_info
Add unit tests for bpf_line_info for both BPF_PROG_LOAD and
BPF_OBJ_GET_INFO_BY_FD.

jit enabled:
[root@arch-fb-vm1 bpf]# ./test_btf -k 0
BTF prog info raw test[5] (line_info (No subprog)): OK
BTF prog info raw test[6] (line_info (No subprog. insn_off >= prog->len)): OK
BTF prog info raw test[7] (line_info (No subprog. zero tailing line_info): OK
BTF prog info raw test[8] (line_info (No subprog. nonzero tailing line_info)): OK
BTF prog info raw test[9] (line_info (subprog)): OK
BTF prog info raw test[10] (line_info (subprog + func_info)): OK
BTF prog info raw test[11] (line_info (subprog. missing 1st func line info)): OK
BTF prog info raw test[12] (line_info (subprog. missing 2nd func line info)): OK
BTF prog info raw test[13] (line_info (subprog. unordered insn offset)): OK

jit disabled:
BTF prog info raw test[5] (line_info (No subprog)): not jited. skipping jited_line_info check. OK
BTF prog info raw test[6] (line_info (No subprog. insn_off >= prog->len)): OK
BTF prog info raw test[7] (line_info (No subprog. zero tailing line_info): not jited. skipping jited_line_info check. OK
BTF prog info raw test[8] (line_info (No subprog. nonzero tailing line_info)): OK
BTF prog info raw test[9] (line_info (subprog)): not jited. skipping jited_line_info check. OK
BTF prog info raw test[10] (line_info (subprog + func_info)): not jited. skipping jited_line_info check. OK
BTF prog info raw test[11] (line_info (subprog. missing 1st func line info)): OK
BTF prog info raw test[12] (line_info (subprog. missing 2nd func line info)): OK
BTF prog info raw test[13] (line_info (subprog. unordered insn offset)): OK

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-09 13:54:38 -08:00
Martin KaFai Lau
05687352c6 bpf: Refactor and bug fix in test_func_type in test_btf.c
1) bpf_load_program_xattr() absorbs the E2BIG error, which makes
   testing this case impossible.  It is replaced with a direct
   syscall(__NR_bpf, BPF_PROG_LOAD, ...), sketched after this list.
2) test_func_type() is renamed to test_info_raw() to
   prepare for the new line_info test in the next patch.
3) The bpf_obj_get_info_by_fd() testing for func_info
   is refactored to test_get_finfo().  A new
   test_get_linfo() will be added in the next patch
   for testing line_info purpose.
4) The test->func_info_cnt is checked instead of
   a static value "2".
5) Remove unnecessary "\n" in error message.
6) Add back info_raw_test_num to the cmd arg such
   that a specific test case can be run, like
   all other existing tests.

7) Fix a bug in handling expected_prog_load_failure.
   A test could pass even if prog_fd != -1 while
   expected_prog_load_failure is true.
8) The min rec_size check should be < 8 instead of < 4.
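
For item 1 above, a minimal sketch of the direct load (prog_type, insns
and insn_cnt are assumed to be prepared; ptr_to_u64() is the usual
selftests helper).  This way E2BIG shows up directly in errno instead
of being absorbed:

    union bpf_attr attr = {};

    attr.prog_type = prog_type;
    attr.insns     = ptr_to_u64(insns);
    attr.insn_cnt  = insn_cnt;
    attr.license   = ptr_to_u64("GPL");
    /* the func_info fields under test are filled in here */

    prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));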

Fixes: 4798c4ba3b ("tools/bpf: extends test_btf to test load/retrieve func_type info")
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-09 13:54:38 -08:00
Martin KaFai Lau
ee491d8dbe bpf: tools: Sync uapi bpf.h
Sync uapi bpf.h to tools/include/uapi/linux for
the new bpf_line_info.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-09 13:54:38 -08:00
Martin KaFai Lau
c454a46b5e bpf: Add bpf_line_info support
This patch adds bpf_line_info support.

It accepts an array of bpf_line_info objects during BPF_PROG_LOAD.
The "line_info", "line_info_cnt" and "line_info_rec_size" are added
to the "union bpf_attr".  The "line_info_rec_size" makes
bpf_line_info extensible in the future.
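
For reference, the record and the attr wiring look roughly like this
(field names per the uapi header of this series; ptr_to_u64() is the
usual helper):

    struct bpf_line_info {
            __u32   insn_off;      /* instruction the line maps to */
            __u32   file_name_off; /* offset into the .BTF string section */
            __u32   line_off;      /* offset into the .BTF string section */
            __u32   line_col;      /* line num: line_col >> 10,
                                    * col num: line_col & 0x3ff */
    };

    attr.line_info          = ptr_to_u64(linfo);
    attr.line_info_cnt      = linfo_cnt;
    attr.line_info_rec_size = sizeof(struct bpf_line_info);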

The new "check_btf_line()" ensures the userspace line_info is valid
for the kernel to use.

When the verifier is translating/patching the bpf_prog (through
"bpf_patch_insn_single()"), the line_infos' insn_off is also
adjusted by the newly added "bpf_adj_linfo()".

If the bpf_prog is jited, this patch also provides the jited addrs (in
aux->jited_linfo) for the corresponding line_info.insn_off.
"bpf_prog_fill_jited_linfo()" is added to fill the aux->jited_linfo.
It is currently called by the x86 jit.  Other jits can also use
"bpf_prog_fill_jited_linfo()"; this will be done in followup patches.
In the future, if deemed necessary, a particular jit could also provide
its own "bpf_prog_fill_jited_linfo()" implementation.

A few "*line_info*" fields are added to the bpf_prog_info such
that the user can get the xlated line_info back (i.e. the line_info
with its insn_off reflecting the translated prog).  The jited_line_info
is available if the prog is jited.  It is an array of __u64.
If the prog is not jited, jited_line_info_cnt is 0.

The verifier's verbose log with line_info will be done in
a follow up patch.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-09 13:54:38 -08:00
Alexei Starovoitov
6baefa1aa4 Merge branch 'support-alu32_arsh'
Jiong Wang says:

====================
BPF_ALU | BPF_ARSH | BPF_* were rejected by commit 7891a87efc
("bpf: arsh is not supported in 32 bit alu thus reject it").  As explained
in the commit message, this was because there was no complete support for
them in the interpreter and the various JIT compilation back-ends.

This patch set is a follow-up which completes the missing bits.  It also
paves the way for running BPF programs compiled with ALU32 instructions
enabled by specifying -mattr=+alu32 to LLVM, in which case there are
likely to be more BPF_ALU | BPF_ARSH insns that would trigger the
rejection code.

test_verifier.c is updated accordingly.

I have tested this patch set on x86-64 and NFP, I need help of review and
test on the arch changes (mips/ppc/s390).

Note, there might be a merge conflict on the mips change, which is best
applied on top of:

  commit: 20b880a05f06 ("mips: bpf: fix encoding bug for mm_srlv32_op"),

which is on mips-fixes branch at the moment.

Thanks.

v1->v2:
 - Fix ppc implementation bug. Should zero high bits explicitly.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:30:49 -08:00
Jiong Wang
c099f3f413 selftests: bpf: update testcases for BPF_ALU | BPF_ARSH
"arsh32 on imm" and "arsh32 on reg" now are accepted. Also added two new
testcases to make sure arsh32 won't be treated as arsh64 during
interpretation or JIT code-gen for which case the high bits will be moved
into low halve that the testcases could catch them.

Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:30:48 -08:00
Jiong Wang
c49f7dbd4f bpf: verifier remove the rejection on BPF_ALU | BPF_ARSH
This patch removes the rejection of BPF_ALU | BPF_ARSH now that it is
supported by the interpreter and all JIT back-ends.

Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:30:48 -08:00
Jiong Wang
2dc6b100f9 bpf: interpreter support BPF_ALU | BPF_ARSH
This patch implements interpreting BPF_ALU | BPF_ARSH: do an arithmetic
right shift on the low 32-bit sub-register, and zero the high 32 bits.
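
A sketch of the semantics (close to, but not necessarily identical to,
the interpreter's exact code):

    ALU_ARSH_X:
            DST = (u64) (u32) (((s32) DST) >> SRC);
            CONT;
    ALU_ARSH_K:
            DST = (u64) (u32) (((s32) DST) >> IMM);
            CONT;

The (s32) cast makes the shift arithmetic on the low 32 bits; the
(u32)/(u64) casts zero-extend the result into the full register.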

Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:30:48 -08:00
Jiong Wang
84708c1386 nfp: bpf: implement jitting of BPF_ALU | BPF_ARSH | BPF_*
BPF_X support needs the indirect shift mode; please see the code
comments for details.

Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:30:48 -08:00
Jiong Wang
f860203b01 s390: bpf: implement jitting of BPF_ALU | BPF_ARSH | BPF_*
This patch implements code-gen for BPF_ALU | BPF_ARSH | BPF_*.

Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:30:48 -08:00
Jiong Wang
44cf43c04b ppc: bpf: implement jitting of BPF_ALU | BPF_ARSH | BPF_*
This patch implements code-gen for BPF_ALU | BPF_ARSH | BPF_*.

Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
Cc: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:30:48 -08:00
Jiong Wang
ee94b90c8a mips: bpf: implement jitting of BPF_ALU | BPF_ARSH | BPF_X
Jitting of BPF_K is supported already, but not BPF_X.  This patch
completes the support for the latter on both MIPS and microMIPS.

Cc: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Acked-by: Paul Burton <paul.burton@mips.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:30:48 -08:00
Jiong Wang
17f6c83fb5 mips: bpf: fix encoding bug for mm_srlv32_op
For micro-mips, srlv inside POOL32A encoding space should use 0x50
sub-opcode, NOT 0x90.

Some early versions of the ISA doc describe the encoding as 0x90 for
both srlv and srav; this looks like a typo to me.  I checked the
Binutils libopcode implementation, which uses 0x50 for srlv and 0x90
for srav.

v1->v2:
  - Keep mm_srlv32_op sorted by value.

Fixes: f31318fdf3 ("MIPS: uasm: Add srlv uasm instruction")
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:29:48 -08:00
Alexei Starovoitov
a06aef4ed5 Merge branch 'bpf_func_info-improvements'
Martin KaFai Lau says:

====================
The patchset has a few improvements on bpf_func_info:
1. Improvements on the behaviors of info.func_info, info.func_info_cnt
   and info.func_info_rec_size.
2. Name change: s/insn_offset/insn_off/

Please see the individual patches for details.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-05 18:48:41 -08:00
Martin KaFai Lau
84ecc1f98c bpf: Expect !info.func_info and insn_off name changes in test_btf/libbpf/bpftool
Similar to info.jited_*, info.func_info could be 0 if
bpf_dump_raw_ok() == false.

This patch makes changes to test_btf and bpftool to expect that
info.func_info may be 0.

This patch also makes the needed changes for s/insn_offset/insn_off/.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-05 18:48:40 -08:00
Martin KaFai Lau
555249df71 bpf: tools: Sync uapi bpf.h for the name changes in bpf_func_info
This patch syncs the name changes in bpf_func_info to
the tools/ directory.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-05 18:48:40 -08:00
Martin KaFai Lau
d30d42e08c bpf: Change insn_offset to insn_off in bpf_func_info
The later patch will introduce "struct bpf_line_info", which
has members "line_off" and "file_off" referring back to the
string section in btf.  The line_"off" and file_"off"
are more consistent with the naming convention in btf.h, where
"off" means "offset" (e.g. name_off in "struct btf_type").

The to-be-added "struct bpf_line_info" also has another
member, "insn_off" which is the same as the "insn_offset"
in "struct bpf_func_info".  Hence, this patch renames "insn_offset"
to "insn_off" for "struct bpf_func_info".

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-05 18:48:40 -08:00
Martin KaFai Lau
7337224fc1 bpf: Improve the info.func_info and info.func_info_rec_size behavior
1) When bpf_dump_raw_ok() == false and the kernel can provide >=1
   func_info to the userspace, the current behavior is setting
   the info.func_info_cnt to 0 instead of setting info.func_info
   to 0.

   It is different from the behavior in jited_func_lens/nr_jited_func_lens,
   jited_ksyms/nr_jited_ksyms...etc.

   This patch fixes it (i.e. sets func_info to 0 instead of
   func_info_cnt when bpf_dump_raw_ok() == false).

2) When the userspace passes in info.func_info_cnt == 0, the kernel
   sets the expected func_info size back into
   info.func_info_rec_size.  It is a way for the userspace to learn
   the kernel's expected func_info_rec_size, introduced in
   commit 838e96904f ("bpf: Introduce bpf_func_info").

   The exception is that the expected size is not set when
   func_info is not available for a bpf_prog.  This makes the
   returned info.func_info_rec_size take different values
   depending on the returned value of info.func_info_cnt.

   This patch sets the kernel's expected size in
   info.func_info_rec_size independent of info.func_info_cnt
   (see the probe sketch after this list).

3) The current logic only rejects an invalid func_info_rec_size if
   func_info_cnt is non-zero.  This patch also rejects a nonzero
   info.func_info_rec_size that is not equal to the kernel's
   expected size.

4) Set info.btf_id as long as prog->aux->btf != NULL.  That makes
   the later copy_to_user() code look the same as the others, which
   is then easier to understand and maintain.

   prog->aux->btf is not NULL only if prog->aux->func_info_cnt > 0.

   Breaking up info.btf_id from prog->aux->func_info_cnt is needed
   for the later line info patch anyway.

   A similar change is made to bpf_get_prog_name().
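
A sketch of the probing flow from item 2 (bpf_obj_get_info_by_fd() is
libbpf's wrapper around BPF_OBJ_GET_INFO_BY_FD):

    struct bpf_prog_info info = {};
    __u32 info_len = sizeof(info);

    /* func_info_cnt == 0: no records are copied back, but the
     * kernel still reports its expected record size. */
    if (!bpf_obj_get_info_by_fd(prog_fd, &info, &info_len))
            printf("kernel expects func_info_rec_size = %u\n",
                   info.func_info_rec_size);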

Fixes: 838e96904f ("bpf: Introduce bpf_func_info")
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-05 18:48:40 -08:00
Quentin Monnet
30da46b5dc tools: bpftool: add a command to dump the trace pipe
BPF programs can use the bpf_trace_printk() helper to print debug
information into the trace pipe. Add a subcommand
"bpftool prog tracelog" to simply dump this pipe to the console.

This is in good part copied from iproute2, where the feature is
available with "tc exec bpf dbg".  Changes include dumping pipe content
to stdout instead of stderr and adding JSON support (content is dumped
as an array of strings, one per line read from the pipe).  This version
is dual-licensed, with Daniel's permission.
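
Example invocation (simply dumps the pipe until interrupted):

    # bpftool prog tracelog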

Cc: Daniel Borkmann <daniel@iogearbox.net>
Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-12-05 16:41:52 +01:00
Daniel Borkmann
41888179b1 Merge branch 'bpf-jit-overridable-alloc'
Ard Biesheuvel says:

====================
On arm64, modules are allocated from a 128 MB window which is close to
the core kernel, so that relative direct branches are guaranteed to be
in range (except in some KASLR configurations). Also, module_alloc()
is in charge of allocating KASAN shadow memory when running with KASAN
enabled.

This means that the way BPF reuses module_alloc()/module_memfree() is
undesirable on arm64 (and potentially other architectures as well),
and so this series refactors BPF's use of those functions to permit
architectures to change this behavior.

Patch #1 breaks out the module_alloc() and module_memfree() calls into
__weak functions so they can be overridden.

Patch #2 implements the new alloc/free overrides for arm64

Changes since v3:
- drop 'const' modifier for free() hook void* argument
- move the dedicated BPF region to before the module region, putting it
  within 4GB of the module and kernel regions on non-KASLR kernels

Changes since v2:
- properly build time and runtime tested this time (log after the diffstat)
- create a dedicated 128 MB region at the top of the vmalloc space for BPF
  programs, ensuring that the programs will be in branching range of each
  other (which we currently rely upon) but at an arbitrary distance from
  the kernel and modules (which we don't care about)

Changes since v1:
- Drop misguided attempt to 'fix' and refactor the free path. Instead,
  just add another __weak wrapper for the invocation of module_memfree()
====================

Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Jann Horn <jannh@google.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: netdev@vger.kernel.org
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-12-05 16:36:30 +01:00
Ard Biesheuvel
91fc957c9b arm64/bpf: don't allocate BPF JIT programs in module memory
The arm64 module region is a 128 MB region that is kept close to
the core kernel, in order to ensure that relative branches are
always in range. So using the same region for programs that do
not have this restriction is wasteful, and preferably avoided.

Now that the core BPF JIT code permits the alloc/free routines to
be overridden, implement them by vmalloc()/vfree() calls from a
dedicated 128 MB region set aside for BPF programs. This ensures
that BPF programs are still in branching range of each other, which
is something the JIT currently depends upon (and is not guaranteed
when using module_alloc() on KASLR kernels like we do currently).
It also ensures that placement of BPF programs does not correlate
with the placement of the core kernel or modules, making it less
likely that leaking the former will reveal the latter.

This also solves an issue under KASAN, where shadow memory was
needlessly allocated for all BPF programs (which do not require KASAN
shadow pages, since they are not KASAN instrumented).
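
A sketch of the arm64 override (region bounds per the dedicated 128 MB
BPF region described above; exact prot/flags may differ from the
actual patch):

    void *bpf_jit_alloc_exec(unsigned long size)
    {
            return __vmalloc_node_range(size, PAGE_SIZE,
                                        BPF_JIT_REGION_START,
                                        BPF_JIT_REGION_END,
                                        GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
                                        NUMA_NO_NODE,
                                        __builtin_return_address(0));
    }

    void bpf_jit_free_exec(void *addr)
    {
            vfree(addr);
    }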

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-12-05 16:36:28 +01:00
Ard Biesheuvel
dc002bb62f bpf: add __weak hook for allocating executable memory
By default, BPF uses module_alloc() to allocate executable memory,
but this is not necessary on all arches and potentially undesirable
on some of them.

So break out the module_alloc() and module_memfree() calls into __weak
functions to allow them to be overridden in arch code.
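
The default implementations are, roughly:

    void *__weak bpf_jit_alloc_exec(unsigned long size)
    {
            return module_alloc(size);
    }

    void __weak bpf_jit_free_exec(void *addr)
    {
            module_memfree(addr);
    }

Arch code can now supply strong definitions without touching the core.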

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-12-05 16:36:28 +01:00
Alexei Starovoitov
2a95471c33 Merge branch 'prog_test_run-improvement'
Lorenz Bauer says:

====================
Right now, there is no safe way to use BPF_PROG_TEST_RUN with data_out.
This is because bpf_test_finish copies the output buffer to user space
without checking its size. This can lead to the kernel overwriting
data in user space after the buffer if xdp_adjust_head and friends are
in play.

Thanks to everyone for their advice and patience with this patch set!

Changes in v5:
* Fix up libbpf.map

Changes in v4:
* Document bpf_prog_test_run and bpf_prog_test_run_xattr
* Use struct bpf_prog_test_run_attr for return values

Changes in v3:
* Introduce bpf_prog_test_run_xattr instead of modifying the existing
  function

Changes in v2:
* Make the syscall return ENOSPC if data_size_out is too small
* Make bpf_prog_test_run return EINVAL if size_out is missing
* Document the new behaviour of data_size_out
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-04 08:18:14 -08:00
Lorenz Bauer
df47fc376d selftests: add a test for bpf_prog_test_run_xattr
Make sure that bpf_prog_test_run_xattr returns the correct length
and that the kernel respects the output size hint.  Also check
that errno indicates ENOSPC if a short output buffer is given.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-04 08:18:13 -08:00
Lorenz Bauer
64a975913b libbpf: add bpf_prog_test_run_xattr
Add a new function, which encourages safe usage of the test interface.
bpf_prog_test_run continues to work as before, but should be considered
unsafe.
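
A usage sketch (struct fields as in the libbpf header added by this
series):

    struct bpf_prog_test_run_attr tattr = {
            .prog_fd       = prog_fd,
            .data_in       = pkt_in,
            .data_size_in  = sizeof(pkt_in),
            .data_out      = buf,
            .data_size_out = sizeof(buf), /* capacity in, length out */
    };

    err = bpf_prog_test_run_xattr(&tattr);
    if (err && errno == ENOSPC)
            /* buf was too small for the (possibly grown) output */;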

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-04 08:18:13 -08:00
Lorenz Bauer
2587a974f1 tools: sync uapi/linux/bpf.h
Pull changes from "bpf: respect size hint to BPF_PROG_TEST_RUN if present".

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-04 08:18:13 -08:00
Lorenz Bauer
b5a36b1e1b bpf: respect size hint to BPF_PROG_TEST_RUN if present
Use data_size_out as a size hint when copying test output to user space.
ENOSPC is returned if the output buffer is too small.
Callers which so far did not set data_size_out are not affected.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-04 08:18:13 -08:00
Daniel T. Lee
d59dd69d55 samples: bpf: fix: seg fault with NULL pointer arg
When a NULL pointer is accidentally passed to write_kprobe_events(),
strlen(NULL) causes a segmentation fault.  The changed code returns -1
to deal with this situation (see the sketch below).

Bug found with Smatch, a static analysis tool.
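
The added guard, in essence (placed before the strlen() call):

    if (val == NULL)
            return -1;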

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-12-03 23:58:03 +01:00
Quentin Monnet
90b1023f68 bpf: fix documentation for eBPF helpers
The missing indentation on the "Return" sections for bpf_map_pop_elem()
and bpf_map_peek_elem() helpers breaks RST and man page generation.  This
patch fixes them, and moves the description of those two helpers towards
the end of the list (even though they are somewhat related to the first
three helpers for maps, the man page explicitly states that the helpers
are sorted in chronological order).

While at it, bring other minor formatting edits for eBPF helpers
documentation: mostly blank lines removal, RST formatting, or other
small nits for consistency.

Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-12-03 21:38:17 +01:00
Petar Penkov
e3da08d057 bpf: allow BPF read access to qdisc pkt_len
The pkt_len field in qdisc_skb_cb stores the skb length as it will
appear on the wire after segmentation.  For byte accounting, this value
is more accurate than skb->len.  It is computed on entry to the TC
layer, so it is only valid there.

Allow read access to this field from BPF tc classifier and action
programs.  The implementation is analogous to tc_classid, aside from
restricting it to read access.

To distinguish it from skb->len and make it self-describing, export it
as wire_len.
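
A sketch of a tc classifier reading the new field (section name and
accounting logic are illustrative; SEC() comes from bpf_helpers.h):

    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include "bpf_helpers.h"

    SEC("classifier")
    int account_wire_bytes(struct __sk_buff *skb)
    {
            /* post-segmentation length, only valid at the TC layer */
            __u32 wlen = skb->wire_len;

            /* ... do byte accounting with wlen instead of skb->len ... */
            return TC_ACT_OK;
    }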

Changes v1->v2
  - Rename pkt_len to wire_len

Signed-off-by: Petar Penkov <ppenkov@google.com>
Signed-off-by: Vlad Dumitrescu <vladum@google.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-12-03 21:37:51 +01:00
Andrey Ignatov
de94b651ee libbpf: Fix license in README.rst
The whole of libbpf is licensed as (LGPL-2.1 OR BSD-2-Clause).  I
missed that while adding README.rst.  Fix it and use the same license
as all other files in libbpf do.  Since I'm the only author of
README.rst so far, no one else's permission should be needed.

Fixes: 76d1b894c5 ("libbpf: Document API and ABI conventions")
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-12-03 21:36:38 +01:00
Martin KaFai Lau
5482e9a93c bpf: Fix memleak in aux->func_info and aux->btf
The aux->func_info and aux->btf are leaked in the error out cases
during bpf_prog_load().  This patch fixes it.

Fixes: ba64e7d852 ("bpf: btf: support proper non-jit func info")
Cc: Yonghong Song <yhs@fb.com>
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-02 09:12:58 -08:00
Alexei Starovoitov
71fc156ff5 Merge branch 'xdp1-improvements'
Matteo Croce says:

====================
Small improvements to the readability and ease of use of the xdp1
sample.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30 22:06:42 -08:00
Matteo Croce
dc378a1ab5 samples: bpf: get ifindex from ifname
Find the ifindex with if_nametoindex() instead of requiring the
numeric ifindex.

Signed-off-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30 22:06:41 -08:00
Matteo Croce
d606ee5c1d samples: bpf: improve xdp1 example
Store only the total packet count for every protocol, instead of the
whole per-cpu array.
Use bpf_map_get_next_key() to iterate the map, instead of looking up
all the protocols.
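
The iteration pattern, sketched with libbpf (map layout assumed to be
IP protocol number -> total packet count):

    __u32 key = -1, next_key;
    long long total;

    /* starting from a non-existent key yields the first real key;
     * this walks only the entries that exist instead of probing
     * all 256 protocols */
    while (bpf_map_get_next_key(map_fd, &key, &next_key) == 0) {
            bpf_map_lookup_elem(map_fd, &next_key, &total);
            printf("proto %u: %lld pkts\n", next_key, total);
            key = next_key;
    }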

Signed-off-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30 22:06:41 -08:00
Alexei Starovoitov
9ffd05d9b7 Merge branch 'improve-test-coverage-sparc'
David Miller says:

====================
On sparc64 a ton of test cases in test_verifier.c fail because
the memory accesses in the test case are unaligned (or cannot
be proven to be aligned by the verifier).

Perhaps we can eventually try to (carefully) modify each test case
which has this problem to not use unaligned accesses but:

1) That is delicate work.

2) The changes might not fully respect the original
   intention of the testcase.

3) In some cases, such a transformation might not even
   be feasible at all.

So add an "any alignment" flag to tell the verifier to forcefully
disable it's alignment checks completely.

test_verifier.c is then annotated to use this flag when necessary.

The presence of the flag in each test case is good documentation to
anyone who wants to actually tackle the job of eliminating the
unaligned memory accesses in the test cases.

I've also seen several weird things in test cases, like trying to
access __sk_buff->mark in a packet buffer.

This gets rid of 104 test_verifier.c failures on sparc64.

Changes since v1:

1) Explain the new BPF_PROG_LOAD flag in easier to understand terms.
   Suggested by Alexei.

2) Make bpf_verify_program() take a __u32 prog_flags instead of
   accumulating boolean arguments over and over.  Also suggested
   by Alexei.

Changes since RFC:

1) Only the admin can allow the relaxation of alignment restrictions
   on inefficient unaligned access architectures.

2) Use F_NEEDS_EFFICIENT_UNALIGNED_ACCESS instead of making a new
   flag.

3) Annotate in the output, when we have a test case that the verifier
   accepted but we did not try to execute because we are on an
   inefficient unaligned access platform.  Maybe with some arch
   machinery we can avoid this in the future.

Signed-off-by: David S. Miller <davem@davemloft.net>
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30 21:38:49 -08:00
David Miller
0a68632488 bpf: Apply F_NEEDS_EFFICIENT_UNALIGNED_ACCESS to more ACCEPT test cases.
If a testcase has alignment problems but is expected to be ACCEPT,
verify it using F_NEEDS_EFFICIENT_UNALIGNED_ACCESS too.

Maybe in the future if we add some architecture specific code to elide
the unaligned memory access warnings during the test, we can execute
these as well.

Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30 21:38:48 -08:00
David Miller
2acc5fd5b8 bpf: Make more use of 'any' alignment in test_verifier.c
Use F_NEEDS_EFFICIENT_UNALIGNED_ACCESS in more tests where the
expected result is REJECT.

Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30 21:38:48 -08:00
David Miller
c7665702d3 bpf: Adjust F_NEEDS_EFFICIENT_UNALIGNED_ACCESS handling in test_verifier.c
Make it set the flag argument to bpf_verify_program() which will relax
the alignment restrictions.

Now all such test cases will go properly through the verifier even on
inefficient unaligned access architectures.

On inefficient unaligned access architectures do not try to run such
programs, instead mark the test case as passing but annotate the
result similarly to how it is done now in the presence of this flag.

So, we get complete full coverage for all REJECT test cases, and at
least verifier level coverage for ACCEPT test cases.

Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30 21:38:48 -08:00
David Miller
e9ee9efc0d bpf: Add BPF_F_ANY_ALIGNMENT.
Often we want to write test cases that check things like bad context
offset accesses.  And one way to do this is to use an odd offset on,
for example, a 32-bit load.

This unfortunately triggers the alignment checks first on platforms
that do not set CONFIG_EFFICIENT_UNALIGNED_ACCESS.  So the test
case sees the alignment failure rather than what it was testing for.

It is often not completely possible to respect the original intention
of the test, or even test the same exact thing, while solving the
alignment issue.

Another option could have been to check the alignment after the
context and other validations are performed by the verifier, but
that is a non-trivial change to the verifier.
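
Requesting the relaxation at load time, sketched (admin-only on
architectures without efficient unaligned access):

    union bpf_attr attr = {};

    /* prog_type, insns, license, etc. filled in as usual */
    attr.prog_flags = BPF_F_ANY_ALIGNMENT;
    prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));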

Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30 21:38:48 -08:00
David Miller
88945f4606 bpf: Fix verifier log string check for bad alignment.
The message was changed a long time ago.

This was responsible for 36 test case failures on sparc64.

Fixes: f1174f77b5 ("bpf/verifier: rework value tracking")
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30 21:38:35 -08:00
Alexei Starovoitov
779a459719 Merge branch 'bpftool-fixes'
Quentin Monnet says:

====================
Hi,
Several items for bpftool are included in this set: the first three patches
are fixes for bpftool itself and bash completion, while the last two
slightly improve the information obtained when dumping programs or maps, on
Daniel's suggestion. Please refer to individual commit logs for more
details.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30 14:06:29 -08:00
Quentin Monnet
99a44bef58 tools: bpftool: add owner_prog_type and owner_jited to bpftool output
For prog array maps, the type of the owner program, and the JIT-ed state
of that program, are available from the file descriptor information
under /proc. Add them to "bpftool map show" output. Example output:

    # bpftool map show
    158225: prog_array  name jmp_table  flags 0x0
        key 4B  value 4B  max_entries 8  memlock 4096B
        owner_prog_type flow_dissector  owner jited
    # bpftool --json --pretty map show
    [{
            "id": 1337,
            "type": "prog_array",
            "name": "jmp_table",
            "flags": 0,
            "bytes_key": 4,
            "bytes_value": 4,
            "max_entries": 8,
            "bytes_memlock": 4096,
            "owner_prog_type": "flow_dissector",
            "owner_jited": true
        }
    ]

As we move the table used for associating names to program types,
complete it with the missing types (lwt_seg6local and sk_reuseport).
Also add missing types to the help message for "bpftool prog"
(sk_reuseport and flow_dissector).

Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30 14:06:07 -08:00
Quentin Monnet
73f0b9db53 tools: bpftool: mark offloaded programs more explicitly in plain output
In bpftool (plain) output for "bpftool prog show" or "bpftool map show",
an offloaded BPF object is simply denoted with "dev ifname", which is
not really explicit.  Change it to something that clearly shows the
program is offloaded.

While at it also add an additional space, as done between other
information fields.

Example output, before:

    # bpftool prog show
    1337: xdp  tag a04f5eef06a7f555 dev foo
            loaded_at 2018-10-19T16:40:36+0100  uid 0
            xlated 16B  not jited  memlock 4096B

After:

    # bpftool prog show
    1337: xdp  tag a04f5eef06a7f555  offloaded_to foo
            loaded_at 2018-10-19T16:40:36+0100  uid 0
            xlated 16B  not jited  memlock 4096B

Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30 14:06:06 -08:00
Quentin Monnet
4f0fc6f571 tools: bpftool: fix bash completion for new map types (queue and stack)
Commit 197c2dac74 ("bpf: Add BPF_MAP_TYPE_QUEUE and BPF_MAP_TYPE_STACK
to bpftool-map") added support for queue and stack eBPF map types in
bpftool map handling. Let's update the bash completion accordingly.

Fixes: 197c2dac74 ("bpf: Add BPF_MAP_TYPE_QUEUE and BPF_MAP_TYPE_STACK to bpftool-map")
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30 14:06:06 -08:00
Quentin Monnet
cad4977344 tools: bpftool: fix bash completion for bpftool prog (attach|detach)
Fix bash completion for "bpftool prog (attach|detach) PROG TYPE MAP" so
that the list of indices proposed for MAP are map indices, and not PROG
indices. Also use variables for map and prog reference types ("id",
"pinned", and "tag" for programs).

Fixes: b7d3826c2e ("bpf: bpftool, add support for attaching programs to maps")
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30 14:06:06 -08:00
Quentin Monnet
327e5dab6d tools: bpftool: use "/proc/self/" i.o. crafting links with getpid()
The getpid() function is called in a couple of places in bpftool to
craft links of the shape "/proc/<pid>/...". Instead, it is possible to
use the "/proc/self/" shortcut, which makes things a bit easier, in
particular in jit_disasm.c.

Do the replacement, and remove the includes of <sys/types.h> from the
relevant files, now that we do not use getpid() anymore.

Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-30 14:06:06 -08:00
Ard Biesheuvel
cc2b8ed136 arm64/bpf: use movn/movk/movk sequence to generate kernel addresses
On arm64, all executable code is guaranteed to reside in the vmalloc
space (or the module space), and so jump targets will only use 48
bits at most, and the remaining bits are guaranteed to be 0x1.

This means we can generate an immediate jump address using a sequence
of one MOVN (move wide negated) and two MOVK instructions, where the
first one sets the lower 16 bits but also sets all top bits to 0x1.
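
The emitted sequence, sketched with the arm64 JIT's emit helpers
(details may differ from the actual patch):

    /* MOVN sets bits [15:0] and forces all higher bits to 1 */
    emit(A64_MOVN(1, reg, (~addr) & 0xffff, 0), ctx);
    /* MOVK patches in bits [31:16] and [47:32], keeping the rest */
    emit(A64_MOVK(1, reg, (addr >> 16) & 0xffff, 16), ctx);
    emit(A64_MOVK(1, reg, (addr >> 32) & 0xffff, 32), ctx);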

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-11-30 10:23:25 +01:00
David S. Miller
93029d7d40 Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:

====================
bpf-next 2018-11-30

The following pull-request contains BPF updates for your *net-next* tree.

(Getting out a bit earlier this time to pull in a dependency from bpf.)

The main changes are:

1) Add libbpf ABI versioning and document API naming conventions
   as well as ABI versioning process, from Andrey.

2) Add a new sk_msg_pop_data() helper for sk_msg based BPF
   programs that is used in conjunction with sk_msg_push_data()
   for adding / removing meta data to the msg data, from John.

3) Optimize convert_bpf_ld_abs() for 0 offset and fix various
   lib and testsuite build failures on 32 bit, from David.

4) Make BPF prog dump for !JIT identical to how we dump subprogs
   when JIT is in use, from Yonghong.

5) Rename btf_get_from_id() to make it more conform with libbpf
   API naming conventions, from Martin.

6) Add a missing BPF kselftest config item, from Naresh.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-29 18:15:07 -08:00