Add a thread stack function to create a branch stack for hardware events
where the sample records get created some time after the event occurred.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200429150751.12570-7-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Intel PT already has support for creating branch stacks for each context
(per-cpu or per-thread). In the more common per-cpu case, the branch stack
is not separated for different threads, instead being cleared in between
each sample.
That approach will not work very well for adding branch stacks to
regular events. The branch stacks really need to be accumulated
separately for each thread.
As a start to accomplishing that, this patch adds branch stack support
to the thread-stack. The advantages are:
1. the branches are accumulated separately for each thread
2. the branch stack is cleared only in between continuous traces
This helps pave the way for adding branch stacks to regular events, not
just synthesized events as at present.
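To illustrate the idea, here is a minimal, self-contained sketch (names
and sizes are illustrative, not perf's actual API): each thread owns a
small ring buffer of branches that persists across samples and is reset
only when the trace is discontinuous.

  #include <stdint.h>
  #include <string.h>

  #define BR_STACK_SZ 64

  struct branch { uint64_t from, to; };

  /* Per-thread branch buffer: newest entry first, like a branch stack. */
  struct thread_br_stack {
  	struct branch br[BR_STACK_SZ];
  	unsigned int cnt;	/* number of valid entries   */
  	unsigned int pos;	/* index of the newest entry */
  };

  /* Record one branch; the oldest entries age out of the ring. */
  static void br_stack__add(struct thread_br_stack *bs, uint64_t from,
  			    uint64_t to)
  {
  	bs->pos = bs->pos ? bs->pos - 1 : BR_STACK_SZ - 1;
  	bs->br[bs->pos].from = from;
  	bs->br[bs->pos].to = to;
  	if (bs->cnt < BR_STACK_SZ)
  		bs->cnt++;
  }

  /* Cleared only between discontinuous traces, not between samples. */
  static void br_stack__reset(struct thread_br_stack *bs)
  {
  	memset(bs, 0, sizeof(*bs));
  }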
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200429150751.12570-2-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add a thread stack function to create a call chain for hardware events
where the sample records get created some time after the event occurred.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200401101613.6201-10-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
One more step on the merge of 'struct maps' with 'struct map_groups'.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lkml.kernel.org/n/tip-69vcr8pubpym90skxhmbwhiw@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
All we need there is a forward declaration for 'union perf_event', so
remove it from there and add missing header directives in places using
things from this indirect include.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lkml.kernel.org/n/tip-7ftk0ztstqub1tirjj8o8xbl@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Eroding a bit more the tools/perf/util/util.h hodgepodge header.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lkml.kernel.org/n/tip-natazosyn9rwjka25tvcnyi0@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Part of the erosion of util/util.h, which will lose its stdlib.h
include; we need to add it to the places where it is needed but were
getting it indirectly.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lkml.kernel.org/n/tip-1imnqezw99ahc07fjeb51qby@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Use new function thread_stack__pop_ks() in place of equivalent code.
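In outline, the helper pops entries while the address on top of the
stack is a kernel address; a minimal sketch of that idea, with
illustrative names standing in for the real thread stack fields:

  #include <stdint.h>

  /* On a return to user space, kernel addresses can no longer be
   * returned to, so pop them all. kernel_start is the lowest kernel
   * address.
   */
  static void pop_kernel_addresses(uint64_t *entries, unsigned int *cnt,
  				   uint64_t kernel_start)
  {
  	while (*cnt && entries[*cnt - 1] >= kernel_start)
  		(*cnt)--;
  }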
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20190619064429.14940-3-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Commit f08046cb30 ("perf thread-stack: Represent jmps to the start of a
different symbol") had the side-effect of introducing more stack entries
before return from kernel space.
When user space is also traced, those entries are popped before entry to
user space, but when user space is not traced, they get stuck at the
bottom of the stack, making the stack grow progressively larger.
Fix this by detecting a return-from-kernel branch type and, at that
point, popping kernel addresses from the stack.
Note, the problem and fix affect the exported Call Graph / Tree but not
the callindent option used by "perf script --call-trace".
Example:
perf-with-kcore record example -e intel_pt//k -- ls
perf-with-kcore script example --itrace=bep -s ~/libexec/perf-core/scripts/python/export-to-sqlite.py example.db branches calls
~/libexec/perf-core/scripts/python/exported-sql-viewer.py example.db
Menu option: Reports -> Context-Sensitive Call Graph
Before: (showing Call Path column only)
Call Path
▶ perf
▼ ls
▼ 12111:12111
▶ setup_new_exec
▶ __task_pid_nr_ns
▶ perf_event_pid_type
▶ perf_event_comm_output
▶ perf_iterate_ctx
▶ perf_iterate_sb
▶ perf_event_comm
▶ __set_task_comm
▶ load_elf_binary
▶ search_binary_handler
▶ __do_execve_file.isra.41
▶ __x64_sys_execve
▶ do_syscall_64
▼ entry_SYSCALL_64_after_hwframe
▼ swapgs_restore_regs_and_return_to_usermode
▼ native_iret
▶ error_entry
▶ do_page_fault
▼ error_exit
▼ retint_user
▶ prepare_exit_to_usermode
▼ native_iret
▶ error_entry
▶ do_page_fault
▼ error_exit
▼ retint_user
▶ prepare_exit_to_usermode
▼ native_iret
▶ error_entry
▶ do_page_fault
▼ error_exit
▼ retint_user
▶ prepare_exit_to_usermode
▶ native_iret
After: (showing Call Path column only)
Call Path
▶ perf
▼ ls
▼ 12111:12111
▶ setup_new_exec
▶ __task_pid_nr_ns
▶ perf_event_pid_type
▶ perf_event_comm_output
▶ perf_iterate_ctx
▶ perf_iterate_sb
▶ perf_event_comm
▶ __set_task_comm
▶ load_elf_binary
▶ search_binary_handler
▶ __do_execve_file.isra.41
▶ __x64_sys_execve
▶ do_syscall_64
▶ entry_SYSCALL_64_after_hwframe
▶ page_fault
▼ entry_SYSCALL_64
▼ do_syscall_64
▶ __x64_sys_brk
▶ __x64_sys_access
▶ __x64_sys_openat
▶ __x64_sys_newfstat
▶ __x64_sys_mmap
▶ __x64_sys_close
▶ __x64_sys_read
▶ __x64_sys_mprotect
▶ __x64_sys_arch_prctl
▶ __x64_sys_munmap
▶ exit_to_usermode_loop
▶ __x64_sys_set_tid_address
▶ __x64_sys_set_robust_list
▶ __x64_sys_rt_sigaction
▶ __x64_sys_rt_sigprocmask
▶ __x64_sys_prlimit64
▶ __x64_sys_statfs
▶ __x64_sys_ioctl
▶ __x64_sys_getdents64
▶ __x64_sys_write
▶ __x64_sys_exit_group
Committer notes:
The first arg to perf-with-kcore needs to be the same for the 'record'
and 'script' lines, otherwise we'd record the perf.data file and
kcore_dir/ files in one directory ('example') and then try to use them
from the 'bep' directory. The instructions above were fixed so that
both use 'example'.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: stable@vger.kernel.org
Fixes: f08046cb30 ("perf thread-stack: Represent jmps to the start of a different symbol")
Link: http://lkml.kernel.org/r/20190619064429.14940-2-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Merge tag 'perf-core-for-mingo-5.3-20190611' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
perf record:
Alexey Budankov:
- Allow mixing --user-regs with --call-graph=dwarf, making sure that
the minimal set of registers for DWARF unwinding is present in the
set of user registers requested to be present in each sample, while
warning the user that this may make callchains unreliable if more
than the minimal set of registers is needed to unwind.
yuzhoujian:
- Add support to collect callchains from kernel or user space only,
IOW allow setting the perf_event_attr.exclude_callchain_{kernel,user}
bits from the command line.
perf trace:
Arnaldo Carvalho de Melo:
- Remove x86_64 specific syscall numbers from the augmented_raw_syscalls
BPF in-kernel collector of augmented raw_syscalls:sys_{enter,exit}
payloads, use instead the syscall numbers obtained either from the
arch specific syscalltbl generators or from audit-libs.
- Allow 'perf trace' to ask for the number of bytes to collect for
string arguments, for now ask for PATH_MAX, i.e. the whole
pathnames, which ends up being just a way to specify which syscall
args are pathnames and thus should be read using bpf_probe_read_str().
- Skip unknown syscalls when expanding strace-like syscall groups.
This helps make the 'string' group of syscalls work on arm64, where
some of the syscalls present in x86_64 that deal with strings, for
instance 'access', are deprecated and thus should not be requested
for tracing.
Leo Yan:
- Exit when failing to build eBPF program.
perf config:
Arnaldo Carvalho de Melo:
- Bail out when a handler returns failure for a key-value pair. This
helps with cases where processing a key-value pair is not just a
matter of setting some tool specific knob, involving, for instance
building a BPF program to then attach to the list of events 'perf
trace' will use, e.g. augmented_raw_syscalls.c.
perf.data:
Kan Liang:
- Read and store die ID information available in new Intel processors
in CPUID.1F in the CPU topology written in the perf.data header.
perf stat:
Kan Liang:
- Support per-die aggregation.
Documentation:
Arnaldo Carvalho de Melo:
- Update perf.data documentation about the CPU_TOPOLOGY, MEM_TOPOLOGY,
CLOCKID and DIR_FORMAT headers.
Song Liu:
- Add description of headers HEADER_BPF_PROG_INFO and HEADER_BPF_BTF.
Leo Yan:
- Update default value for llvm.clang-bpf-cmd-template in 'man perf-config'.
JVMTI:
Jiri Olsa:
- Address gcc string overflow warning for strncpy()
core:
- Remove superfluous nthreads system_wide setup in perf_evsel__alloc_fd().
Intel PT:
Adrian Hunter:
- Add support for samples to contain the IPC ratio, collecting cycles
information from CYC packets and showing the IPC info periodically.
Because Intel PT does not update the cycle count on every branch or
instruction, the incremental values will often be zero. When there are
values, they will be the number of instructions and number of cycles
since the last update, and thus represent the average IPC since the
last IPC value.
E.g.:
# perf record --cpu 1 -m200000 -a -e intel_pt/cyc/u sleep 0.0001
rounding mmap pages size to 1024M (262144 pages)
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 2.208 MB perf.data ]
# perf script --insn-trace --xed -F+ipc,-dso,-cpu,-tid
#
<SNIP + add line numbering to make sense of IPC counts e.g.: (18/3)>
1 cc1 63501.650479626: 7f5219ac27bf _int_free+0x3f jnz 0x7f5219ac2af0 IPC: 0.81 (36/44)
2 cc1 63501.650479626: 7f5219ac27c5 _int_free+0x45 cmp $0x1f, %rbp
3 cc1 63501.650479626: 7f5219ac27c9 _int_free+0x49 jbe 0x7f5219ac2b00
4 cc1 63501.650479626: 7f5219ac27cf _int_free+0x4f test $0x8, %al
5 cc1 63501.650479626: 7f5219ac27d1 _int_free+0x51 jnz 0x7f5219ac2b00
6 cc1 63501.650479626: 7f5219ac27d7 _int_free+0x57 movq 0x13c58a(%rip), %rcx
7 cc1 63501.650479626: 7f5219ac27de _int_free+0x5e mov %rdi, %r12
8 cc1 63501.650479626: 7f5219ac27e1 _int_free+0x61 movq %fs:(%rcx), %rax
9 cc1 63501.650479626: 7f5219ac27e5 _int_free+0x65 test %rax, %rax
10 cc1 63501.650479626: 7f5219ac27e8 _int_free+0x68 jz 0x7f5219ac2821
11 cc1 63501.650479626: 7f5219ac27ea _int_free+0x6a leaq -0x11(%rbp), %rdi
12 cc1 63501.650479626: 7f5219ac27ee _int_free+0x6e mov %rdi, %rsi
13 cc1 63501.650479626: 7f5219ac27f1 _int_free+0x71 shr $0x4, %rsi
14 cc1 63501.650479626: 7f5219ac27f5 _int_free+0x75 cmpq %rsi, 0x13caf4(%rip)
15 cc1 63501.650479626: 7f5219ac27fc _int_free+0x7c jbe 0x7f5219ac2821
16 cc1 63501.650479626: 7f5219ac2821 _int_free+0xa1 cmpq 0x13f138(%rip), %rbp
17 cc1 63501.650479626: 7f5219ac2828 _int_free+0xa8 jnbe 0x7f5219ac28d8
18 cc1 63501.650479626: 7f5219ac28d8 _int_free+0x158 testb $0x2, 0x8(%rbx)
19 cc1 63501.650479628: 7f5219ac28dc _int_free+0x15c jnz 0x7f5219ac2ab0 IPC: 6.00 (18/3)
<SNIP>
- Allow using time ranges with Intel PT, i.e. these features, already
present but not optimally usable with Intel PT, should now work:
Select the second 10% time slice:
$ perf script --time 10%/2
Select from 0% to 10% time slice:
$ perf script --time 0%-10%
Select the first and second 10% time slices:
$ perf script --time 10%/1,10%/2
Select from 0% to 10% and 30% to 40% slices:
$ perf script --time 0%-10%,30%-40%
cs-etm (ARM):
Mathieu Poirier:
- Add support for CPU-wide trace scenarios.
s390:
Thomas Richter:
- Fix missing kvm module load for s390.
- Fix OOM error in TUI mode on s390.
- Support s390 diag event display when doing analysis on !s390
architectures.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms and conditions of the gnu general public license
version 2 as published by the free software foundation this program
is distributed in the hope it will be useful but without any
warranty without even the implied warranty of merchantability or
fitness for a particular purpose see the gnu general public license
for more details
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 263 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190529141901.208660670@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cycle and instruction counts are added to the stack. The IPC of a
function, and of all the functions it calls, is also recorded.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20190520113728.14389-14-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The call_path can be used to find the parent symbol for a call but not
the exact parent call. To do that, add parent_id to the call_return
export. This enables the creation of a call tree from the exported data.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: https://lkml.kernel.org/n/tip-6j7tzdxo67cox6kan7k22oo6@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The compiler might optimize a call/ret combination by making it a jmp.
However, the thread-stack does not presently cater for that, so such
control flow is not visible in the call graph. Make it visible by
recording on the stack a branch to the start of a different symbol.
Note, that means when a ret pops the stack, all jmps must be popped off
first.
Example:
$ cat jmp-to-fn.c
__attribute__((noinline)) int bar(void)
{
	return -1;
}
__attribute__((noinline)) int foo(void)
{
	return bar() + 1;
}
int main()
{
	return foo();
}
$ gcc -ggdb3 -Wall -Wextra -O2 -o jmp-to-fn jmp-to-fn.c
$ objdump -d jmp-to-fn
<SNIP>
0000000000001040 <main>:
1040: 31 c0 xor %eax,%eax
1042: e9 09 01 00 00 jmpq 1150 <foo>
<SNIP>
0000000000001140 <bar>:
1140: b8 ff ff ff ff mov $0xffffffff,%eax
1145: c3 retq
<SNIP>
0000000000001150 <foo>:
1150: 31 c0 xor %eax,%eax
1152: e8 e9 ff ff ff callq 1140 <bar>
1157: 83 c0 01 add $0x1,%eax
115a: c3 retq
<SNIP>
$ perf record -o jmp-to-fn.perf.data -e intel_pt/cyc/u ./jmp-to-fn
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0,017 MB jmp-to-fn.perf.data ]
$ perf script -i jmp-to-fn.perf.data --itrace=be -s ~/libexec/perf-core/scripts/python/export-to-sqlite.py jmp-to-fn.db branches calls
2019-01-08 13:24:58.783069 Creating database...
2019-01-08 13:24:58.794650 Writing records...
2019-01-08 13:24:59.008050 Adding indexes
2019-01-08 13:24:59.015802 Done
$ ~/libexec/perf-core/scripts/python/exported-sql-viewer.py jmp-to-fn.db
Before:
main
-> bar
After:
main
-> foo
-> bar
Committer testing:
Install the python2-pyside package, then select these menu options
on the GUI:
"Reports"
"Context sensitive callgraphs"
Then go on expanding the symbols to get the full picture when doing this
on a fedora:29 with gcc version 8.2.1 20181215 (Red Hat 8.2.1-6) (GCC):
jmp-to-fn
PID:TID
_start (ld-2.28.so)
__libc_start_main
main
foo
bar
To verify that indeed, this fixes the problem.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20190109091835.5570-5-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Make thread_stack__no_call_return() more readable by adding more local
variables.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20190109091835.5570-4-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
If 'cp' is checked in thread_stack__push_cp() a number of error checks
can be removed, reducing code size and improving readability.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20190109091835.5570-3-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf creates a single 'struct thread' to represent the idle task. That
is because threads are identified by PID and TID, and the idle task
always has PID == TID == 0.
However, there are actually separate idle tasks for each CPU. That
creates a problem for thread stack processing which assumes that each
thread has a single stack, not one stack per CPU.
Fix that by passing through the CPU number, and in the case of the idle
"thread", pick the thread stack from an array based on the CPU number.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20181221120620.9659-8-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
In preparation for fixing thread stack processing for the idle task,
allocate an array of thread stacks.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20181221120620.9659-7-adrian.hunter@intel.com
[ No need to check for NULL when calling zfree(), noticed by Jiri Olsa ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
In preparation for fixing thread stack processing for the idle task,
factor out thread_stack__init().
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20181221120620.9659-6-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
In preparation for fixing thread stack processing for the idle task,
allow for a thread stack array.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20181221120620.9659-5-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
In preparation for fixing thread stack processing for the idle task,
avoid direct reference to the thread's stack. The thread stack will
change to an array of thread stacks, at which point the meaning of the
direct reference will change.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20181221120620.9659-4-adrian.hunter@intel.com
[ Rename thread_stack__ts() to thread__stack() since this operates on a 'thread' struct ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
In preparation for fixing thread stack processing for the idle task,
tidy thread_stack__bottom() usage. Specifically, the parameter 'thread'
is not needed.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20181221120620.9659-3-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
In preparation for fixing thread stack processing for the idle task,
simplify some code in thread_stack__process().
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20181221120620.9659-2-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
In the absence of a fallback, callchains must also encode the callchain
context. Do that now that there is no fallback.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: stable@vger.kernel.org # 4.19
Link: http://lkml.kernel.org/r/100ea2ec-ed14-b56d-d810-e0a6d2f4b069@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
thread_stack__process() is used to create call paths for database
export. Improve the handling of trace begin / end to allow for a trace
that ends in a call.
Previously, the Intel PT decoder would indicate begin / end by a branch
from / to zero. That hides useful information, in particular when a
trace ends with a call. Before remedying that, enhance the thread stack
so that it identifies the trace end by the flag instead of by ip == 0.
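Conceptually, each stack entry now carries a flag instead of relying on
ip == 0; a minimal sketch with illustrative names:

  #include <stdbool.h>
  #include <stdint.h>

  struct stack_entry {
  	uint64_t ret_addr;
  	bool trace_end;	/* pushed because the trace ended, not by a call */
  };

  /* When a new trace begins, entries pushed at a previous trace end are
   * popped first, because the intervening control flow was not traced.
   */
  static void pop_trace_end(struct stack_entry *e, unsigned int *cnt)
  {
  	while (*cnt && e[*cnt - 1].trace_end)
  		(*cnt)--;
  }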
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20180920130048.31432-5-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
thread_stack__event() is used to create call stacks by keeping track of
calls and returns. Improve the handling of trace begin / end to allow
for a trace that ends in a call.
Previously, the Intel PT decoder would indicate begin / end by a branch
from / to zero. That hides useful information, in particular when a
trace ends with a call. Before remedying that, enhance the thread stack
so that it does not expect to see the 'return' for a 'call' that ends
the trace.
Committer notes:
Added this:
return thread_stack__push(thread->ts, ret_addr,
- flags && PERF_IP_FLAG_TRACE_END);
+ flags & PERF_IP_FLAG_TRACE_END);
To fix problem spotted by:
debian:9: clang version 3.8.1-24 (tags/RELEASE_381/final)
debian:experimental: clang version 6.0.1-6 (tags/RELEASE_601/final)
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20180920130048.31432-4-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Removing it from util.h, as part of an effort to disentangle the
include hell that makes changes to util.h, or to anything it includes,
cause a complete rebuild of the tools.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-ztrjy52q1rqcchuy3rubfgt2@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Based on patches from Andi Kleen.
When printing PT instruction traces with perf script it is rather useful
to see some indentation for the call tree. This patch adds a new
callindent field to perf script that prints spaces for the function call
stack depth.
We already have code to track the function call stack for PT, which we
can reuse with minor modifications.
The resulting output is not quite as nice as ftrace yet, but a lot
better than what was there before.
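The printing side is simple once the stack depth is tracked; a minimal
sketch (illustrative, not the exact perf script code, and the real
field width may differ):

  #include <stdio.h>

  /* Print one line of trace output indented by the call stack depth. */
  static void print_callindent(FILE *fp, unsigned int depth,
  			       const char *line)
  {
  	fprintf(fp, "%*s%s\n", (int)(depth * 4), "", line);
  }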
Note there are some corner cases where the thread stack code gets
confused and prints incorrect indentation. Even with that it is fairly
useful.
When displaying kernel code traces it is recommended to run as root, as
otherwise perf doesn't understand the kernel addresses properly, and may
not reset the call stack correctly on kernel boundaries.
Example output:
sudo perf-with-kcore record eg2 -a -e intel_pt// -- sleep 1
sudo perf-with-kcore script eg2 --ns -F callindent,time,comm,pid,sym,ip,addr,flags,cpu --itrace=cre | less
...
swapper 0 [000] 5830.389116586: call irq_exit ffffffff8104d620 smp_call_function_single_interrupt+0x30 => ffffffff8107e720 irq_exit
swapper 0 [000] 5830.389116586: call idle_cpu ffffffff8107e769 irq_exit+0x49 => ffffffff810a3970 idle_cpu
swapper 0 [000] 5830.389116586: return idle_cpu ffffffff810a39b7 idle_cpu+0x47 => ffffffff8107e76e irq_exit
swapper 0 [000] 5830.389116586: call tick_nohz_irq_exit ffffffff8107e7bd irq_exit+0x9d => ffffffff810f2fc0 tick_nohz_irq_exit
swapper 0 [000] 5830.389116919: call __tick_nohz_idle_enter ffffffff810f2fe0 tick_nohz_irq_exit+0x20 => ffffffff810f28d0 __tick_nohz_idle_enter
swapper 0 [000] 5830.389116919: call ktime_get ffffffff810f28f1 __tick_nohz_idle_enter+0x21 => ffffffff810e9ec0 ktime_get
swapper 0 [000] 5830.389116919: call read_tsc ffffffff810e9ef6 ktime_get+0x36 => ffffffff81035070 read_tsc
swapper 0 [000] 5830.389116919: return read_tsc ffffffff81035084 read_tsc+0x14 => ffffffff810e9efc ktime_get
swapper 0 [000] 5830.389116919: return ktime_get ffffffff810e9f46 ktime_get+0x86 => ffffffff810f28f6 __tick_nohz_idle_enter
swapper 0 [000] 5830.389116919: call sched_clock_idle_sleep_event ffffffff810f290b __tick_nohz_idle_enter+0x3b => ffffffff810a7380 sched_clock_idle_sleep_event
swapper 0 [000] 5830.389116919: call sched_clock_cpu ffffffff810a738b sched_clock_idle_sleep_event+0xb => ffffffff810a72e0 sched_clock_cpu
swapper 0 [000] 5830.389116919: call sched_clock ffffffff810a734d sched_clock_cpu+0x6d => ffffffff81035750 sched_clock
swapper 0 [000] 5830.389116919: call native_sched_clock ffffffff81035754 sched_clock+0x4 => ffffffff81035640 native_sched_clock
swapper 0 [000] 5830.389116919: return native_sched_clock ffffffff8103568c native_sched_clock+0x4c => ffffffff81035759 sched_clock
swapper 0 [000] 5830.389116919: return sched_clock ffffffff8103575c sched_clock+0xc => ffffffff810a7352 sched_clock_cpu
swapper 0 [000] 5830.389116919: return sched_clock_cpu ffffffff810a7356 sched_clock_cpu+0x76 => ffffffff810a7390 sched_clock_idle_sleep_event
swapper 0 [000] 5830.389116919: return sched_clock_idle_sleep_event ffffffff810a7391 sched_clock_idle_sleep_event+0x11 => ffffffff810f2910 __tick_nohz_idle_enter
...
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Andi Kleen <ak@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1466689258-28493-4-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
This change allows python scripts to utilize the recent changes to the
db export api allowing the export of call_paths derived
from sampled callchains. These call paths are also now associated with
the samples from which they were derived.
- This feature is enabled by setting "perf_db_export_callchains" to true
- When enabled, samples that have callchain information will have the
callchains exported via call_path_table
- The call_path_id field is added to sample_table to enable association of
samples with the corresponding callchain stored in the call paths
table. A call_path_id of 0 will be exported if there is no
corresponding callchain.
- When "perf_db_export_callchains" and "perf_db_export_calls" are both
set to True, the call path root data structure will be shared. This
prevents duplication of data and call path ids that would result from
building two separate call path trees in memory.
- The call_return_processor structure definition was relocated to the header
file to make its contents visible to db-export.c. This enables the
sharing of call path trees between the two features, as mentioned
above.
This change is visible to python scripts using the python db export api.
The change is backwards compatible with scripts written against the
previous API, assuming that the scripts model the sample_table function
after the one in the export-to-postgresql.py script by allowing for
additional arguments to be added in the future, i.e. by using *x as the
final argument of the sample_table function.
Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461831551-12213-6-git-send-email-cphlipot0@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Move the call path handling code out of thread-stack.c and
thread-stack.h to allow other components that are not part of
thread-stack to create call paths.
Summary:
- Create call-path.c and call-path.h and add them to the build.
- Move all call path related code out of thread-stack.c and thread-stack.h
and into call-path.c and call-path.h.
- A small subset of structures and functions are now visible through
call-path.h, which is required for thread-stack.c to continue to
compile.
This change is a prerequisite for subsequent patches in this change set
and by itself contains no user-visible changes.
Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461831551-12213-3-git-send-email-cphlipot0@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The thread-stack represents a thread's current stack. When a thread
exits there can still be many functions on the stack, e.g. exit() can be
called many levels deep, so all the callers will never return. To get
that information output, the thread-stack must be flushed.
Previously it was assumed the thread-stack would be flushed when the
struct thread was deleted. With thread ref-counting it is no longer
clear when that will be, if ever. So instead explicitly flush all the
thread-stacks at the end of a session.
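Flushing a stack simply pops every remaining entry, reporting each as a
call that never returned; a sketch with illustrative types:

  #include <stdint.h>

  typedef void (*call_return_cb)(uint64_t ret_addr, int no_return);

  /* Pop every remaining entry at the end of the session, reporting each
   * as a call whose return was never seen.
   */
  static void flush_stack(uint64_t *entries, unsigned int *cnt,
  			  call_return_cb cb)
  {
  	while (*cnt) {
  		(*cnt)--;
  		cb(entries[*cnt], /* no_return = */ 1);
  	}
  }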
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1432906425-9911-3-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Enhance the thread stack to output detailed information about paired
calls and returns.
The enhanced processing consumes sample information via
thread_stack__process() and outputs information about paired calls /
returns via a call-back.
While the call-back makes it possible for the facility to be used by
arbitrary tools, a subsequent patch will provide the information to
Python scripting via the db-export interface.
An important part of the call/return information is the
call path, which provides a structure that defines a context-sensitive
call graph.
Note that there are now two ways to use the thread stack.
For simply providing a call stack (like you would get from the perf
record -g option) the interface consists of thread_stack__event() and
thread_stack__sample().
Whereas the enhanced interface consists of call_return_processor__new()
and thread_stack__process().
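The enhanced interface is easiest to see as a sketch: a call pushes a
frame, a return pops it and fires a call-back carrying both sides of
the pair (illustrative names, not the actual perf API):

  #include <stdint.h>

  struct call_frame { uint64_t call_ip, call_time; };

  struct call_return {
  	uint64_t call_ip, call_time;
  	uint64_t return_ip, return_time;
  };

  #define STACK_SZ 1024
  static struct call_frame frames[STACK_SZ];
  static unsigned int depth;

  static void on_call(uint64_t ip, uint64_t tm)
  {
  	if (depth < STACK_SZ) {
  		frames[depth].call_ip = ip;
  		frames[depth].call_time = tm;
  		depth++;
  	}
  }

  static void on_return(uint64_t ip, uint64_t tm,
  			void (*cb)(const struct call_return *))
  {
  	struct call_return cr;

  	if (!depth)
  		return;		/* return without a matching call */
  	depth--;
  	cr.call_ip = frames[depth].call_ip;
  	cr.call_time = frames[depth].call_time;
  	cr.return_ip = ip;
  	cr.return_time = tm;
  	cb(&cr);
  }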
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1414678188-14946-5-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add a thread stack for synthesizing call chains from call and return
events.
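The core bookkeeping can be sketched in a few lines (illustrative, not
the actual implementation): a call pushes the address the callee will
return to, a return pops back to the matching entry, and a sample is
turned into a call chain by walking the stack from the sample ip
outwards.

  #include <stdint.h>

  static uint64_t stack[1024];
  static unsigned int cnt;

  /* A call pushes the address execution will return to. */
  static void on_call(uint64_t ret_addr)
  {
  	if (cnt < 1024)
  		stack[cnt++] = ret_addr;
  }

  /* A return pops back to the matching entry, if there is one. */
  static void on_return(uint64_t to_ip)
  {
  	unsigned int i = cnt;

  	while (i && stack[i - 1] != to_ip)
  		i--;
  	if (i)
  		cnt = i - 1;
  }

  /* Synthesize a call chain: the sample ip first, then the callers. */
  static unsigned int synth_chain(uint64_t sample_ip, uint64_t *chain,
  				  unsigned int sz)
  {
  	unsigned int n = 0, i;

  	if (n < sz)
  		chain[n++] = sample_ip;
  	for (i = cnt; i && n < sz; i--)
  		chain[n++] = stack[i - 1];
  	return n;
  }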
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1414678188-14946-2-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>