License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it.
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to apply to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few 1000 files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0                                               11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". Results of that were:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or if it had no licensing in
it (per the prior point). Results summary:
SPDX license identifier                              # files
----------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                          270
GPL-2.0+ WITH Linux-syscall-note                         169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
LGPL-2.1+ WITH Linux-syscall-note                         15
GPL-1.0+ WITH Linux-syscall-note                          14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
LGPL-2.0+ WITH Linux-syscall-note                          4
LGPL-2.1 WITH Linux-syscall-note                           3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
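For illustration, the tags added by such a script take roughly the
following form, with C++-style comments in .c files, block comments in
headers, and the Linux-syscall-note variant in */uapi/* headers (a sketch
of the convention, not text copied from these patches):
  first line of a .c source file:   // SPDX-License-Identifier: GPL-2.0
  first line of a header file:      /* SPDX-License-Identifier: GPL-2.0 */
  first line of a */uapi/* header:  /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */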
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

// SPDX-License-Identifier: GPL-2.0
/*
 * builtin-report.c
 *
 * Builtin report command: Analyze the perf.data input file,
 * look up and read DSOs and symbol information and display
 * a histogram of results, along various sorting keys.
 */
#include "builtin.h"

#include "util/config.h"

#include "util/annotate.h"
#include "util/color.h"
#include "util/dso.h"
#include <linux/list.h>
#include <linux/rbtree.h>
#include <linux/err.h>
#include <linux/zalloc.h>
#include "util/map.h"
#include "util/symbol.h"
#include "util/map_symbol.h"
#include "util/mem-events.h"
#include "util/branch.h"
#include "util/callchain.h"
#include "util/values.h"

#include "perf.h"
#include "util/debug.h"
#include "util/evlist.h"
#include "util/evsel.h"

perf report: Add --switch-on/--switch-off events
Since 'perf top' shares the histogram browser with 'perf report', the
same explanation in the previous cset applies.
An additional example uses a pair of SDT events available for systemtap:
# perf probe --exec=/usr/bin/stap '%*:*'
Added new events:
sdt_stap:benchmark__thread__start (on %* in /usr/bin/stap)
sdt_stap:benchmark (on %* in /usr/bin/stap)
sdt_stap:benchmark__thread__end (on %* in /usr/bin/stap)
sdt_stap:pass6__start (on %* in /usr/bin/stap)
sdt_stap:pass6__end (on %* in /usr/bin/stap)
sdt_stap:pass5__start (on %* in /usr/bin/stap)
sdt_stap:pass5__end (on %* in /usr/bin/stap)
sdt_stap:pass0__start (on %* in /usr/bin/stap)
sdt_stap:pass0__end (on %* in /usr/bin/stap)
sdt_stap:pass1a__start (on %* in /usr/bin/stap)
sdt_stap:pass1b__start (on %* in /usr/bin/stap)
sdt_stap:pass1__end (on %* in /usr/bin/stap)
sdt_stap:pass2__start (on %* in /usr/bin/stap)
sdt_stap:pass2__end (on %* in /usr/bin/stap)
sdt_stap:pass3__start (on %* in /usr/bin/stap)
sdt_stap:pass3__end (on %* in /usr/bin/stap)
sdt_stap:pass4__start (on %* in /usr/bin/stap)
sdt_stap:pass4__end (on %* in /usr/bin/stap)
sdt_stap:benchmark__start (on %* in /usr/bin/stap)
sdt_stap:benchmark__end (on %* in /usr/bin/stap)
sdt_stap:cache__get (on %* in /usr/bin/stap)
sdt_stap:cache__clean (on %* in /usr/bin/stap)
sdt_stap:cache__add__module (on %* in /usr/bin/stap)
sdt_stap:cache__add__source (on %* in /usr/bin/stap)
sdt_stap:stap_system__complete (on %* in /usr/bin/stap)
sdt_stap:stap_system__start (on %* in /usr/bin/stap)
sdt_stap:stap_system__spawn (on %* in /usr/bin/stap)
sdt_stap:stap_system__fork (on %* in /usr/bin/stap)
sdt_stap:intern_string (on %* in /usr/bin/stap)
sdt_stap:client__start (on %* in /usr/bin/stap)
sdt_stap:client__end (on %* in /usr/bin/stap)
You can now use it in all perf tools, such as:
perf record -e sdt_stap:client__end -aR sleep 1
#
From these we'll use the two below to run systemtap's test suite:
# perf record -e sdt_stap:pass2__*,cycles:P make installcheck > /dev/null
^C[ perf record: Woken up 8 times to write data ]
[ perf record: Captured and wrote 2.691 MB perf.data (39638 samples) ]
Terminated
# perf script | grep sdt_stap
stap 28979 [000] 19424.302660: sdt_stap:pass2__start: (561b9a537de3) arg1=140730364262544
stap 28979 [000] 19424.333083: sdt_stap:pass2__end: (561b9a53a9e1) arg1=140730364262544
stap 29045 [006] 19424.933460: sdt_stap:pass2__start: (563edddcede3) arg1=140722674883152
stap 29045 [006] 19424.963794: sdt_stap:pass2__end: (563edddd19e1) arg1=140722674883152
# perf script | grep cycles | wc -l
39634
#
Looking at the whole perf.data file:
[root@quaco testsuite]# perf report | grep cycles:P -A25
# Samples: 39K of event 'cycles:P'
# Event count (approx.): 34044267368
#
# Overhead Command Shared Object Symbol
# ........ ....... .................... ................................
#
3.50% cc1 cc1 [.] ht_lookup_with_hash
3.04% cc1 cc1 [.] _cpp_lex_token
2.11% cc1 cc1 [.] ggc_internal_alloc
1.83% cc1 cc1 [.] cpp_get_token_with_location
1.68% cc1 libc-2.29.so [.] _int_malloc
1.41% cc1 cc1 [.] linemap_position_for_column
1.25% cc1 cc1 [.] ggc_internal_cleared_alloc
1.20% cc1 cc1 [.] c_lex_with_flags
1.18% cc1 cc1 [.] get_combined_adhoc_loc
1.05% cc1 libc-2.29.so [.] malloc
1.01% cc1 libc-2.29.so [.] _int_free
0.96% stap stap [.] std::_Hashtable<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::__detail::_Identity, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, stringtable_hash, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, true, true> >::_M_insert<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__detail::_AllocNode<std::allocator<std::__detail::_Hash_node<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, true> > > >
0.78% stap stap [.] lexer::scan
0.74% cc1 cc1 [.] _cpp_lex_direct
0.70% cc1 cc1 [.] pop_scope
0.70% cc1 cc1 [.] c_parser_declspecs
0.69% stap libc-2.29.so [.] _int_malloc
0.68% cc1 cc1 [.] htab_find_slot
0.68% cc1 [kernel.vmlinux] [k] prepare_exit_to_usermode
0.64% cc1 [kernel.vmlinux] [k] clear_page_erms
[root@quaco testsuite]#
And now only what happens in slices demarcated by those start/end SDT
events:
[root@quaco testsuite]# perf report --switch-on=sdt_stap:pass2__start --switch-off=sdt_stap:pass2__end | grep cycles:P -A100
# Samples: 240 of event 'cycles:P'
# Event count (approx.): 206491934
#
# Overhead Command Shared Object Symbol
# ........ ....... ................... ................................................
#
38.99% stap stap [.] systemtap_session::register_library_aliases
19.47% stap stap [.] match_key::operator<
15.01% stap libc-2.29.so [.] __memcmp_avx2_movbe
5.19% stap libc-2.29.so [.] _int_malloc
2.50% stap libstdc++.so.6.0.26 [.] std::_Rb_tree_insert_and_rebalance
2.30% stap stap [.] match_node::build_no_more
2.07% stap libc-2.29.so [.] malloc
1.66% stap stap [.] std::_Rb_tree<match_key, std::pair<match_key const, match_node*>, std::_Select1st<std::pair<match_key const, match_node*> >, std::less<match_key>, std::allocator<std::pair<match_key const, match_node*> > >::find
1.66% stap stap [.] match_node::bind
1.58% stap [kernel.vmlinux] [k] prepare_exit_to_usermode
1.17% stap [kernel.vmlinux] [k] native_irq_return_iret
0.87% stap stap [.] 0x0000000000032ec4
0.77% stap libstdc++.so.6.0.26 [.] std::_Rb_tree_increment
0.47% stap stap [.] std::vector<derived_probe_builder*, std::allocator<derived_probe_builder*> >::_M_realloc_insert<derived_probe_builder* const&>
0.47% stap [kernel.vmlinux] [k] get_page_from_freelist
0.47% stap [kernel.vmlinux] [k] swapgs_restore_regs_and_return_to_usermode
0.47% stap [kernel.vmlinux] [k] do_user_addr_fault
0.46% stap [kernel.vmlinux] [k] __pagevec_lru_add_fn
0.46% stap stap [.] std::_Rb_tree<match_key, std::pair<match_key const, match_node*>, std::_Select1st<std::pair<match_key const, match_node*> >, std::less<match_key>, std::allocator<std::pair<match_key const, match_node*> > >::_M_emplace_unique<std::pair<match_key, match_node*> >
0.42% stap libstdc++.so.6.0.26 [.] 0x00000000000c18fa
0.40% stap [kernel.vmlinux] [k] interrupt_entry
0.40% stap [kernel.vmlinux] [k] update_load_avg
0.40% stap [kernel.vmlinux] [k] __intel_pmu_disable_all
0.40% stap [kernel.vmlinux] [k] clear_page_erms
0.39% stap [kernel.vmlinux] [k] __mod_node_page_state
0.39% stap [kernel.vmlinux] [k] error_entry
0.39% stap [kernel.vmlinux] [k] sync_regs
0.38% stap [kernel.vmlinux] [k] __handle_mm_fault
0.38% stap stap [.] derive_probes
#
# (Tip: System-wide collection from all CPUs: perf record -a)
#
[root@quaco testsuite]#
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: William Cohen <wcohen@redhat.com>
Link: https://lkml.kernel.org/n/tip-408hvumcnyn93a0auihnawew@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
#include "util/evswitch.h"
#include "util/header.h"
#include "util/session.h"
#include "util/srcline.h"
#include "util/tool.h"

#include <subcmd/parse-options.h>
#include <subcmd/exec-cmd.h>
#include "util/parse-events.h"

#include "util/thread.h"
#include "util/sort.h"
#include "util/hist.h"
#include "util/data.h"
#include "arch/common.h"

perf report: Add option to specify time window of interest
Add an option to allow the user to control the analysis window, e.g.,
collect data for a time window and analyze a segment of interest within
that window.
Committer notes:
Testing it:
Using the perf.data file captured via 'perf kmem record':
# perf report --header-only
# ========
# captured on: Tue Nov 29 16:01:53 2016
# hostname : jouet
# os release : 4.8.8-300.fc25.x86_64
# perf version : 4.9.rc6.g5a6aca
# arch : x86_64
# nrcpus online : 4
# nrcpus avail : 4
# cpudesc : Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz
# cpuid : GenuineIntel,6,61,4
# total memory : 20254660 kB
# cmdline : /home/acme/bin/perf kmem record usleep 1
# event : name = kmem:kmalloc, , id = { 931980, 931981, 931982, 931983 }, type = 2, size = 112, config = 0x1b9, { sample_period, sample_freq } = 1, sample_typ
# event : name = kmem:kmalloc_node, , id = { 931984, 931985, 931986, 931987 }, type = 2, size = 112, config = 0x1b7, { sample_period, sample_freq } = 1, sampl
# event : name = kmem:kfree, , id = { 931988, 931989, 931990, 931991 }, type = 2, size = 112, config = 0x1b5, { sample_period, sample_freq } = 1, sample_type
# event : name = kmem:kmem_cache_alloc, , id = { 931992, 931993, 931994, 931995 }, type = 2, size = 112, config = 0x1b8, { sample_period, sample_freq } = 1, s
# event : name = kmem:kmem_cache_alloc_node, , id = { 931996, 931997, 931998, 931999 }, type = 2, size = 112, config = 0x1b6, { sample_period, sample_freq } =
# event : name = kmem:kmem_cache_free, , id = { 932000, 932001, 932002, 932003 }, type = 2, size = 112, config = 0x1b4, { sample_period, sample_freq } = 1, sa
# HEADER_CPU_TOPOLOGY info available, use -I to display
# HEADER_NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: cpu = 4, intel_pt = 7, intel_bts = 6, uncore_arb = 13, cstate_pkg = 15, breakpoint = 5, uncore_cbox_1 = 12, power = 9, software = 1, uncore_im
# HEADER_CACHE info available, use -I to display
# missing features: HEADER_BRANCH_STACK HEADER_GROUP_DESC HEADER_AUXTRACE HEADER_STAT
# ========
#
# # Looking at just the histogram entries for the first event:
#
# perf report | head -33
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 40 of event 'kmem:kmalloc'
# Event count (approx.): 40
#
# Overhead Trace output
# ........ ...............................................................................................................
#
37.50% call_site=ffffffffb91ad3c7 ptr=0xffff88895fc05000 bytes_req=4096 bytes_alloc=4096 gfp_flags=GFP_KERNEL
10.00% call_site=ffffffffb9258416 ptr=0xffff888a1dc61f00 bytes_req=240 bytes_alloc=256 gfp_flags=GFP_KERNEL|__GFP_ZERO
7.50% call_site=ffffffffb9258416 ptr=0xffff888a2640ac00 bytes_req=240 bytes_alloc=256 gfp_flags=GFP_KERNEL|__GFP_ZERO
2.50% call_site=ffffffffb92759ba ptr=0xffff888a26776000 bytes_req=4096 bytes_alloc=4096 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb9276864 ptr=0xffff8886f6b82600 bytes_req=136 bytes_alloc=192 gfp_flags=GFP_KERNEL|__GFP_ZERO
2.50% call_site=ffffffffb9276903 ptr=0xffff888aefcf0460 bytes_req=32 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad0ce ptr=0xffff888756c98a00 bytes_req=392 bytes_alloc=512 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad0ce ptr=0xffff888756c9ba00 bytes_req=504 bytes_alloc=512 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad301 ptr=0xffff888a31747600 bytes_req=128 bytes_alloc=128 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad511 ptr=0xffff888a9d26a2a0 bytes_req=28 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c11a0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c12c0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c1540 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c15a0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c15e0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c16e0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c1c20 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff888a9d26a2a0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb9373e66 ptr=0xffff8889f1931240 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_ATOMIC|__GFP_ZERO
2.50% call_site=ffffffffb9373e66 ptr=0xffff8889f1931980 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_ATOMIC|__GFP_ZERO
2.50% call_site=ffffffffb9373e66 ptr=0xffff8889f1931a00 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_ATOMIC|__GFP_ZERO
#
# # And then limiting using the example for 'perf kmem stat --time' used
# # in the previous changeset committer note we see that there were no
# # kmem:kmalloc in that last part of the file, but there were some
# # kmem:kmem_cache_alloc ones:
#
# perf report --time 20119.782088, --stdio
#
# Total Lost Samples: 0
#
# Samples: 0 of event 'kmem:kmalloc'
# Event count (approx.): 0
#
# Overhead Trace output
# ........ ............
#
# Samples: 0 of event 'kmem:kmalloc_node'
# Event count (approx.): 0
#
# Overhead Trace output
# ........ ............
#
# Samples: 0 of event 'kmem:kfree'
# Event count (approx.): 0
#
# Overhead Trace output
# ........ ............
#
# Samples: 8 of event 'kmem:kmem_cache_alloc'
# Event count (approx.): 8
#
# Overhead Trace output
# ........ ..................................................................................................................
#
75.00% call_site=ffffffffb9333b42 ptr=0xffff888bdf1a39c0 bytes_req=48 bytes_alloc=48 gfp_flags=GFP_NOFS|__GFP_ZERO
12.50% call_site=ffffffffb90ad33a ptr=0xffff8889f071f6e0 bytes_req=160 bytes_alloc=160 gfp_flags=GFP_ATOMIC|__GFP_NOTRACK
12.50% call_site=ffffffffb9287cc1 ptr=0xffff8889b12722d8 bytes_req=104 bytes_alloc=104 gfp_flags=GFP_NOFS|__GFP_ZERO
#
Signed-off-by: David Ahern <dsahern@gmail.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1480439746-42695-7-git-send-email-dsahern@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
#include "util/time-utils.h"
#include "util/auxtrace.h"
#include "util/units.h"
#include "util/util.h" // perf_tip()
#include "ui/ui.h"
#include "ui/progress.h"

perf report: Sort by sampled cycles percent per block for stdio
It would be useful to support sorting for all blocks by the sampled
cycles percent per block. This is useful to concentrate on the globally
hottest blocks.
This patch implements a new option "--total-cycles" which sorts all
blocks by 'Sampled Cycles%'. The 'Sampled Cycles%' is the percent:
percent = block sampled cycles aggregation / total sampled cycles
Note that this patch only supports "--stdio" mode.
For example,
# perf record -b ./div
# perf report --total-cycles --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 2M of event 'cycles'
# Event count (approx.): 2753248
#
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
# ............... .............. ........... .......... ................................................ .................
#
26.04% 2.8M 0.40% 18 [div.c:42 -> div.c:39] div
15.17% 1.2M 0.16% 7 [random_r.c:357 -> random_r.c:380] libc-2.27.so
5.11% 402.0K 0.04% 2 [div.c:27 -> div.c:28] div
4.87% 381.6K 0.04% 2 [random.c:288 -> random.c:291] libc-2.27.so
4.53% 381.0K 0.04% 2 [div.c:40 -> div.c:40] div
3.85% 300.9K 0.02% 1 [div.c:22 -> div.c:25] div
3.08% 241.1K 0.02% 1 [rand.c:26 -> rand.c:27] libc-2.27.so
3.06% 240.0K 0.02% 1 [random.c:291 -> random.c:291] libc-2.27.so
2.78% 215.7K 0.02% 1 [random.c:298 -> random.c:298] libc-2.27.so
2.52% 198.3K 0.02% 1 [random.c:293 -> random.c:293] libc-2.27.so
2.36% 184.8K 0.02% 1 [rand.c:28 -> rand.c:28] libc-2.27.so
2.33% 180.5K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so
2.28% 176.7K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so
2.20% 168.8K 0.02% 1 [rand@plt+0 -> rand@plt+0] div
1.98% 158.2K 0.02% 1 [random_r.c:388 -> random_r.c:388] libc-2.27.so
1.57% 123.3K 0.02% 1 [div.c:42 -> div.c:44] div
1.44% 116.0K 0.42% 19 [random_r.c:357 -> random_r.c:394] libc-2.27.so
0.25% 182.5K 0.02% 1 [random_r.c:388 -> random_r.c:391] libc-2.27.so
0.00% 48 1.07% 48 [x86_pmu_enable+284 -> x86_pmu_enable+298] [kernel.kallsyms]
0.00% 74 1.64% 74 [vm_mmap_pgoff+0 -> vm_mmap_pgoff+92] [kernel.kallsyms]
0.00% 73 1.62% 73 [vm_mmap+0 -> vm_mmap+48] [kernel.kallsyms]
0.00% 63 0.69% 31 [up_write+0 -> up_write+34] [kernel.kallsyms]
0.00% 13 0.29% 13 [setup_arg_pages+396 -> setup_arg_pages+413] [kernel.kallsyms]
0.00% 3 0.07% 3 [setup_arg_pages+418 -> setup_arg_pages+450] [kernel.kallsyms]
0.00% 616 6.84% 308 [security_mmap_file+0 -> security_mmap_file+72] [kernel.kallsyms]
0.00% 23 0.51% 23 [security_mmap_file+77 -> security_mmap_file+87] [kernel.kallsyms]
0.00% 4 0.02% 1 [sched_clock+0 -> sched_clock+4] [kernel.kallsyms]
0.00% 4 0.02% 1 [sched_clock+9 -> sched_clock+12] [kernel.kallsyms]
0.00% 1 0.02% 1 [rcu_nmi_exit+0 -> rcu_nmi_exit+9] [kernel.kallsyms]
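As a rough reading of the first row with the formula above: the block
[div.c:42 -> div.c:39] accumulated about 2.8M sampled cycles, i.e. 26.04%
of all cycles sampled in the session (implying roughly 2.8M / 0.2604, or
about 10.8M, sampled cycles in total), while its average cost per
execution is only about 18 cycles.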
Committer testing:
This should provide material for hours of endless joy, both from looking
for suspicious things in the implementation of this patch, such as the
top one:
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux]
As well as from things that look legit:
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
0.16% 123.0K 0.60% 4.7K [nospec-branch.h:265 -> nospec-branch.h:278] [kernel.vmlinux]
:-)
Very short system wide taken branches session:
# perf record -h -b
Usage: perf record [<options>] [<command>]
or: perf record [<options>] -- <command> [<options>]
-b, --branch-any sample any taken branches
#
# perf record -b
^C[ perf record: Woken up 595 times to write data ]
[ perf record: Captured and wrote 156.672 MB perf.data (196873 samples) ]
#
# perf evlist -v
cycles: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|CPU|PERIOD|BRANCH_STACK, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1, branch_sample_type: ANY
#
# perf report --total-cycles --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 6M of event 'cycles'
# Event count (approx.): 6299936
#
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
# ............... .............. ........... .......... ...................................................................... ....................
#
2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux]
1.75% 1.3M 8.34% 65.5K [memset-vec-unaligned-erms.S:147 -> memset-vec-unaligned-erms.S:151] libc-2.29.so
0.72% 544.5K 0.03% 230 [entry_64.S:657 -> entry_64.S:662] [kernel.vmlinux]
0.56% 541.8K 0.09% 672 [compiler.h:199 -> common.c:300] [kernel.vmlinux]
0.39% 293.2K 0.01% 104 [list_debug.c:43 -> list_debug.c:61] [kernel.vmlinux]
0.36% 278.6K 0.03% 272 [entry_64.S:1289 -> entry_64.S:1308] [kernel.vmlinux]
0.30% 260.8K 0.07% 564 [clear_page_64.S:47 -> clear_page_64.S:50] [kernel.vmlinux]
0.28% 215.3K 0.05% 369 [traps.c:623 -> traps.c:628] [kernel.vmlinux]
0.23% 178.1K 0.04% 278 [entry_64.S:271 -> entry_64.S:275] [kernel.vmlinux]
0.20% 152.6K 0.09% 706 [paravirt.c:177 -> paravirt.c:179] [kernel.vmlinux]
0.20% 155.8K 0.05% 373 [entry_64.S:153 -> entry_64.S:175] [kernel.vmlinux]
0.18% 136.6K 0.03% 222 [msr.h:105 -> msr.h:166] [kernel.vmlinux]
0.16% 123.0K 0.60% 4.7K [nospec-branch.h:265 -> nospec-branch.h:278] [kernel.vmlinux]
0.16% 118.3K 0.01% 44 [entry_64.S:632 -> entry_64.S:657] [kernel.vmlinux]
0.14% 104.5K 0.00% 28 [rwsem.c:1541 -> rwsem.c:1544] [kernel.vmlinux]
0.13% 99.2K 0.01% 53 [spinlock.c:150 -> spinlock.c:152] [kernel.vmlinux]
0.13% 95.5K 0.00% 35 [swap.c:456 -> swap.c:471] [kernel.vmlinux]
0.12% 96.2K 0.05% 407 [copy_user_64.S:175 -> copy_user_64.S:209] [kernel.vmlinux]
0.11% 85.9K 0.00% 31 [swap.c:400 -> page-flags.h:188] [kernel.vmlinux]
0.10% 73.0K 0.01% 52 [paravirt.h:763 -> list.h:131] [kernel.vmlinux]
0.07% 56.2K 0.03% 214 [filemap.c:1524 -> filemap.c:1557] [kernel.vmlinux]
0.07% 54.2K 0.02% 145 [memory.c:1032 -> memory.c:1049] [kernel.vmlinux]
0.07% 50.3K 0.00% 39 [mmzone.c:49 -> mmzone.c:69] [kernel.vmlinux]
0.06% 48.3K 0.01% 40 [paravirt.h:768 -> page_alloc.c:3304] [kernel.vmlinux]
0.06% 46.7K 0.02% 155 [memory.c:1032 -> memory.c:1056] [kernel.vmlinux]
0.06% 46.9K 0.01% 103 [swap.c:867 -> swap.c:902] [kernel.vmlinux]
0.06% 47.8K 0.00% 34 [entry_64.S:1201 -> entry_64.S:1202] [kernel.vmlinux]
-----------------------------------------------------------
v7:
---
Use use_browser in report__browse_block_hists for supporting
stdio and potential tui mode.
v6:
---
Create report__browse_block_hists in block-info.c (codes are
moved from builtin-report.c). It's called from
perf_evlist__tty_browse_hists.
v5:
---
1. Move all block functions to block-info.c
2. Move the code of setting ms in block hist_entry to
other patch.
v4:
---
1. Use new option '--total-cycles' to replace
'-s total_cycles' in v3.
2. Move block info collection out of block info
printing.
v3:
---
1. Use common function block_info__process_sym to
process the blocks per symbol.
2. Remove the nasty hack for skipping calculation
of column length
3. Some minor cleanup
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20191107074719.26139-6-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
#include "util/block-info.h"

#include <dlfcn.h>
#include <errno.h>
#include <inttypes.h>
#include <regex.h>

tools perf: Move from sane_ctype.h obtained from git to the Linux's original
We got the sane_ctype.h headers from git and have kept using them so far,
but since that code originally came from the kernel sources to the git
sources, perhaps it's better to just use the one in the kernel, so that
we can leverage tools/perf/check_headers.sh to be notified when our copy
gets out of sync, i.e. when fixes or goodies are added to the code we've
copied.
This will help with things like tools/lib/string.c where we want to have
more things in common with the kernel, such as strim(), skip_spaces(),
etc so as to go on removing the things that we have in tools/perf/util/
and instead using the code in the kernel, indirectly and removing things
like EXPORT_SYMBOL(), etc, getting notified when fixes and improvements
are made to the original code.
Hopefully this also should help with reducing the difference of code
hosted in tools/ to the one in the kernel proper.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lkml.kernel.org/n/tip-7k9868l713wqtgo01xxygn12@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
#include <linux/ctype.h>
#include <signal.h>
#include <linux/bitmap.h>
#include <linux/string.h>
#include <linux/stringify.h>
#include <linux/time64.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <linux/mman.h>

perf build: Use libtraceevent from the system
Remove the LIBTRACEEVENT_DYNAMIC and LIBTRACEFS_DYNAMIC make command
line variables.
If libtraceevent isn't installed or NO_LIBTRACEEVENT=1 is passed to the
build, don't compile in libtraceevent and libtracefs support.
This also disables CONFIG_TRACE that controls "perf trace".
CONFIG_LIBTRACEEVENT is used to control enablement in Build/Makefiles,
HAVE_LIBTRACEEVENT is used in C code.
Without HAVE_LIBTRACEEVENT tracepoints are disabled and as such the
commands kmem, kwork, lock, sched and timechart are removed. The
majority of commands continue to work including "perf test".
Committer notes:
Fixed up a tools/perf/util/Build reject and added:
#include <traceevent/event-parse.h>
to tools/perf/util/scripting-engines/trace-event-perl.c.
Committer testing:
$ rpm -qi libtraceevent-devel
Name : libtraceevent-devel
Version : 1.5.3
Release : 2.fc36
Architecture: x86_64
Install Date: Mon 25 Jul 2022 03:20:19 PM -03
Group : Unspecified
Size : 27728
License : LGPLv2+ and GPLv2+
Signature : RSA/SHA256, Fri 15 Apr 2022 02:11:58 PM -03, Key ID 999f7cbf38ab71f4
Source RPM : libtraceevent-1.5.3-2.fc36.src.rpm
Build Date : Fri 15 Apr 2022 10:57:01 AM -03
Build Host : buildvm-x86-05.iad2.fedoraproject.org
Packager : Fedora Project
Vendor : Fedora Project
URL : https://git.kernel.org/pub/scm/libs/libtrace/libtraceevent.git/
Bug URL : https://bugz.fedoraproject.org/libtraceevent
Summary : Development headers of libtraceevent
Description :
Development headers of libtraceevent-libs
$
Default build:
$ ldd ~/bin/perf | grep tracee
libtraceevent.so.1 => /lib64/libtraceevent.so.1 (0x00007f1dcaf8f000)
$
# perf trace -e sched:* --max-events 10
0.000 migration/0/17 sched:sched_migrate_task(comm: "", pid: 1603763 (perf), prio: 120, dest_cpu: 1)
0.005 migration/0/17 sched:sched_wake_idle_without_ipi(cpu: 1)
0.011 migration/0/17 sched:sched_switch(prev_comm: "", prev_pid: 17 (migration/0), prev_state: 1, next_comm: "", next_prio: 120)
1.173 :0/0 sched:sched_wakeup(comm: "", pid: 3138 (gnome-terminal-), prio: 120)
1.180 :0/0 sched:sched_switch(prev_comm: "", prev_prio: 120, next_comm: "", next_pid: 3138 (gnome-terminal-), next_prio: 120)
0.156 migration/1/21 sched:sched_migrate_task(comm: "", pid: 1603763 (perf), prio: 120, orig_cpu: 1, dest_cpu: 2)
0.160 migration/1/21 sched:sched_wake_idle_without_ipi(cpu: 2)
0.166 migration/1/21 sched:sched_switch(prev_comm: "", prev_pid: 21 (migration/1), prev_state: 1, next_comm: "", next_prio: 120)
1.183 :0/0 sched:sched_wakeup(comm: "", pid: 1602985 (kworker/u16:0-f), prio: 120, target_cpu: 1)
1.186 :0/0 sched:sched_switch(prev_comm: "", prev_prio: 120, next_comm: "", next_pid: 1602985 (kworker/u16:0-f), next_prio: 120)
#
Had to tweak tools/perf/util/setup.py to make sure the python binding
shared object links with libtraceevent if -DHAVE_LIBTRACEEVENT is
present in CFLAGS.
Building with NO_LIBTRACEEVENT=1 uncovered some more build failures:
- Make building of data-convert-bt.c conditional on CONFIG_LIBTRACEEVENT=y
- perf-$(CONFIG_LIBTRACEEVENT) += scripts/
- bpf_kwork.o needs also to be dependent on CONFIG_LIBTRACEEVENT=y
- The python binding needed some fixups and util/trace-event.c can't be
built and linked with the python binding shared object, so remove it
in tools/perf/util/setup.py and exclude it from the list of
dependencies in the python/perf.so Makefile.perf target.
Building without libtraceevent-devel installed uncovered more build
failures:
- The python binding tools/perf/util/python.c was assuming that
traceevent/parse-events.h was always available, which was the case
when we defaulted to using the in-kernel tools/lib/traceevent/ files,
now we need to enclose it under ifdef HAVE_LIBTRACEEVENT, just like
the other parts of it that deal with tracepoints.
- We have to ifdef the rules in the Build files with
CONFIG_LIBTRACEEVENT=y to build builtin-trace.c and
tools/perf/trace/beauty/ as we only ifdef setting CONFIG_TRACE=y when
setting NO_LIBTRACEEVENT=1 in the make command line, not when we don't
detect libtraceevent-devel installed in the system. Simplification here
to avoid these two ways of disabling builtin-trace.c and not having
CONFIG_TRACE=y when libtraceevent-devel isn't installed is the clean
way.
From Athira:
<quote>
tools/perf/arch/powerpc/util/Build
-perf-y += kvm-stat.o
+perf-$(CONFIG_LIBTRACEEVENT) += kvm-stat.o
</quote>
Then, ditto for arm64 and s390, detected by container cross build tests.
- s/390 uses test__checkevent_tracepoint() that is now only available if
HAVE_LIBTRACEEVENT is defined, so enclose the callsite with ifdef HAVE_LIBTRACEEVENT.
Also from Athira:
<quote>
With this change, I could successfully compile in these environment:
- Without libtraceevent-devel installed
- With libtraceevent-devel installed
- With “make NO_LIBTRACEEVENT=1”
</quote>
Then, finally rename CONFIG_TRACEEVENT to CONFIG_LIBTRACEEVENT for
consistency with other libraries detected in tools/perf/.
Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: bpf@vger.kernel.org
Link: http://lore.kernel.org/lkml/20221205225940.3079667-3-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
#ifdef HAVE_LIBTRACEEVENT
#include <traceevent/event-parse.h>
#endif

struct report {
        struct perf_tool                tool;
        struct perf_session             *session;
        struct evswitch                 evswitch;
#ifdef HAVE_SLANG_SUPPORT
        bool                            use_tui;
#endif
#ifdef HAVE_GTK2_SUPPORT
        bool                            use_gtk;
#endif
        bool                            use_stdio;
        bool                            show_full_info;
        bool                            show_threads;
        bool                            inverted_callchain;
        bool                            mem_mode;
        bool                            stats_mode;
        bool                            tasks_mode;
        bool                            mmaps_mode;
        bool                            header;
        bool                            header_only;
        bool                            nonany_branch_mode;
        bool                            group_set;
        bool                            stitch_lbr;
        bool                            disable_order;
        bool                            skip_empty;

perf report: Add --max-stack option to limit callchain stack scan
When callgraph data is included in the perf data file, it may take a
long time to scan all that data and merge it together, especially if
the stored callchains are long and the perf data file itself is large,
like a Gbyte or so.
The callchain stack is currently limited to PERF_MAX_STACK_DEPTH (127).
This is a large value. Usually the callgraph data that developers are
most interested in are the first few levels, the rests are usually not
looked at.
This patch adds a new --max-stack option to perf-report to limit the
depth of callchain stack data to look at, reducing the time it takes for
perf-report to finish its processing. It trades the presence of trailing
stack information for faster speed.
The following table shows the elapsed time of doing perf-report on a
perf.data file of size 985,531,828 bytes.
--max_stack   Elapsed Time   Output data size
-----------   ------------   ----------------
    not set       88.0s         124,422,651
         64       87.5s         116,303,213
         32       87.2s         112,023,804
         16       86.6s          94,326,380
          8       59.9s          33,697,248
          4       40.7s          10,116,637
    -g none       27.1s           2,555,810
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1382107129-2010-4-git-send-email-Waiman.Long@hp.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
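As a usage sketch (the depth of 8 is just an illustration, not taken from
the measurements above), limiting the report to the first 8 callchain
levels looks like:
  # perf report --max-stack 8 --stdio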
        int                             max_stack;
        struct perf_read_values         show_threads_values;
        struct annotation_options       annotation_opts;
        const char                      *pretty_printing_style;
        const char                      *cpu_list;
        const char                      *symbol_filter_str;
        const char                      *time_str;

perf report: Remove the time slices number limitation
Previously it was only allowed to use at most 10 time slices in 'perf
report --time'.
This patch removes this limitation.
For example, the following command line is OK (12 time slices):
perf report --stdio --time 1%/1,1%/2,1%/3,1%/4,1%/5,1%/6,1%/7,1%/8,1%/9,1%/10,1%/11,1%/12
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Suggested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1515596433-24653-8-git-send-email-yao.jin@linux.intel.com
[ No need to check for NULL to call free, use zfree ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
        struct perf_time_interval      *ptime_range;
        int                             range_size;
        int                             range_num;
        float                           min_percent;
        u64                             nr_entries;
        u64                             queue_size;
perf report: Sort by sampled cycles percent per block for stdio
It would be useful to support sorting all blocks by the sampled
cycles percent per block. This helps to concentrate on the globally
hottest blocks.
This patch implements a new option "--total-cycles" which sorts all
blocks by 'Sampled Cycles%'. The 'Sampled Cycles%' is the percent:
percent = block sampled cycles aggregation / total sampled cycles
Note that this patch only supports "--stdio" mode.
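As a minimal illustrative sketch (not the patch's code; the struct and helper below are made up), the column is simply each block's aggregated sampled cycles divided by the total:
#include <stdio.h>

struct blk { const char *range; unsigned long long cycles; };

/* percent = block sampled cycles aggregation / total sampled cycles */
static void print_sampled_cycles_pct(const struct blk *blks, int nr)
{
	unsigned long long total = 0;
	int i;

	for (i = 0; i < nr; i++)
		total += blks[i].cycles;
	if (total == 0)
		return;
	for (i = 0; i < nr; i++)
		printf("%7.2f%%  %s\n",
		       100.0 * blks[i].cycles / total, blks[i].range);
}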
For example,
# perf record -b ./div
# perf report --total-cycles --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 2M of event 'cycles'
# Event count (approx.): 2753248
#
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
# ............... .............. ........... .......... ................................................ .................
#
26.04% 2.8M 0.40% 18 [div.c:42 -> div.c:39] div
15.17% 1.2M 0.16% 7 [random_r.c:357 -> random_r.c:380] libc-2.27.so
5.11% 402.0K 0.04% 2 [div.c:27 -> div.c:28] div
4.87% 381.6K 0.04% 2 [random.c:288 -> random.c:291] libc-2.27.so
4.53% 381.0K 0.04% 2 [div.c:40 -> div.c:40] div
3.85% 300.9K 0.02% 1 [div.c:22 -> div.c:25] div
3.08% 241.1K 0.02% 1 [rand.c:26 -> rand.c:27] libc-2.27.so
3.06% 240.0K 0.02% 1 [random.c:291 -> random.c:291] libc-2.27.so
2.78% 215.7K 0.02% 1 [random.c:298 -> random.c:298] libc-2.27.so
2.52% 198.3K 0.02% 1 [random.c:293 -> random.c:293] libc-2.27.so
2.36% 184.8K 0.02% 1 [rand.c:28 -> rand.c:28] libc-2.27.so
2.33% 180.5K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so
2.28% 176.7K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so
2.20% 168.8K 0.02% 1 [rand@plt+0 -> rand@plt+0] div
1.98% 158.2K 0.02% 1 [random_r.c:388 -> random_r.c:388] libc-2.27.so
1.57% 123.3K 0.02% 1 [div.c:42 -> div.c:44] div
1.44% 116.0K 0.42% 19 [random_r.c:357 -> random_r.c:394] libc-2.27.so
0.25% 182.5K 0.02% 1 [random_r.c:388 -> random_r.c:391] libc-2.27.so
0.00% 48 1.07% 48 [x86_pmu_enable+284 -> x86_pmu_enable+298] [kernel.kallsyms]
0.00% 74 1.64% 74 [vm_mmap_pgoff+0 -> vm_mmap_pgoff+92] [kernel.kallsyms]
0.00% 73 1.62% 73 [vm_mmap+0 -> vm_mmap+48] [kernel.kallsyms]
0.00% 63 0.69% 31 [up_write+0 -> up_write+34] [kernel.kallsyms]
0.00% 13 0.29% 13 [setup_arg_pages+396 -> setup_arg_pages+413] [kernel.kallsyms]
0.00% 3 0.07% 3 [setup_arg_pages+418 -> setup_arg_pages+450] [kernel.kallsyms]
0.00% 616 6.84% 308 [security_mmap_file+0 -> security_mmap_file+72] [kernel.kallsyms]
0.00% 23 0.51% 23 [security_mmap_file+77 -> security_mmap_file+87] [kernel.kallsyms]
0.00% 4 0.02% 1 [sched_clock+0 -> sched_clock+4] [kernel.kallsyms]
0.00% 4 0.02% 1 [sched_clock+9 -> sched_clock+12] [kernel.kallsyms]
0.00% 1 0.02% 1 [rcu_nmi_exit+0 -> rcu_nmi_exit+9] [kernel.kallsyms]
Committer testing:
This should provide material for hours of endless joy, both from looking
for suspicious things in the implementation of this patch, such as the
top one:
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux]
As well as from things that look legit:
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
0.16% 123.0K 0.60% 4.7K [nospec-branch.h:265 -> nospec-branch.h:278] [kernel.vmlinux]
:-)
Very short system wide taken branches session:
# perf record -h -b
Usage: perf record [<options>] [<command>]
or: perf record [<options>] -- <command> [<options>]
-b, --branch-any sample any taken branches
#
# perf record -b
^C[ perf record: Woken up 595 times to write data ]
[ perf record: Captured and wrote 156.672 MB perf.data (196873 samples) ]
#
# perf evlist -v
cycles: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|CPU|PERIOD|BRANCH_STACK, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1, branch_sample_type: ANY
#
# perf report --total-cycles --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 6M of event 'cycles'
# Event count (approx.): 6299936
#
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
# ............... .............. ........... .......... ...................................................................... ....................
#
2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux]
1.75% 1.3M 8.34% 65.5K [memset-vec-unaligned-erms.S:147 -> memset-vec-unaligned-erms.S:151] libc-2.29.so
0.72% 544.5K 0.03% 230 [entry_64.S:657 -> entry_64.S:662] [kernel.vmlinux]
0.56% 541.8K 0.09% 672 [compiler.h:199 -> common.c:300] [kernel.vmlinux]
0.39% 293.2K 0.01% 104 [list_debug.c:43 -> list_debug.c:61] [kernel.vmlinux]
0.36% 278.6K 0.03% 272 [entry_64.S:1289 -> entry_64.S:1308] [kernel.vmlinux]
0.30% 260.8K 0.07% 564 [clear_page_64.S:47 -> clear_page_64.S:50] [kernel.vmlinux]
0.28% 215.3K 0.05% 369 [traps.c:623 -> traps.c:628] [kernel.vmlinux]
0.23% 178.1K 0.04% 278 [entry_64.S:271 -> entry_64.S:275] [kernel.vmlinux]
0.20% 152.6K 0.09% 706 [paravirt.c:177 -> paravirt.c:179] [kernel.vmlinux]
0.20% 155.8K 0.05% 373 [entry_64.S:153 -> entry_64.S:175] [kernel.vmlinux]
0.18% 136.6K 0.03% 222 [msr.h:105 -> msr.h:166] [kernel.vmlinux]
0.16% 123.0K 0.60% 4.7K [nospec-branch.h:265 -> nospec-branch.h:278] [kernel.vmlinux]
0.16% 118.3K 0.01% 44 [entry_64.S:632 -> entry_64.S:657] [kernel.vmlinux]
0.14% 104.5K 0.00% 28 [rwsem.c:1541 -> rwsem.c:1544] [kernel.vmlinux]
0.13% 99.2K 0.01% 53 [spinlock.c:150 -> spinlock.c:152] [kernel.vmlinux]
0.13% 95.5K 0.00% 35 [swap.c:456 -> swap.c:471] [kernel.vmlinux]
0.12% 96.2K 0.05% 407 [copy_user_64.S:175 -> copy_user_64.S:209] [kernel.vmlinux]
0.11% 85.9K 0.00% 31 [swap.c:400 -> page-flags.h:188] [kernel.vmlinux]
0.10% 73.0K 0.01% 52 [paravirt.h:763 -> list.h:131] [kernel.vmlinux]
0.07% 56.2K 0.03% 214 [filemap.c:1524 -> filemap.c:1557] [kernel.vmlinux]
0.07% 54.2K 0.02% 145 [memory.c:1032 -> memory.c:1049] [kernel.vmlinux]
0.07% 50.3K 0.00% 39 [mmzone.c:49 -> mmzone.c:69] [kernel.vmlinux]
0.06% 48.3K 0.01% 40 [paravirt.h:768 -> page_alloc.c:3304] [kernel.vmlinux]
0.06% 46.7K 0.02% 155 [memory.c:1032 -> memory.c:1056] [kernel.vmlinux]
0.06% 46.9K 0.01% 103 [swap.c:867 -> swap.c:902] [kernel.vmlinux]
0.06% 47.8K 0.00% 34 [entry_64.S:1201 -> entry_64.S:1202] [kernel.vmlinux]
-----------------------------------------------------------
v7:
---
Use use_browser in report__browse_block_hists to support
stdio and a potential TUI mode.
v6:
---
Create report__browse_block_hists in block-info.c (code is
moved from builtin-report.c). It's called from
perf_evlist__tty_browse_hists.
v5:
---
1. Move all block functions to block-info.c
2. Move the code of setting ms in block hist_entry to
another patch.
v4:
---
1. Use new option '--total-cycles' to replace
'-s total_cycles' in v3.
2. Move block info collection out of block info
printing.
v3:
---
1. Use common function block_info__process_sym to
process the blocks per symbol.
2. Remove the nasty hack for skipping calculation
of column length
3. Some minor cleanup
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20191107074719.26139-6-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-07 07:47:17 +00:00
|
|
|
u64 total_cycles;
|
2015-09-04 14:45:44 +00:00
|
|
|
int socket_filter;
|
2011-11-17 14:19:04 +00:00
|
|
|
DECLARE_BITMAP(cpu_bitmap, MAX_NR_CPUS);
|
2017-07-18 12:13:14 +00:00
|
|
|
struct branch_type_stat brtype_stat;
|
2018-11-30 13:54:56 +00:00
|
|
|
bool symbol_ipc;
|
perf report: Sort by sampled cycles percent per block for stdio
It would be useful to support sorting all blocks by the sampled
cycles percent per block. This helps to concentrate on the globally
hottest blocks.
This patch implements a new option "--total-cycles" which sorts all
blocks by 'Sampled Cycles%'. The 'Sampled Cycles%' is the percent:
percent = block sampled cycles aggregation / total sampled cycles
Note that this patch only supports "--stdio" mode.
For example,
# perf record -b ./div
# perf report --total-cycles --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 2M of event 'cycles'
# Event count (approx.): 2753248
#
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
# ............... .............. ........... .......... ................................................ .................
#
26.04% 2.8M 0.40% 18 [div.c:42 -> div.c:39] div
15.17% 1.2M 0.16% 7 [random_r.c:357 -> random_r.c:380] libc-2.27.so
5.11% 402.0K 0.04% 2 [div.c:27 -> div.c:28] div
4.87% 381.6K 0.04% 2 [random.c:288 -> random.c:291] libc-2.27.so
4.53% 381.0K 0.04% 2 [div.c:40 -> div.c:40] div
3.85% 300.9K 0.02% 1 [div.c:22 -> div.c:25] div
3.08% 241.1K 0.02% 1 [rand.c:26 -> rand.c:27] libc-2.27.so
3.06% 240.0K 0.02% 1 [random.c:291 -> random.c:291] libc-2.27.so
2.78% 215.7K 0.02% 1 [random.c:298 -> random.c:298] libc-2.27.so
2.52% 198.3K 0.02% 1 [random.c:293 -> random.c:293] libc-2.27.so
2.36% 184.8K 0.02% 1 [rand.c:28 -> rand.c:28] libc-2.27.so
2.33% 180.5K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so
2.28% 176.7K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so
2.20% 168.8K 0.02% 1 [rand@plt+0 -> rand@plt+0] div
1.98% 158.2K 0.02% 1 [random_r.c:388 -> random_r.c:388] libc-2.27.so
1.57% 123.3K 0.02% 1 [div.c:42 -> div.c:44] div
1.44% 116.0K 0.42% 19 [random_r.c:357 -> random_r.c:394] libc-2.27.so
0.25% 182.5K 0.02% 1 [random_r.c:388 -> random_r.c:391] libc-2.27.so
0.00% 48 1.07% 48 [x86_pmu_enable+284 -> x86_pmu_enable+298] [kernel.kallsyms]
0.00% 74 1.64% 74 [vm_mmap_pgoff+0 -> vm_mmap_pgoff+92] [kernel.kallsyms]
0.00% 73 1.62% 73 [vm_mmap+0 -> vm_mmap+48] [kernel.kallsyms]
0.00% 63 0.69% 31 [up_write+0 -> up_write+34] [kernel.kallsyms]
0.00% 13 0.29% 13 [setup_arg_pages+396 -> setup_arg_pages+413] [kernel.kallsyms]
0.00% 3 0.07% 3 [setup_arg_pages+418 -> setup_arg_pages+450] [kernel.kallsyms]
0.00% 616 6.84% 308 [security_mmap_file+0 -> security_mmap_file+72] [kernel.kallsyms]
0.00% 23 0.51% 23 [security_mmap_file+77 -> security_mmap_file+87] [kernel.kallsyms]
0.00% 4 0.02% 1 [sched_clock+0 -> sched_clock+4] [kernel.kallsyms]
0.00% 4 0.02% 1 [sched_clock+9 -> sched_clock+12] [kernel.kallsyms]
0.00% 1 0.02% 1 [rcu_nmi_exit+0 -> rcu_nmi_exit+9] [kernel.kallsyms]
Committer testing:
This should provide material for hours of endless joy, both from looking
for suspicious things in the implementation of this patch, such as the
top one:
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux]
As well as from things that look legit:
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
0.16% 123.0K 0.60% 4.7K [nospec-branch.h:265 -> nospec-branch.h:278] [kernel.vmlinux]
:-)
Very short system wide taken branches session:
# perf record -h -b
Usage: perf record [<options>] [<command>]
or: perf record [<options>] -- <command> [<options>]
-b, --branch-any sample any taken branches
#
# perf record -b
^C[ perf record: Woken up 595 times to write data ]
[ perf record: Captured and wrote 156.672 MB perf.data (196873 samples) ]
#
# perf evlist -v
cycles: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|CPU|PERIOD|BRANCH_STACK, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1, branch_sample_type: ANY
#
# perf report --total-cycles --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 6M of event 'cycles'
# Event count (approx.): 6299936
#
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
# ............... .............. ........... .......... ...................................................................... ....................
#
2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux]
1.75% 1.3M 8.34% 65.5K [memset-vec-unaligned-erms.S:147 -> memset-vec-unaligned-erms.S:151] libc-2.29.so
0.72% 544.5K 0.03% 230 [entry_64.S:657 -> entry_64.S:662] [kernel.vmlinux]
0.56% 541.8K 0.09% 672 [compiler.h:199 -> common.c:300] [kernel.vmlinux]
0.39% 293.2K 0.01% 104 [list_debug.c:43 -> list_debug.c:61] [kernel.vmlinux]
0.36% 278.6K 0.03% 272 [entry_64.S:1289 -> entry_64.S:1308] [kernel.vmlinux]
0.30% 260.8K 0.07% 564 [clear_page_64.S:47 -> clear_page_64.S:50] [kernel.vmlinux]
0.28% 215.3K 0.05% 369 [traps.c:623 -> traps.c:628] [kernel.vmlinux]
0.23% 178.1K 0.04% 278 [entry_64.S:271 -> entry_64.S:275] [kernel.vmlinux]
0.20% 152.6K 0.09% 706 [paravirt.c:177 -> paravirt.c:179] [kernel.vmlinux]
0.20% 155.8K 0.05% 373 [entry_64.S:153 -> entry_64.S:175] [kernel.vmlinux]
0.18% 136.6K 0.03% 222 [msr.h:105 -> msr.h:166] [kernel.vmlinux]
0.16% 123.0K 0.60% 4.7K [nospec-branch.h:265 -> nospec-branch.h:278] [kernel.vmlinux]
0.16% 118.3K 0.01% 44 [entry_64.S:632 -> entry_64.S:657] [kernel.vmlinux]
0.14% 104.5K 0.00% 28 [rwsem.c:1541 -> rwsem.c:1544] [kernel.vmlinux]
0.13% 99.2K 0.01% 53 [spinlock.c:150 -> spinlock.c:152] [kernel.vmlinux]
0.13% 95.5K 0.00% 35 [swap.c:456 -> swap.c:471] [kernel.vmlinux]
0.12% 96.2K 0.05% 407 [copy_user_64.S:175 -> copy_user_64.S:209] [kernel.vmlinux]
0.11% 85.9K 0.00% 31 [swap.c:400 -> page-flags.h:188] [kernel.vmlinux]
0.10% 73.0K 0.01% 52 [paravirt.h:763 -> list.h:131] [kernel.vmlinux]
0.07% 56.2K 0.03% 214 [filemap.c:1524 -> filemap.c:1557] [kernel.vmlinux]
0.07% 54.2K 0.02% 145 [memory.c:1032 -> memory.c:1049] [kernel.vmlinux]
0.07% 50.3K 0.00% 39 [mmzone.c:49 -> mmzone.c:69] [kernel.vmlinux]
0.06% 48.3K 0.01% 40 [paravirt.h:768 -> page_alloc.c:3304] [kernel.vmlinux]
0.06% 46.7K 0.02% 155 [memory.c:1032 -> memory.c:1056] [kernel.vmlinux]
0.06% 46.9K 0.01% 103 [swap.c:867 -> swap.c:902] [kernel.vmlinux]
0.06% 47.8K 0.00% 34 [entry_64.S:1201 -> entry_64.S:1202] [kernel.vmlinux]
-----------------------------------------------------------
v7:
---
Use use_browser in report__browse_block_hists to support
stdio and a potential TUI mode.
v6:
---
Create report__browse_block_hists in block-info.c (code is
moved from builtin-report.c). It's called from
perf_evlist__tty_browse_hists.
v5:
---
1. Move all block functions to block-info.c
2. Move the code of setting ms in block hist_entry to
another patch.
v4:
---
1. Use new option '--total-cycles' to replace
'-s total_cycles' in v3.
2. Move block info collection out of block info
printing.
v3:
---
1. Use common function block_info__process_sym to
process the blocks per symbol.
2. Remove the nasty hack for skipping calculation
of column length
3. Some minor cleanup
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20191107074719.26139-6-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-07 07:47:17 +00:00
|
|
|
bool total_cycles_mode;
|
|
|
|
struct block_report *block_reports;
|
2020-02-02 14:16:54 +00:00
|
|
|
int nr_block_reports;
|
2011-11-25 10:19:45 +00:00
|
|
|
};
|
2011-07-04 11:57:50 +00:00
|
|
|
|
2013-12-19 17:53:53 +00:00
|
|
|
static int report__config(const char *var, const char *value, void *cb)
|
2013-01-22 09:09:46 +00:00
|
|
|
{
|
2014-06-05 09:00:20 +00:00
|
|
|
struct report *rep = cb;
|
|
|
|
|
2013-01-22 09:09:46 +00:00
|
|
|
if (!strcmp(var, "report.group")) {
|
|
|
|
symbol_conf.event_group = perf_config_bool(var, value);
|
|
|
|
return 0;
|
|
|
|
}
|
2013-05-14 02:09:06 +00:00
|
|
|
if (!strcmp(var, "report.percent-limit")) {
|
2016-01-27 15:40:50 +00:00
|
|
|
double pcnt = strtof(value, NULL);
|
|
|
|
|
|
|
|
rep->min_percent = pcnt;
|
|
|
|
callchain_param.min_percent = pcnt;
|
2013-05-14 02:09:06 +00:00
|
|
|
return 0;
|
|
|
|
}
|
2013-01-22 09:09:46 +00:00
|
|
|
if (!strcmp(var, "report.children")) {
|
|
|
|
symbol_conf.cumulate_callchain = perf_config_bool(var, value);
|
|
|
|
return 0;
|
|
|
|
}
|
2017-06-27 14:44:58 +00:00
|
|
|
if (!strcmp(var, "report.queue-size"))
|
|
|
|
return perf_config_u64(&rep->queue_size, var, value);
|
|
|
|
|
2016-08-12 23:41:01 +00:00
|
|
|
if (!strcmp(var, "report.sort_order")) {
|
|
|
|
default_sort_order = strdup(value);
|
|
|
|
return 0;
|
|
|
|
}
|
2013-01-22 09:09:46 +00:00
|
|
|
|
2021-04-27 01:37:16 +00:00
|
|
|
if (!strcmp(var, "report.skip-empty")) {
|
|
|
|
rep->skip_empty = perf_config_bool(var, value);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2016-02-26 09:31:51 +00:00
|
|
|
return 0;
|
2013-01-22 09:09:46 +00:00
|
|
|
}
|
|
|
|
|
2014-01-07 08:02:25 +00:00
|
|
|
static int hist_iter__report_callback(struct hist_entry_iter *iter,
|
|
|
|
struct addr_location *al, bool single,
|
|
|
|
void *arg)
|
|
|
|
{
|
|
|
|
int err = 0;
|
|
|
|
struct report *rep = arg;
|
|
|
|
struct hist_entry *he = iter->he;
|
2019-07-21 11:23:51 +00:00
|
|
|
struct evsel *evsel = iter->evsel;
|
2017-07-20 19:28:53 +00:00
|
|
|
struct perf_sample *sample = iter->sample;
|
2014-01-07 08:02:25 +00:00
|
|
|
struct mem_info *mi;
|
|
|
|
struct branch_info *bi;
|
|
|
|
|
2018-11-30 13:54:56 +00:00
|
|
|
if (!ui__has_annotation() && !rep->symbol_ipc)
|
2014-01-07 08:02:25 +00:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
if (sort__mode == SORT_MODE__BRANCH) {
|
|
|
|
bi = he->branch_info;
|
2018-05-24 15:05:39 +00:00
|
|
|
err = addr_map_symbol__inc_samples(&bi->from, sample, evsel);
|
2014-01-07 08:02:25 +00:00
|
|
|
if (err)
|
|
|
|
goto out;
|
|
|
|
|
2018-05-24 15:05:39 +00:00
|
|
|
err = addr_map_symbol__inc_samples(&bi->to, sample, evsel);
|
2014-01-07 08:02:25 +00:00
|
|
|
|
|
|
|
} else if (rep->mem_mode) {
|
|
|
|
mi = he->mem_info;
|
2018-05-24 15:05:39 +00:00
|
|
|
err = addr_map_symbol__inc_samples(&mi->daddr, sample, evsel);
|
2014-01-07 08:02:25 +00:00
|
|
|
if (err)
|
|
|
|
goto out;
|
|
|
|
|
2018-05-24 15:05:39 +00:00
|
|
|
err = hist_entry__inc_addr_samples(he, sample, evsel, al->addr);
|
2014-01-07 08:02:25 +00:00
|
|
|
|
|
|
|
} else if (symbol_conf.cumulate_callchain) {
|
|
|
|
if (single)
|
2018-05-24 15:05:39 +00:00
|
|
|
err = hist_entry__inc_addr_samples(he, sample, evsel, al->addr);
|
2014-01-07 08:02:25 +00:00
|
|
|
} else {
|
2018-05-24 15:05:39 +00:00
|
|
|
err = hist_entry__inc_addr_samples(he, sample, evsel, al->addr);
|
2014-01-07 08:02:25 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
out:
|
|
|
|
return err;
|
2013-01-24 15:10:36 +00:00
|
|
|
}
|
|
|
|
|
2017-07-18 12:13:14 +00:00
|
|
|
static int hist_iter__branch_callback(struct hist_entry_iter *iter,
|
|
|
|
struct addr_location *al __maybe_unused,
|
|
|
|
bool single __maybe_unused,
|
|
|
|
void *arg)
|
|
|
|
{
|
|
|
|
struct hist_entry *he = iter->he;
|
|
|
|
struct report *rep = arg;
|
2020-03-13 13:46:07 +00:00
|
|
|
struct branch_info *bi = he->branch_info;
|
2017-12-26 10:42:43 +00:00
|
|
|
struct perf_sample *sample = iter->sample;
|
2019-07-21 11:23:51 +00:00
|
|
|
struct evsel *evsel = iter->evsel;
|
2017-12-26 10:42:43 +00:00
|
|
|
int err;
|
|
|
|
|
2020-03-13 13:46:07 +00:00
|
|
|
branch_type_count(&rep->brtype_stat, &bi->flags,
|
|
|
|
bi->from.addr, bi->to.addr);
|
|
|
|
|
2018-11-30 13:54:56 +00:00
|
|
|
if (!ui__has_annotation() && !rep->symbol_ipc)
|
2017-12-26 10:42:43 +00:00
|
|
|
return 0;
|
|
|
|
|
2018-05-24 15:05:39 +00:00
|
|
|
err = addr_map_symbol__inc_samples(&bi->from, sample, evsel);
|
2017-12-26 10:42:43 +00:00
|
|
|
if (err)
|
|
|
|
goto out;
|
|
|
|
|
2018-05-24 15:05:39 +00:00
|
|
|
err = addr_map_symbol__inc_samples(&bi->to, sample, evsel);
|
2017-12-26 10:42:43 +00:00
|
|
|
|
|
|
|
out:
|
|
|
|
return err;
|
2017-07-18 12:13:14 +00:00
|
|
|
}
|
|
|
|
|
2018-03-14 09:22:05 +00:00
|
|
|
static void setup_forced_leader(struct report *report,
|
2019-07-21 11:23:52 +00:00
|
|
|
struct evlist *evlist)
|
2018-03-14 09:22:05 +00:00
|
|
|
{
|
2018-05-21 14:57:45 +00:00
|
|
|
if (report->group_set)
|
2020-11-30 17:58:32 +00:00
|
|
|
evlist__force_leader(evlist);
|
2018-03-14 09:22:05 +00:00
|
|
|
}
|
|
|
|
|
2018-09-13 12:54:03 +00:00
|
|
|
static int process_feature_event(struct perf_session *session,
|
|
|
|
union perf_event *event)
|
2018-03-14 09:22:05 +00:00
|
|
|
{
|
2018-09-13 12:54:03 +00:00
|
|
|
struct report *rep = container_of(session->tool, struct report, tool);
|
2018-03-14 09:22:05 +00:00
|
|
|
|
|
|
|
if (event->feat.feat_id < HEADER_LAST_FEATURE)
|
2018-09-13 12:54:03 +00:00
|
|
|
return perf_event__process_feature(session, event);
|
2018-03-14 09:22:05 +00:00
|
|
|
|
|
|
|
if (event->feat.feat_id != HEADER_LAST_FEATURE) {
|
2019-08-28 13:57:13 +00:00
|
|
|
pr_err("failed: wrong feature ID: %" PRI_lu64 "\n",
|
2018-03-14 09:22:05 +00:00
|
|
|
event->feat.feat_id);
|
|
|
|
return -1;
|
perf report: Support --header-only for pipe mode
The --header-only option checks the file header and prints the feature
data. But as pipe mode doesn't have that data in the header, it prints
almost nothing.
Change it to process the first few records until it finds HEADER_FEATURE.
Before:
$ perf record -o- true | perf report -i- --header-only
# ========
# captured on : Thu Dec 10 14:34:59 2020
# header version : 1
# data offset : 0
# data size : 0
# feat offset : 0
# ========
#
After:
$ perf record -o- true | perf report -i- --header-only
# ========
# captured on : Thu Dec 10 14:49:11 2020
# header version : 1
# data offset : 0
# data size : 0
# feat offset : 0
# ========
#
# hostname : balhae
# os release : 5.7.17-1xxx
# perf version : 5.10.rc6.gdb0ea13cc741
# arch : x86_64
# nrcpus online : 8
# nrcpus avail : 8
# cpudesc : Intel(R) Core(TM) i7-8665U CPU @ 1.90GHz
# cpuid : GenuineIntel,6,142,12
# total memory : 16158916 kB
# cmdline : perf record -o- true
# event : name = cycles, , id = { 81, 82, 83, 84, 85, 86, 87, 88 }, size = 120, ...
# CPU_TOPOLOGY info available, use -I to display
# NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: intel_pt = 9, intel_bts = 8, software = 1, power = 20, uprobe = 7, ...
# time of first sample : 0.000000
# time of last sample : 0.000000
# sample duration : 0.000 ms
# MEM_TOPOLOGY info available, use -I to display
# cpu pmu capabilities: branches=32, max_precise=3, pmu_name=skylake
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: https://lore.kernel.org/r/20201210061302.88213-1-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-12-10 06:13:01 +00:00
|
|
|
} else if (rep->header_only) {
|
|
|
|
session_done = 1;
|
2018-03-14 09:22:05 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2018-06-25 12:42:20 +00:00
|
|
|
* (feat_id = HEADER_LAST_FEATURE) is the end marker which
|
|
|
|
* means all features are received, now we can force the
|
2018-03-14 09:22:05 +00:00
|
|
|
* group if needed.
|
|
|
|
*/
|
|
|
|
setup_forced_leader(rep, session->evlist);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2011-11-28 10:30:20 +00:00
|
|
|
static int process_sample_event(struct perf_tool *tool,
|
2011-11-25 10:19:45 +00:00
|
|
|
union perf_event *event,
|
2011-01-29 16:01:45 +00:00
|
|
|
struct perf_sample *sample,
|
2019-07-21 11:23:51 +00:00
|
|
|
struct evsel *evsel,
|
2011-11-28 09:56:39 +00:00
|
|
|
struct machine *machine)
|
2009-06-03 21:14:49 +00:00
|
|
|
{
|
2013-12-19 17:53:53 +00:00
|
|
|
struct report *rep = container_of(tool, struct report, tool);
|
perf tools: Consolidate symbol resolving across all tools
Now we have a very high level routine for simple tools to
process IP sample events:
int event__preprocess_sample(const event_t *self,
struct addr_location *al,
symbol_filter_t filter)
It receives the event itself and will insert new threads in the
global threads list and resolve the map and symbol, filling all
this info into the new addr_location struct, so that tools like
annotate and report can further process the event by creating
hist_entries in their specific way (with or without callgraphs,
etc).
It in turn uses the new next layer function:
void thread__find_addr_location(struct thread *self, u8 cpumode,
enum map_type type, u64 addr,
struct addr_location *al,
symbol_filter_t filter)
This one, given a thread (userspace or the kernel kthread
one), will find the given type (MAP__FUNCTION now, MAP__VARIABLE
too in the near future) at the given cpumode, taking vdsos into
account (userspace hit, but kernel symbol), and will fill all
these details in the addr_location given.
Tools that need a more compact API for plain function
resolution, like 'kmem', can use this other one:
struct symbol *thread__find_function(struct thread *self, u64 addr,
symbol_filter_t filter)
So, to resolve a kernel symbol, which is all the 'kmem' tool
needs, it's just a matter of calling:
sym = thread__find_function(kthread, addr, NULL);
The 'filter' parameter is needed because we do lazy
parsing/loading of ELF symtabs or /proc/kallsyms.
With this we remove more code duplication all around, which is
always good, huh? :-)
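As a hedged sketch of how a simple tool would sit on top of the routine quoted above (the function name, error handling and printout are illustrative, not taken from any tool in the tree):
#include <stdio.h>

/* Resolve one IP sample into an addr_location and print the symbol hit. */
static int process_ip_sample(const event_t *event)
{
	struct addr_location al;

	if (event__preprocess_sample(event, &al, NULL) < 0)
		return -1;	/* thread/map could not be resolved */

	if (al.sym != NULL)	/* may be NULL for unresolved addresses */
		printf("resolved to %s\n", al.sym->name);

	return 0;
}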
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: John Kacur <jkacur@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1259346563-12568-12-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-27 18:29:23 +00:00
|
|
|
struct addr_location al;
|
2013-10-30 00:40:34 +00:00
|
|
|
struct hist_entry_iter iter = {
|
2015-05-19 08:04:10 +00:00
|
|
|
.evsel = evsel,
|
|
|
|
.sample = sample,
|
2015-11-26 07:08:20 +00:00
|
|
|
.hide_unresolved = symbol_conf.hide_unresolved,
|
2015-05-19 08:04:10 +00:00
|
|
|
.add_entry_cb = hist_iter__report_callback,
|
2013-10-30 00:40:34 +00:00
|
|
|
};
|
perf machine: Protect the machine->threads with a rwlock
In addition to using refcounts for the struct thread lifetime
management, we need to protect machine->threads from concurrent
access.
That happens in 'perf top', where a thread processes events, inserting
and deleting entries from that rb_tree while another thread decays
hist_entries, which ends up dropping references and ultimately deleting
threads from the rb_tree and releasing their resources when no further
hist_entry (or other data structure, like in 'perf sched') references
them.
So the rule is the same as for refcounts + protected trees in the kernel:
get the tree lock, find the object, bump the refcount, drop the tree lock,
return, use the object, drop the refcount if no more use of it is needed,
keep it if storing it in some other data structure, and drop it when
releasing that data structure.
I.e. pair "t = machine__find(new)_thread()" with a "thread__put(t)", and
"perf_event__preprocess_sample(&al)" with "addr_location__put(&al)".
The addr_location__put() one is because as we return references to
several data structures, we may end up adding more reference counting
for the other data structures and then we'll drop it at
addr_location__put() time.
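A minimal sketch of that pairing rule (illustrative only; the function name and the 'machine', 'pid' and 'tid' arguments stand for whatever the caller already has at hand):
static int handle_sample(struct machine *machine, pid_t pid, pid_t tid)
{
	struct thread *t = machine__findnew_thread(machine, pid, tid);

	if (t == NULL)
		return -1;

	/* ... use 't': resolve maps/symbols, create hist entries, ... */

	thread__put(t);		/* drop the reference taken by the lookup */
	return 0;
}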
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-bs9rt4n0jw3hi9f3zxyy3xln@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-04-06 23:43:22 +00:00
|
|
|
int ret = 0;
|
2009-12-06 11:08:24 +00:00
|
|
|
|
2017-12-08 13:13:45 +00:00
|
|
|
if (perf_time__ranges_skip_sample(rep->ptime_range, rep->range_num,
|
|
|
|
sample->time)) {
|
perf report: Add option to specify time window of interest
Add an option to allow the user to control the analysis window, e.g. collect
data for a time window and analyze a segment of interest within that window.
Committer notes:
Testing it:
Using the perf.data file captured via 'perf kmem record':
# perf report --header-only
# ========
# captured on: Tue Nov 29 16:01:53 2016
# hostname : jouet
# os release : 4.8.8-300.fc25.x86_64
# perf version : 4.9.rc6.g5a6aca
# arch : x86_64
# nrcpus online : 4
# nrcpus avail : 4
# cpudesc : Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz
# cpuid : GenuineIntel,6,61,4
# total memory : 20254660 kB
# cmdline : /home/acme/bin/perf kmem record usleep 1
# event : name = kmem:kmalloc, , id = { 931980, 931981, 931982, 931983 }, type = 2, size = 112, config = 0x1b9, { sample_period, sample_freq } = 1, sample_typ
# event : name = kmem:kmalloc_node, , id = { 931984, 931985, 931986, 931987 }, type = 2, size = 112, config = 0x1b7, { sample_period, sample_freq } = 1, sampl
# event : name = kmem:kfree, , id = { 931988, 931989, 931990, 931991 }, type = 2, size = 112, config = 0x1b5, { sample_period, sample_freq } = 1, sample_type
# event : name = kmem:kmem_cache_alloc, , id = { 931992, 931993, 931994, 931995 }, type = 2, size = 112, config = 0x1b8, { sample_period, sample_freq } = 1, s
# event : name = kmem:kmem_cache_alloc_node, , id = { 931996, 931997, 931998, 931999 }, type = 2, size = 112, config = 0x1b6, { sample_period, sample_freq } =
# event : name = kmem:kmem_cache_free, , id = { 932000, 932001, 932002, 932003 }, type = 2, size = 112, config = 0x1b4, { sample_period, sample_freq } = 1, sa
# HEADER_CPU_TOPOLOGY info available, use -I to display
# HEADER_NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: cpu = 4, intel_pt = 7, intel_bts = 6, uncore_arb = 13, cstate_pkg = 15, breakpoint = 5, uncore_cbox_1 = 12, power = 9, software = 1, uncore_im
# HEADER_CACHE info available, use -I to display
# missing features: HEADER_BRANCH_STACK HEADER_GROUP_DESC HEADER_AUXTRACE HEADER_STAT
# ========
#
# # Looking at just the histogram entries for the first event:
#
# perf report | head -33
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 40 of event 'kmem:kmalloc'
# Event count (approx.): 40
#
# Overhead Trace output
# ........ ...............................................................................................................
#
37.50% call_site=ffffffffb91ad3c7 ptr=0xffff88895fc05000 bytes_req=4096 bytes_alloc=4096 gfp_flags=GFP_KERNEL
10.00% call_site=ffffffffb9258416 ptr=0xffff888a1dc61f00 bytes_req=240 bytes_alloc=256 gfp_flags=GFP_KERNEL|__GFP_ZERO
7.50% call_site=ffffffffb9258416 ptr=0xffff888a2640ac00 bytes_req=240 bytes_alloc=256 gfp_flags=GFP_KERNEL|__GFP_ZERO
2.50% call_site=ffffffffb92759ba ptr=0xffff888a26776000 bytes_req=4096 bytes_alloc=4096 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb9276864 ptr=0xffff8886f6b82600 bytes_req=136 bytes_alloc=192 gfp_flags=GFP_KERNEL|__GFP_ZERO
2.50% call_site=ffffffffb9276903 ptr=0xffff888aefcf0460 bytes_req=32 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad0ce ptr=0xffff888756c98a00 bytes_req=392 bytes_alloc=512 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad0ce ptr=0xffff888756c9ba00 bytes_req=504 bytes_alloc=512 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad301 ptr=0xffff888a31747600 bytes_req=128 bytes_alloc=128 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad511 ptr=0xffff888a9d26a2a0 bytes_req=28 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c11a0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c12c0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c1540 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c15a0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c15e0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c16e0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c1c20 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff888a9d26a2a0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb9373e66 ptr=0xffff8889f1931240 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_ATOMIC|__GFP_ZERO
2.50% call_site=ffffffffb9373e66 ptr=0xffff8889f1931980 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_ATOMIC|__GFP_ZERO
2.50% call_site=ffffffffb9373e66 ptr=0xffff8889f1931a00 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_ATOMIC|__GFP_ZERO
#
# # And then limiting using the example for 'perf kmem stat --time' used
# # in the previous changeset committer note we see that there were no
# # kmem:kmalloc in that last part of the file, but there were some
# # kmem:kmem_cache_alloc ones:
#
# perf report --time 20119.782088, --stdio
#
# Total Lost Samples: 0
#
# Samples: 0 of event 'kmem:kmalloc'
# Event count (approx.): 0
#
# Overhead Trace output
# ........ ............
#
# Samples: 0 of event 'kmem:kmalloc_node'
# Event count (approx.): 0
#
# Overhead Trace output
# ........ ............
#
# Samples: 0 of event 'kmem:kfree'
# Event count (approx.): 0
#
# Overhead Trace output
# ........ ............
#
# Samples: 8 of event 'kmem:kmem_cache_alloc'
# Event count (approx.): 8
#
# Overhead Trace output
# ........ ..................................................................................................................
#
75.00% call_site=ffffffffb9333b42 ptr=0xffff888bdf1a39c0 bytes_req=48 bytes_alloc=48 gfp_flags=GFP_NOFS|__GFP_ZERO
12.50% call_site=ffffffffb90ad33a ptr=0xffff8889f071f6e0 bytes_req=160 bytes_alloc=160 gfp_flags=GFP_ATOMIC|__GFP_NOTRACK
12.50% call_site=ffffffffb9287cc1 ptr=0xffff8889b12722d8 bytes_req=104 bytes_alloc=104 gfp_flags=GFP_NOFS|__GFP_ZERO
#
Signed-off-by: David Ahern <dsahern@gmail.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1480439746-42695-7-git-send-email-dsahern@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-29 17:15:46 +00:00
|
|
|
return 0;
|
2017-12-08 13:13:45 +00:00
|
|
|
}
|
perf report: Add option to specify time window of interest
Add an option to allow the user to control the analysis window, e.g. collect
data for a time window and analyze a segment of interest within that window.
Committer notes:
Testing it:
Using the perf.data file captured via 'perf kmem record':
# perf report --header-only
# ========
# captured on: Tue Nov 29 16:01:53 2016
# hostname : jouet
# os release : 4.8.8-300.fc25.x86_64
# perf version : 4.9.rc6.g5a6aca
# arch : x86_64
# nrcpus online : 4
# nrcpus avail : 4
# cpudesc : Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz
# cpuid : GenuineIntel,6,61,4
# total memory : 20254660 kB
# cmdline : /home/acme/bin/perf kmem record usleep 1
# event : name = kmem:kmalloc, , id = { 931980, 931981, 931982, 931983 }, type = 2, size = 112, config = 0x1b9, { sample_period, sample_freq } = 1, sample_typ
# event : name = kmem:kmalloc_node, , id = { 931984, 931985, 931986, 931987 }, type = 2, size = 112, config = 0x1b7, { sample_period, sample_freq } = 1, sampl
# event : name = kmem:kfree, , id = { 931988, 931989, 931990, 931991 }, type = 2, size = 112, config = 0x1b5, { sample_period, sample_freq } = 1, sample_type
# event : name = kmem:kmem_cache_alloc, , id = { 931992, 931993, 931994, 931995 }, type = 2, size = 112, config = 0x1b8, { sample_period, sample_freq } = 1, s
# event : name = kmem:kmem_cache_alloc_node, , id = { 931996, 931997, 931998, 931999 }, type = 2, size = 112, config = 0x1b6, { sample_period, sample_freq } =
# event : name = kmem:kmem_cache_free, , id = { 932000, 932001, 932002, 932003 }, type = 2, size = 112, config = 0x1b4, { sample_period, sample_freq } = 1, sa
# HEADER_CPU_TOPOLOGY info available, use -I to display
# HEADER_NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: cpu = 4, intel_pt = 7, intel_bts = 6, uncore_arb = 13, cstate_pkg = 15, breakpoint = 5, uncore_cbox_1 = 12, power = 9, software = 1, uncore_im
# HEADER_CACHE info available, use -I to display
# missing features: HEADER_BRANCH_STACK HEADER_GROUP_DESC HEADER_AUXTRACE HEADER_STAT
# ========
#
# # Looking at just the histogram entries for the first event:
#
# perf report | head -33
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 40 of event 'kmem:kmalloc'
# Event count (approx.): 40
#
# Overhead Trace output
# ........ ...............................................................................................................
#
37.50% call_site=ffffffffb91ad3c7 ptr=0xffff88895fc05000 bytes_req=4096 bytes_alloc=4096 gfp_flags=GFP_KERNEL
10.00% call_site=ffffffffb9258416 ptr=0xffff888a1dc61f00 bytes_req=240 bytes_alloc=256 gfp_flags=GFP_KERNEL|__GFP_ZERO
7.50% call_site=ffffffffb9258416 ptr=0xffff888a2640ac00 bytes_req=240 bytes_alloc=256 gfp_flags=GFP_KERNEL|__GFP_ZERO
2.50% call_site=ffffffffb92759ba ptr=0xffff888a26776000 bytes_req=4096 bytes_alloc=4096 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb9276864 ptr=0xffff8886f6b82600 bytes_req=136 bytes_alloc=192 gfp_flags=GFP_KERNEL|__GFP_ZERO
2.50% call_site=ffffffffb9276903 ptr=0xffff888aefcf0460 bytes_req=32 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad0ce ptr=0xffff888756c98a00 bytes_req=392 bytes_alloc=512 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad0ce ptr=0xffff888756c9ba00 bytes_req=504 bytes_alloc=512 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad301 ptr=0xffff888a31747600 bytes_req=128 bytes_alloc=128 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad511 ptr=0xffff888a9d26a2a0 bytes_req=28 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c11a0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c12c0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c1540 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c15a0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c15e0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c16e0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c1c20 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff888a9d26a2a0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb9373e66 ptr=0xffff8889f1931240 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_ATOMIC|__GFP_ZERO
2.50% call_site=ffffffffb9373e66 ptr=0xffff8889f1931980 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_ATOMIC|__GFP_ZERO
2.50% call_site=ffffffffb9373e66 ptr=0xffff8889f1931a00 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_ATOMIC|__GFP_ZERO
#
# # And then limiting using the example for 'perf kmem stat --time' used
# # in the previous changeset committer note we see that there were no
# # kmem:kmalloc in that last part of the file, but there were some
# # kmem:kmem_cache_alloc ones:
#
# perf report --time 20119.782088, --stdio
#
# Total Lost Samples: 0
#
# Samples: 0 of event 'kmem:kmalloc'
# Event count (approx.): 0
#
# Overhead Trace output
# ........ ............
#
# Samples: 0 of event 'kmem:kmalloc_node'
# Event count (approx.): 0
#
# Overhead Trace output
# ........ ............
#
# Samples: 0 of event 'kmem:kfree'
# Event count (approx.): 0
#
# Overhead Trace output
# ........ ............
#
# Samples: 8 of event 'kmem:kmem_cache_alloc'
# Event count (approx.): 8
#
# Overhead Trace output
# ........ ..................................................................................................................
#
75.00% call_site=ffffffffb9333b42 ptr=0xffff888bdf1a39c0 bytes_req=48 bytes_alloc=48 gfp_flags=GFP_NOFS|__GFP_ZERO
12.50% call_site=ffffffffb90ad33a ptr=0xffff8889f071f6e0 bytes_req=160 bytes_alloc=160 gfp_flags=GFP_ATOMIC|__GFP_NOTRACK
12.50% call_site=ffffffffb9287cc1 ptr=0xffff8889b12722d8 bytes_req=104 bytes_alloc=104 gfp_flags=GFP_NOFS|__GFP_ZERO
#
Signed-off-by: David Ahern <dsahern@gmail.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1480439746-42695-7-git-send-email-dsahern@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-29 17:15:46 +00:00
|
|
|
|
perf report: Add --switch-on/--switch-off events
Since 'perf top' shares the histogram browser with 'perf report',
the same explanation as in the previous cset applies.
An additional example uses a pair of SDT events available for systemtap:
# perf probe --exec=/usr/bin/stap '%*:*'
Added new events:
sdt_stap:benchmark__thread__start (on %* in /usr/bin/stap)
sdt_stap:benchmark (on %* in /usr/bin/stap)
sdt_stap:benchmark__thread__end (on %* in /usr/bin/stap)
sdt_stap:pass6__start (on %* in /usr/bin/stap)
sdt_stap:pass6__end (on %* in /usr/bin/stap)
sdt_stap:pass5__start (on %* in /usr/bin/stap)
sdt_stap:pass5__end (on %* in /usr/bin/stap)
sdt_stap:pass0__start (on %* in /usr/bin/stap)
sdt_stap:pass0__end (on %* in /usr/bin/stap)
sdt_stap:pass1a__start (on %* in /usr/bin/stap)
sdt_stap:pass1b__start (on %* in /usr/bin/stap)
sdt_stap:pass1__end (on %* in /usr/bin/stap)
sdt_stap:pass2__start (on %* in /usr/bin/stap)
sdt_stap:pass2__end (on %* in /usr/bin/stap)
sdt_stap:pass3__start (on %* in /usr/bin/stap)
sdt_stap:pass3__end (on %* in /usr/bin/stap)
sdt_stap:pass4__start (on %* in /usr/bin/stap)
sdt_stap:pass4__end (on %* in /usr/bin/stap)
sdt_stap:benchmark__start (on %* in /usr/bin/stap)
sdt_stap:benchmark__end (on %* in /usr/bin/stap)
sdt_stap:cache__get (on %* in /usr/bin/stap)
sdt_stap:cache__clean (on %* in /usr/bin/stap)
sdt_stap:cache__add__module (on %* in /usr/bin/stap)
sdt_stap:cache__add__source (on %* in /usr/bin/stap)
sdt_stap:stap_system__complete (on %* in /usr/bin/stap)
sdt_stap:stap_system__start (on %* in /usr/bin/stap)
sdt_stap:stap_system__spawn (on %* in /usr/bin/stap)
sdt_stap:stap_system__fork (on %* in /usr/bin/stap)
sdt_stap:intern_string (on %* in /usr/bin/stap)
sdt_stap:client__start (on %* in /usr/bin/stap)
sdt_stap:client__end (on %* in /usr/bin/stap)
You can now use it in all perf tools, such as:
perf record -e sdt_stap:client__end -aR sleep 1
#
From these we use the two below to run systemtap's test suite:
# perf record -e sdt_stap:pass2__*,cycles:P make installcheck > /dev/null
^C[ perf record: Woken up 8 times to write data ]
[ perf record: Captured and wrote 2.691 MB perf.data (39638 samples) ]
Terminated
# perf script | grep sdt_stap
stap 28979 [000] 19424.302660: sdt_stap:pass2__start: (561b9a537de3) arg1=140730364262544
stap 28979 [000] 19424.333083: sdt_stap:pass2__end: (561b9a53a9e1) arg1=140730364262544
stap 29045 [006] 19424.933460: sdt_stap:pass2__start: (563edddcede3) arg1=140722674883152
stap 29045 [006] 19424.963794: sdt_stap:pass2__end: (563edddd19e1) arg1=140722674883152
# perf script | grep cycles | wc -l
39634
#
Looking at the whole perf.data file:
[root@quaco testsuite]# perf report | grep cycles:P -A25
# Samples: 39K of event 'cycles:P'
# Event count (approx.): 34044267368
#
# Overhead Command Shared Object Symbol
# ........ ....... .................... ................................
#
3.50% cc1 cc1 [.] ht_lookup_with_hash
3.04% cc1 cc1 [.] _cpp_lex_token
2.11% cc1 cc1 [.] ggc_internal_alloc
1.83% cc1 cc1 [.] cpp_get_token_with_location
1.68% cc1 libc-2.29.so [.] _int_malloc
1.41% cc1 cc1 [.] linemap_position_for_column
1.25% cc1 cc1 [.] ggc_internal_cleared_alloc
1.20% cc1 cc1 [.] c_lex_with_flags
1.18% cc1 cc1 [.] get_combined_adhoc_loc
1.05% cc1 libc-2.29.so [.] malloc
1.01% cc1 libc-2.29.so [.] _int_free
0.96% stap stap [.] std::_Hashtable<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::__detail::_Identity, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, stringtable_hash, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, true, true> >::_M_insert<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__detail::_AllocNode<std::allocator<std::__detail::_Hash_node<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, true> > > >
0.78% stap stap [.] lexer::scan
0.74% cc1 cc1 [.] _cpp_lex_direct
0.70% cc1 cc1 [.] pop_scope
0.70% cc1 cc1 [.] c_parser_declspecs
0.69% stap libc-2.29.so [.] _int_malloc
0.68% cc1 cc1 [.] htab_find_slot
0.68% cc1 [kernel.vmlinux] [k] prepare_exit_to_usermode
0.64% cc1 [kernel.vmlinux] [k] clear_page_erms
[root@quaco testsuite]#
And now only what happens in slices demarcated by those start/end SDT
events:
[root@quaco testsuite]# perf report --switch-on=sdt_stap:pass2__start --switch-off=sdt_stap:pass2__end | grep cycles:P -A100
# Samples: 240 of event 'cycles:P'
# Event count (approx.): 206491934
#
# Overhead Command Shared Object Symbol
# ........ ....... ................... ................................................
#
38.99% stap stap [.] systemtap_session::register_library_aliases
19.47% stap stap [.] match_key::operator<
15.01% stap libc-2.29.so [.] __memcmp_avx2_movbe
5.19% stap libc-2.29.so [.] _int_malloc
2.50% stap libstdc++.so.6.0.26 [.] std::_Rb_tree_insert_and_rebalance
2.30% stap stap [.] match_node::build_no_more
2.07% stap libc-2.29.so [.] malloc
1.66% stap stap [.] std::_Rb_tree<match_key, std::pair<match_key const, match_node*>, std::_Select1st<std::pair<match_key const, match_node*> >, std::less<match_key>, std::allocator<std::pair<match_key const, match_node*> > >::find
1.66% stap stap [.] match_node::bind
1.58% stap [kernel.vmlinux] [k] prepare_exit_to_usermode
1.17% stap [kernel.vmlinux] [k] native_irq_return_iret
0.87% stap stap [.] 0x0000000000032ec4
0.77% stap libstdc++.so.6.0.26 [.] std::_Rb_tree_increment
0.47% stap stap [.] std::vector<derived_probe_builder*, std::allocator<derived_probe_builder*> >::_M_realloc_insert<derived_probe_builder* const&>
0.47% stap [kernel.vmlinux] [k] get_page_from_freelist
0.47% stap [kernel.vmlinux] [k] swapgs_restore_regs_and_return_to_usermode
0.47% stap [kernel.vmlinux] [k] do_user_addr_fault
0.46% stap [kernel.vmlinux] [k] __pagevec_lru_add_fn
0.46% stap stap [.] std::_Rb_tree<match_key, std::pair<match_key const, match_node*>, std::_Select1st<std::pair<match_key const, match_node*> >, std::less<match_key>, std::allocator<std::pair<match_key const, match_node*> > >::_M_emplace_unique<std::pair<match_key, match_node*> >
0.42% stap libstdc++.so.6.0.26 [.] 0x00000000000c18fa
0.40% stap [kernel.vmlinux] [k] interrupt_entry
0.40% stap [kernel.vmlinux] [k] update_load_avg
0.40% stap [kernel.vmlinux] [k] __intel_pmu_disable_all
0.40% stap [kernel.vmlinux] [k] clear_page_erms
0.39% stap [kernel.vmlinux] [k] __mod_node_page_state
0.39% stap [kernel.vmlinux] [k] error_entry
0.39% stap [kernel.vmlinux] [k] sync_regs
0.38% stap [kernel.vmlinux] [k] __handle_mm_fault
0.38% stap stap [.] derive_probes
#
# (Tip: System-wide collection from all CPUs: perf record -a)
#
[root@quaco testsuite]#
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: William Cohen <wcohen@redhat.com>
Link: https://lkml.kernel.org/n/tip-408hvumcnyn93a0auihnawew@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-08-15 21:18:58 +00:00
|
|
|
if (evswitch__discard(&rep->evswitch, evsel))
|
|
|
|
return 0;
|
|
|
|
|
2016-03-22 21:39:09 +00:00
|
|
|
if (machine__resolve(machine, &al, sample) < 0) {
|
2013-12-20 05:11:12 +00:00
|
|
|
pr_debug("problem processing %d event, skipping it.\n",
|
|
|
|
event->header.type);
|
2009-06-03 21:14:49 +00:00
|
|
|
return -1;
|
|
|
|
}
|
2009-05-27 18:20:24 +00:00
|
|
|
|
2020-03-19 20:25:13 +00:00
|
|
|
if (rep->stitch_lbr)
|
|
|
|
al.thread->lbr_stitch_enable = true;
|
|
|
|
|
2015-11-26 07:08:20 +00:00
|
|
|
if (symbol_conf.hide_unresolved && al.sym == NULL)
|
perf machine: Protect the machine->threads with a rwlock
In addition to using refcounts for the struct thread lifetime
management, we need to protect machine->threads from concurrent
access.
That happens in 'perf top', where a thread processes events, inserting
and deleting entries from that rb_tree while another thread decays
hist_entries, which ends up dropping references and ultimately deleting
threads from the rb_tree and releasing their resources when no further
hist_entry (or other data structure, like in 'perf sched') references
them.
So the rule is the same as for refcounts + protected trees in the kernel:
get the tree lock, find the object, bump the refcount, drop the tree lock,
return, use the object, drop the refcount if no more use of it is needed,
keep it if storing it in some other data structure, and drop it when
releasing that data structure.
I.e. pair "t = machine__find(new)_thread()" with a "thread__put(t)", and
"perf_event__preprocess_sample(&al)" with "addr_location__put(&al)".
The addr_location__put() one is because as we return references to
several data structures, we may end up adding more reference counting
for the other data structures and then we'll drop it at
addr_location__put() time.
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-bs9rt4n0jw3hi9f3zxyy3xln@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-04-06 23:43:22 +00:00
|
|
|
goto out_put;
|
2009-06-30 22:01:22 +00:00
|
|
|
|
2011-11-17 14:19:04 +00:00
|
|
|
if (rep->cpu_list && !test_bit(sample->cpu, rep->cpu_bitmap))
|
perf machine: Protect the machine->threads with a rwlock
In addition to using refcounts for the struct thread lifetime
management, we need to protect machine->threads from concurrent
access.
That happens in 'perf top', where a thread processes events, inserting
and deleting entries from that rb_tree while another thread decays
hist_entries, which ends up dropping references and ultimately deleting
threads from the rb_tree and releasing their resources when no further
hist_entry (or other data structure, like in 'perf sched') references
them.
So the rule is the same as for refcounts + protected trees in the kernel:
get the tree lock, find the object, bump the refcount, drop the tree lock,
return, use the object, drop the refcount if no more use of it is needed,
keep it if storing it in some other data structure, and drop it when
releasing that data structure.
I.e. pair "t = machine__find(new)_thread()" with a "thread__put(t)", and
"perf_event__preprocess_sample(&al)" with "addr_location__put(&al)".
The addr_location__put() one is because as we return references to
several data structures, we may end up adding more reference counting
for the other data structures and then we'll drop it at
addr_location__put() time.
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-bs9rt4n0jw3hi9f3zxyy3xln@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-04-06 23:43:22 +00:00
|
|
|
goto out_put;
|
2011-07-04 11:57:50 +00:00
|
|
|
|
2015-09-25 13:15:42 +00:00
|
|
|
if (sort__mode == SORT_MODE__BRANCH) {
|
|
|
|
/*
|
|
|
|
* A non-synthesized event might not have a branch stack if
|
|
|
|
* branch stacks have been synthesized (using itrace options).
|
|
|
|
*/
|
|
|
|
if (!sample->branch_stack)
|
|
|
|
goto out_put;
|
2017-07-18 12:13:14 +00:00
|
|
|
|
|
|
|
iter.add_entry_cb = hist_iter__branch_callback;
|
2013-10-30 00:40:34 +00:00
|
|
|
iter.ops = &hist_iter_branch;
|
2015-09-25 13:15:42 +00:00
|
|
|
} else if (rep->mem_mode) {
|
2013-10-30 00:40:34 +00:00
|
|
|
iter.ops = &hist_iter_mem;
|
2015-09-25 13:15:42 +00:00
|
|
|
} else if (symbol_conf.cumulate_callchain) {
|
2012-09-11 05:13:04 +00:00
|
|
|
iter.ops = &hist_iter_cumulative;
|
2015-09-25 13:15:42 +00:00
|
|
|
} else {
|
2013-10-30 00:40:34 +00:00
|
|
|
iter.ops = &hist_iter_normal;
|
2015-09-25 13:15:42 +00:00
|
|
|
}
|
2013-10-30 00:40:34 +00:00
|
|
|
|
|
|
|
if (al.map != NULL)
|
|
|
|
al.map->dso->hit = 1;
|
|
|
|
|
perf report: Sort by sampled cycles percent per block for stdio
It would be useful to support sorting all blocks by the sampled
cycles percent per block. This helps to concentrate on the globally
hottest blocks.
This patch implements a new option "--total-cycles" which sorts all
blocks by 'Sampled Cycles%'. The 'Sampled Cycles%' is the percent:
percent = block sampled cycles aggregation / total sampled cycles
Note that this patch only supports "--stdio" mode.
For example,
# perf record -b ./div
# perf report --total-cycles --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 2M of event 'cycles'
# Event count (approx.): 2753248
#
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
# ............... .............. ........... .......... ................................................ .................
#
26.04% 2.8M 0.40% 18 [div.c:42 -> div.c:39] div
15.17% 1.2M 0.16% 7 [random_r.c:357 -> random_r.c:380] libc-2.27.so
5.11% 402.0K 0.04% 2 [div.c:27 -> div.c:28] div
4.87% 381.6K 0.04% 2 [random.c:288 -> random.c:291] libc-2.27.so
4.53% 381.0K 0.04% 2 [div.c:40 -> div.c:40] div
3.85% 300.9K 0.02% 1 [div.c:22 -> div.c:25] div
3.08% 241.1K 0.02% 1 [rand.c:26 -> rand.c:27] libc-2.27.so
3.06% 240.0K 0.02% 1 [random.c:291 -> random.c:291] libc-2.27.so
2.78% 215.7K 0.02% 1 [random.c:298 -> random.c:298] libc-2.27.so
2.52% 198.3K 0.02% 1 [random.c:293 -> random.c:293] libc-2.27.so
2.36% 184.8K 0.02% 1 [rand.c:28 -> rand.c:28] libc-2.27.so
2.33% 180.5K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so
2.28% 176.7K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so
2.20% 168.8K 0.02% 1 [rand@plt+0 -> rand@plt+0] div
1.98% 158.2K 0.02% 1 [random_r.c:388 -> random_r.c:388] libc-2.27.so
1.57% 123.3K 0.02% 1 [div.c:42 -> div.c:44] div
1.44% 116.0K 0.42% 19 [random_r.c:357 -> random_r.c:394] libc-2.27.so
0.25% 182.5K 0.02% 1 [random_r.c:388 -> random_r.c:391] libc-2.27.so
0.00% 48 1.07% 48 [x86_pmu_enable+284 -> x86_pmu_enable+298] [kernel.kallsyms]
0.00% 74 1.64% 74 [vm_mmap_pgoff+0 -> vm_mmap_pgoff+92] [kernel.kallsyms]
0.00% 73 1.62% 73 [vm_mmap+0 -> vm_mmap+48] [kernel.kallsyms]
0.00% 63 0.69% 31 [up_write+0 -> up_write+34] [kernel.kallsyms]
0.00% 13 0.29% 13 [setup_arg_pages+396 -> setup_arg_pages+413] [kernel.kallsyms]
0.00% 3 0.07% 3 [setup_arg_pages+418 -> setup_arg_pages+450] [kernel.kallsyms]
0.00% 616 6.84% 308 [security_mmap_file+0 -> security_mmap_file+72] [kernel.kallsyms]
0.00% 23 0.51% 23 [security_mmap_file+77 -> security_mmap_file+87] [kernel.kallsyms]
0.00% 4 0.02% 1 [sched_clock+0 -> sched_clock+4] [kernel.kallsyms]
0.00% 4 0.02% 1 [sched_clock+9 -> sched_clock+12] [kernel.kallsyms]
0.00% 1 0.02% 1 [rcu_nmi_exit+0 -> rcu_nmi_exit+9] [kernel.kallsyms]
Committer testing:
This should provide material for hours of endless joy, both from looking
for suspicious things in the implementation of this patch, such as the
top one:
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux]
As well as from things that look legit:
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
0.16% 123.0K 0.60% 4.7K [nospec-branch.h:265 -> nospec-branch.h:278] [kernel.vmlinux]
:-)
Very short system wide taken branches session:
# perf record -h -b
Usage: perf record [<options>] [<command>]
or: perf record [<options>] -- <command> [<options>]
-b, --branch-any sample any taken branches
#
# perf record -b
^C[ perf record: Woken up 595 times to write data ]
[ perf record: Captured and wrote 156.672 MB perf.data (196873 samples) ]
#
# perf evlist -v
cycles: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|CPU|PERIOD|BRANCH_STACK, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1, branch_sample_type: ANY
#
# perf report --total-cycles --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 6M of event 'cycles'
# Event count (approx.): 6299936
#
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
# ............... .............. ........... .......... ...................................................................... ....................
#
2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux]
1.75% 1.3M 8.34% 65.5K [memset-vec-unaligned-erms.S:147 -> memset-vec-unaligned-erms.S:151] libc-2.29.so
0.72% 544.5K 0.03% 230 [entry_64.S:657 -> entry_64.S:662] [kernel.vmlinux]
0.56% 541.8K 0.09% 672 [compiler.h:199 -> common.c:300] [kernel.vmlinux]
0.39% 293.2K 0.01% 104 [list_debug.c:43 -> list_debug.c:61] [kernel.vmlinux]
0.36% 278.6K 0.03% 272 [entry_64.S:1289 -> entry_64.S:1308] [kernel.vmlinux]
0.30% 260.8K 0.07% 564 [clear_page_64.S:47 -> clear_page_64.S:50] [kernel.vmlinux]
0.28% 215.3K 0.05% 369 [traps.c:623 -> traps.c:628] [kernel.vmlinux]
0.23% 178.1K 0.04% 278 [entry_64.S:271 -> entry_64.S:275] [kernel.vmlinux]
0.20% 152.6K 0.09% 706 [paravirt.c:177 -> paravirt.c:179] [kernel.vmlinux]
0.20% 155.8K 0.05% 373 [entry_64.S:153 -> entry_64.S:175] [kernel.vmlinux]
0.18% 136.6K 0.03% 222 [msr.h:105 -> msr.h:166] [kernel.vmlinux]
0.16% 123.0K 0.60% 4.7K [nospec-branch.h:265 -> nospec-branch.h:278] [kernel.vmlinux]
0.16% 118.3K 0.01% 44 [entry_64.S:632 -> entry_64.S:657] [kernel.vmlinux]
0.14% 104.5K 0.00% 28 [rwsem.c:1541 -> rwsem.c:1544] [kernel.vmlinux]
0.13% 99.2K 0.01% 53 [spinlock.c:150 -> spinlock.c:152] [kernel.vmlinux]
0.13% 95.5K 0.00% 35 [swap.c:456 -> swap.c:471] [kernel.vmlinux]
0.12% 96.2K 0.05% 407 [copy_user_64.S:175 -> copy_user_64.S:209] [kernel.vmlinux]
0.11% 85.9K 0.00% 31 [swap.c:400 -> page-flags.h:188] [kernel.vmlinux]
0.10% 73.0K 0.01% 52 [paravirt.h:763 -> list.h:131] [kernel.vmlinux]
0.07% 56.2K 0.03% 214 [filemap.c:1524 -> filemap.c:1557] [kernel.vmlinux]
0.07% 54.2K 0.02% 145 [memory.c:1032 -> memory.c:1049] [kernel.vmlinux]
0.07% 50.3K 0.00% 39 [mmzone.c:49 -> mmzone.c:69] [kernel.vmlinux]
0.06% 48.3K 0.01% 40 [paravirt.h:768 -> page_alloc.c:3304] [kernel.vmlinux]
0.06% 46.7K 0.02% 155 [memory.c:1032 -> memory.c:1056] [kernel.vmlinux]
0.06% 46.9K 0.01% 103 [swap.c:867 -> swap.c:902] [kernel.vmlinux]
0.06% 47.8K 0.00% 34 [entry_64.S:1201 -> entry_64.S:1202] [kernel.vmlinux]
-----------------------------------------------------------
v7:
---
Use use_browser in report__browse_block_hists for supporting
stdio and potential tui mode.
v6:
---
Create report__browse_block_hists in block-info.c (codes are
moved from builtin-report.c). It's called from
perf_evlist__tty_browse_hists.
v5:
---
1. Move all block functions to block-info.c
2. Move the code of setting ms in block hist_entry to
other patch.
v4:
---
1. Use new option '--total-cycles' to replace
'-s total_cycles' in v3.
2. Move block info collection out of block info
printing.
v3:
---
1. Use common function block_info__process_sym to
process the blocks per symbol.
2. Remove the nasty hack for skipping calculation
of column length
3. Some minor cleanup
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20191107074719.26139-6-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-07 07:47:17 +00:00
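For reference, the 'Sampled Cycles%' column shown above is just the ratio from the formula in the message. A minimal standalone sketch of that computation follows; the function name and types are illustrative, not the actual block-info.c code:

#include <stdint.h>

/* Illustrative only: percent of all sampled cycles attributed to one block. */
static double block_sampled_cycles_percent(uint64_t block_cycles, uint64_t total_cycles)
{
	/* percent = block sampled cycles aggregation / total sampled cycles */
	return total_cycles ? 100.0 * (double)block_cycles / (double)total_cycles : 0.0;
}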
	if (ui__has_annotation() || rep->symbol_ipc || rep->total_cycles_mode) {
		hist__account_cycles(sample->branch_stack, &al, sample,
				     rep->nonany_branch_mode,
				     &rep->total_cycles);
	}

	ret = hist_entry_iter__add(&iter, &al, rep->max_stack, rep);
	if (ret < 0)
		pr_debug("problem adding hist entry, skipping event\n");
perf machine: Protect the machine->threads with a rwlock
In addition to using refcounts for the struct thread lifetime
management, we need to protect access to machine->threads from
concurrent access.
That happens in 'perf top', where a thread processes events, inserting
and deleting entries from that rb_tree while another thread decays
hist_entries, that end up dropping references and ultimately deleting
threads from the rb_tree and releasing its resources when no further
hist_entry (or other data structures, like in 'perf sched') references
it.
So the rule is the same as for refcounts + protected trees in the kernel:
get the tree lock, find object, bump the refcount, drop the tree lock,
return, use object, drop the refcount if no more use of it is needed,
keep it if storing it in some other data structure, drop when releasing
that data structure.
I.e. pair "t = machine__find(new)_thread()" with a "thread__put(t)", and
"perf_event__preprocess_sample(&al)" with "addr_location__put(&al)".
The addr_location__put() one is because as we return references to
several data structures, we may end up adding more reference counting
for the other data structures and then we'll drop it at
addr_location__put() time.
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-bs9rt4n0jw3hi9f3zxyy3xln@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-04-06 23:43:22 +00:00
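A self-contained sketch of the lock + refcount rule described above; the names (struct obj, tree_lock, obj_get/obj_put) are hypothetical stand-ins for the machine/thread helpers, and the real code uses an atomic refcount:

#include <pthread.h>
#include <stdlib.h>

/* Hypothetical refcounted object, standing in for perf's struct thread. */
struct obj {
	int key;
	int refcnt;
	struct obj *next;
};

static pthread_rwlock_t tree_lock = PTHREAD_RWLOCK_INITIALIZER;
static struct obj *objects;	/* stand-in for the machine->threads tree */

static void obj_get(struct obj *o)
{
	o->refcnt++;	/* illustrative; the real code uses atomic ops */
}

static void obj_put(struct obj *o)
{
	if (--o->refcnt == 0)
		free(o);
}

/* The rule: take the tree lock, find the object, bump the refcount, drop the lock. */
static struct obj *obj_lookup_ref(int key)
{
	struct obj *o;

	pthread_rwlock_rdlock(&tree_lock);
	for (o = objects; o; o = o->next)
		if (o->key == key)
			break;
	if (o)
		obj_get(o);
	pthread_rwlock_unlock(&tree_lock);
	return o;	/* caller pairs this with obj_put() when done */
}

Deletion takes the write side of the lock before unlinking a node, and the final obj_put() frees it only once no other reference remains.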
out_put:
	addr_location__put(&al);
	return ret;
}

static int process_read_event(struct perf_tool *tool,
			      union perf_event *event,
			      struct perf_sample *sample __maybe_unused,
			      struct evsel *evsel,
			      struct machine *machine __maybe_unused)
{
	struct report *rep = container_of(tool, struct report, tool);

	if (rep->show_threads) {
		const char *name = evsel__name(evsel);
		int err = perf_read_values_add_value(&rep->show_threads_values,
						     event->read.pid, event->read.tid,
						     evsel->core.idx,
						     name,
						     event->read.value);

		if (err)
			return err;
	}

	return 0;
}
/* For pipe mode, sample_type is not currently set */
static int report__setup_sample_type(struct report *rep)
{
	struct perf_session *session = rep->session;
	u64 sample_type = evlist__combined_sample_type(session->evlist);
	bool is_pipe = perf_data__is_pipe(session->data);
	struct evsel *evsel;

	if (session->itrace_synth_opts->callchain ||
	    session->itrace_synth_opts->add_callchain ||
	    (!is_pipe &&
	     perf_header__has_feat(&session->header, HEADER_AUXTRACE) &&
	     !session->itrace_synth_opts->set))
		sample_type |= PERF_SAMPLE_CALLCHAIN;

	if (session->itrace_synth_opts->last_branch ||
	    session->itrace_synth_opts->add_last_branch)
		sample_type |= PERF_SAMPLE_BRANCH_STACK;

	if (!is_pipe && !(sample_type & PERF_SAMPLE_CALLCHAIN)) {
		if (perf_hpp_list.parent) {
			ui__error("Selected --sort parent, but no "
				  "callchain data. Did you call "
				  "'perf record' without -g?\n");
			return -EINVAL;
		}
perf report: Make --branch-history work without callgraphs(-g) option in perf record
perf record -b -g <command>
perf report --branch-history
This merges the LBRs with the callgraphs.
However it would be nice if it also works without callgraphs (-g) set in
perf record, so that only the LBRs are displayed. But currently perf
report errors in this case. For example,
perf record -b <command>
perf report --branch-history
Error:
Selected -g or --branch-history but no callchain data. Did
you call 'perf record' without -g?
This patch displays the LBRs only even if callgraphs(-g) is not enabled
in perf record.
Change log:
v2: According to Milian Wolff's comment, change the obsolete error
message. Now the error message is:
┌─Error:─────────────────────────────────────┐
│Selected -g or --branch-history. │
│But no callchain or branch data. │
│Did you call 'perf record' without -g or -b?│
│ │
│ │
│Press any key... │
└────────────────────────────────────────────┘
When passing the last parameter to hists__fprintf,
change "|" to "||".
hists__fprintf(hists, !quiet, 0, 0, rep->min_percent, stdout,
symbol_conf.use_callchain || symbol_conf.show_branchflag_count);
Signed-off-by: Yao Jin <yao.jin@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1494240182-28899-1-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-05-08 10:43:02 +00:00
		if (symbol_conf.use_callchain &&
		    !symbol_conf.show_branchflag_count) {
			ui__error("Selected -g or --branch-history.\n"
				  "But no callchain or branch data.\n"
				  "Did you call 'perf record' without -g or -b?\n");
			return -1;
		}
	} else if (!callchain_param.enabled &&
		   callchain_param.mode != CHAIN_NONE &&
		   !symbol_conf.use_callchain) {
		symbol_conf.use_callchain = true;
		if (callchain_register_param(&callchain_param) < 0) {
			ui__error("Can't register callchain params.\n");
			return -EINVAL;
		}
	}

	if (symbol_conf.cumulate_callchain) {
		/* Silently ignore if callchain is missing */
		if (!(sample_type & PERF_SAMPLE_CALLCHAIN)) {
			symbol_conf.cumulate_callchain = false;
			perf_hpp__cancel_cumulate();
		}
	}

	if (sort__mode == SORT_MODE__BRANCH) {
		if (!is_pipe &&
		    !(sample_type & PERF_SAMPLE_BRANCH_STACK)) {
			ui__error("Selected -b but no branch data. "
				  "Did you call perf record without -b?\n");
perf report: Add support for taken branch sampling
This patch adds support for taken branch sampling, i.e, the
PERF_SAMPLE_BRANCH_STACK feature to perf report. In other
words, to display histograms based on taken branches rather
than executed instructions addresses.
The new option is called -b and it takes no argument. To
generate meaningful output, the perf.data must have been
obtained using perf record -b xxx ... where xxx is a branch
filter option.
The output shows symbols, modules, sorted by 'who branches
where' the most often. The percentages reported in the first
column refer to the total number of branches captured and
not the usual number of samples.
Here is a quick example.
Here branchy is a simple test program which looks as follows:
void f2(void)
{}
void f3(void)
{}
void f1(unsigned long n)
{
if (n & 1UL)
f2();
else
f3();
}
int main(void)
{
unsigned long i;
for (i=0; i < N; i++)
f1(i);
return 0;
}
Here is the output captured on Nehalem, if we are
only interested in user level function calls.
$ perf record -b any_call,u -e cycles:u branchy
$ perf report -b --sort=symbol
52.34% [.] main [.] f1
24.04% [.] f1 [.] f3
23.60% [.] f1 [.] f2
0.01% [k] _IO_new_file_xsputn [k] _IO_file_overflow
0.01% [k] _IO_vfprintf_internal [k] _IO_new_file_xsputn
0.01% [k] _IO_vfprintf_internal [k] strchrnul
0.01% [k] __printf [k] _IO_vfprintf_internal
0.01% [k] main [k] __printf
About half (52%) of the call branches captured are from main()
-> f1(). The second half (24%+23%) is split in two equal shares
between f1() -> f2(), f1() -> f3(). The output is as expected
given the code.
It should be noted that using -b in perf record does not
eliminate information in the perf.data file. Consequently, a
typical profile can also be obtained by perf report by simply
not using its -b option.
It is possible to sort on branch related columns:
- dso_from, symbol_from
- dso_to, symbol_to
- mispredict
Signed-off-by: Roberto Agostino Vitillo <ravitillo@lbl.gov>
Signed-off-by: Stephane Eranian <eranian@google.com>
Cc: peterz@infradead.org
Cc: acme@redhat.com
Cc: robert.richter@amd.com
Cc: ming.m.lin@intel.com
Cc: andi@firstfloor.org
Cc: asharma@fb.com
Cc: vweaver1@eecs.utk.edu
Cc: khandual@linux.vnet.ibm.com
Cc: dsahern@gmail.com
Link: http://lkml.kernel.org/r/1328826068-11713-14-git-send-email-eranian@google.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2012-02-09 22:21:03 +00:00
			return -1;
		}
	}

	if (sort__mode == SORT_MODE__MEMORY) {
		/*
		 * FIXUP: prior to kernel 5.18, Arm SPE missed to set
		 * PERF_SAMPLE_DATA_SRC bit in sample type. For backward
		 * compatibility, set the bit if it's an old perf data file.
		 */
		evlist__for_each_entry(session->evlist, evsel) {
			if (strstr(evsel->name, "arm_spe") &&
			    !(sample_type & PERF_SAMPLE_DATA_SRC)) {
				evsel->core.attr.sample_type |= PERF_SAMPLE_DATA_SRC;
				sample_type |= PERF_SAMPLE_DATA_SRC;
			}
		}

		if (!is_pipe && !(sample_type & PERF_SAMPLE_DATA_SRC)) {
			ui__error("Selected --mem-mode but no mem data. "
				  "Did you call perf record without -d?\n");
			return -1;
		}
	}
	callchain_param_setup(sample_type, perf_env__arch(&rep->session->header.env));

	if (rep->stitch_lbr && (callchain_param.record_mode != CALLCHAIN_LBR)) {
		ui__warning("Can't find LBR callchain. Switch off --stitch-lbr.\n"
			    "Please apply --call-graph lbr when recording.\n");
		rep->stitch_lbr = false;
	}

	/* ??? handle more cases than just ANY? */
	if (!(evlist__combined_branch_type(session->evlist) & PERF_SAMPLE_BRANCH_ANY))
		rep->nonany_branch_mode = true;

#if !defined(HAVE_LIBUNWIND_SUPPORT) && !defined(HAVE_DWARF_SUPPORT)
	if (dwarf_callchain_users) {
		ui__warning("Please install libunwind or libdw "
			    "development packages during the perf build.\n");
	}
#endif

	return 0;
}

static void sig_handler(int sig __maybe_unused)
{
	session_done = 1;
}
static size_t hists__fprintf_nr_sample_events(struct hists *hists, struct report *rep,
					      const char *evname, FILE *fp)
{
	size_t ret;
	char unit;
	unsigned long nr_samples = hists->stats.nr_samples;
	u64 nr_events = hists->stats.total_period;
	struct evsel *evsel = hists_to_evsel(hists);
	char buf[512];
	size_t size = sizeof(buf);
	int socked_id = hists->socket_filter;

	if (quiet)
		return 0;

	if (symbol_conf.filter_relative) {
		nr_samples = hists->stats.nr_non_filtered_samples;
		nr_events = hists->stats.total_non_filtered_period;
	}

	if (evsel__is_group_event(evsel)) {
		struct evsel *pos;

		evsel__group_desc(evsel, buf, size);
		evname = buf;

		for_each_group_member(pos, evsel) {
			const struct hists *pos_hists = evsel__hists(pos);

			if (symbol_conf.filter_relative) {
				nr_samples += pos_hists->stats.nr_non_filtered_samples;
				nr_events += pos_hists->stats.total_non_filtered_period;
			} else {
				nr_samples += pos_hists->stats.nr_samples;
				nr_events += pos_hists->stats.total_period;
			}
		}
	}

	nr_samples = convert_unit(nr_samples, &unit);
	ret = fprintf(fp, "# Samples: %lu%c", nr_samples, unit);
	if (evname != NULL) {
		ret += fprintf(fp, " of event%s '%s'",
			       evsel->core.nr_members > 1 ? "s" : "", evname);
	}

	if (rep->time_str)
		ret += fprintf(fp, " (time slices: %s)", rep->time_str);

	if (symbol_conf.show_ref_callgraph && evname && strstr(evname, "call-graph=no")) {
		ret += fprintf(fp, ", show reference callgraph");
	}

	if (rep->mem_mode) {
		ret += fprintf(fp, "\n# Total weight : %" PRIu64, nr_events);
perf tools: Remove (null) value of "Sort order" for perf mem report
When '--sort' is not set, 'perf mem report' will print a null pointer as
the output value of sort order, so fix it.
Example:
Before this patch:
$ perf mem report
# To display the perf.data header info, please use --header/--header-only options.
#
# Samples: 18 of event 'cpu/mem-loads/pp'
# Total weight : 188
# Sort order : (null)
#
...
After this patch:
$ perf mem report
# To display the perf.data header info, please use --header/--header-only options.
#
# Samples: 18 of event 'cpu/mem-loads/pp'
# Total weight : 188
# Sort order : local_weight,mem,sym,dso,symbol_daddr,dso_daddr,snoop,tlb,locked
#
...
Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1427082605-12881-1-git-send-email-yunlong.song@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-03-23 03:50:05 +00:00
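The one-line fix that follows relies on GNU C's two-operand conditional: 'a ? : b' evaluates to a when a is non-NULL (non-zero), otherwise to b, so a missing sort_order falls back to the default. A tiny standalone illustration, with a shortened, hypothetical default string:

#include <stdio.h>

int main(void)
{
	const char *sort_order = NULL;	/* --sort was not given */
	const char *default_mem_sort_order = "local_weight,mem,sym,dso";	/* shortened for the example */

	/* Prints the default because sort_order is NULL (GNU extension). */
	printf("# Sort order : %s\n", sort_order ? : default_mem_sort_order);
	return 0;
}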
		ret += fprintf(fp, "\n# Sort order : %s", sort_order ? : default_mem_sort_order);
	} else
		ret += fprintf(fp, "\n# Event count (approx.): %" PRIu64, nr_events);

	if (socked_id > -1)
		ret += fprintf(fp, "\n# Processor Socket: %d", socked_id);

	return ret + fprintf(fp, "\n#\n");
}

static int evlist__tui_block_hists_browse(struct evlist *evlist, struct report *rep)
{
	struct evsel *pos;
	int i = 0, ret;

	evlist__for_each_entry(evlist, pos) {
		ret = report__browse_block_hists(&rep->block_reports[i++].hist,
						 rep->min_percent, pos,
						 &rep->session->header.env,
						 &rep->annotation_opts);
		if (ret != 0)
			return ret;
	}

	return 0;
}

static int evlist__tty_browse_hists(struct evlist *evlist, struct report *rep, const char *help)
{
	struct evsel *pos;
	int i = 0;

	if (!quiet) {
		fprintf(stdout, "#\n# Total Lost Samples: %" PRIu64 "\n#\n",
			evlist->stats.total_lost_samples);
	}

	evlist__for_each_entry(evlist, pos) {
		struct hists *hists = evsel__hists(pos);
		const char *evname = evsel__name(pos);

		if (symbol_conf.event_group && !evsel__is_group_leader(pos))
			continue;

		if (rep->skip_empty && !hists->stats.nr_samples)
			continue;

		hists__fprintf_nr_sample_events(hists, rep, evname, stdout);
		if (rep->total_cycles_mode) {
			report__browse_block_hists(&rep->block_reports[i++].hist,
						   rep->min_percent, pos,
						   NULL, NULL);
			continue;
		}

		hists__fprintf(hists, !quiet, 0, 0, rep->min_percent, stdout,
			       !(symbol_conf.use_callchain ||
				 symbol_conf.show_branchflag_count));
		fprintf(stdout, "\n\n");
	}

	if (!quiet)
		fprintf(stdout, "#\n# (%s)\n#\n", help);

	if (rep->show_threads) {
		bool style = !strcmp(rep->pretty_printing_style, "raw");
		perf_read_values_display(stdout, &rep->show_threads_values,
					 style);
		perf_read_values_destroy(&rep->show_threads_values);
	}

	if (sort__mode == SORT_MODE__BRANCH)
		branch_type_stat_display(stdout, &rep->brtype_stat);

	return 0;
}
static void report__warn_kptr_restrict(const struct report *rep)
{
	struct map *kernel_map = machine__kernel_map(&rep->session->machines.host);
	struct kmap *kernel_kmap = kernel_map ? map__kmap(kernel_map) : NULL;

	if (evlist__exclude_kernel(rep->session->evlist))
		return;

	if (kernel_map == NULL ||
	    (kernel_map->dso->hit &&
	     (kernel_kmap->ref_reloc_sym == NULL ||
	      kernel_kmap->ref_reloc_sym->addr == 0))) {
		const char *desc =
		    "As no suitable kallsyms nor vmlinux was found, kernel samples\n"
		    "can't be resolved.";

		if (kernel_map && map__has_symbols(kernel_map)) {
			desc = "If some relocation was applied (e.g. "
			       "kexec) symbols may be misresolved.";
		}

		ui__warning(
			"Kernel address maps (/proc/{kallsyms,modules}) were restricted.\n\n"
			"Check /proc/sys/kernel/kptr_restrict before running 'perf record'.\n\n%s\n\n"
			"Samples in kernel modules can't be resolved as well.\n\n",
			desc);
	}
}
|
|
|
|
|
2014-01-08 15:22:07 +00:00
|
|
|
static int report__gtk_browse_hists(struct report *rep, const char *help)
|
|
|
|
{
|
2019-07-21 11:23:52 +00:00
|
|
|
int (*hist_browser)(struct evlist *evlist, const char *help,
|
2014-01-08 15:22:07 +00:00
|
|
|
struct hist_browser_timer *timer, float min_pcnt);
|
|
|
|
|
2020-11-30 17:23:35 +00:00
|
|
|
hist_browser = dlsym(perf_gtk_handle, "evlist__gtk_browse_hists");
|
2014-01-08 15:22:07 +00:00
|
|
|
|
|
|
|
if (hist_browser == NULL) {
|
|
|
|
ui__error("GTK browser not found!\n");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
|
|
|
return hist_browser(rep->session->evlist, help, NULL, rep->min_percent);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int report__browse_hists(struct report *rep)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
struct perf_session *session = rep->session;
|
2019-07-21 11:23:52 +00:00
|
|
|
struct evlist *evlist = session->evlist;
|
2021-11-18 07:38:04 +00:00
|
|
|
char *help = NULL, *path = NULL;
|
2016-01-09 10:16:29 +00:00
|
|
|
|
2021-11-18 07:38:04 +00:00
|
|
|
path = system_path(TIPDIR);
|
|
|
|
if (perf_tip(&help, path) || help == NULL) {
|
2016-01-09 10:16:29 +00:00
|
|
|
/* fallback for people who don't install perf ;-) */
|
2021-11-18 07:38:04 +00:00
|
|
|
free(path);
|
|
|
|
path = system_path(DOCDIR);
|
|
|
|
if (perf_tip(&help, path) || help == NULL)
|
|
|
|
help = strdup("Cannot load tips.txt file, please install perf!");
|
2016-01-09 10:16:29 +00:00
|
|
|
}
|
2021-11-18 07:38:04 +00:00
|
|
|
free(path);
|
2014-01-08 15:22:07 +00:00
|
|
|
|
|
|
|
switch (use_browser) {
|
|
|
|
case 1:
|
2019-11-07 07:47:19 +00:00
|
|
|
if (rep->total_cycles_mode) {
|
2020-11-30 17:23:35 +00:00
|
|
|
ret = evlist__tui_block_hists_browse(evlist, rep);
|
2019-11-07 07:47:19 +00:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2020-11-30 17:23:35 +00:00
|
|
|
ret = evlist__tui_browse_hists(evlist, help, NULL, rep->min_percent,
|
|
|
|
&session->header.env, true, &rep->annotation_opts);
|
2014-01-08 15:22:07 +00:00
|
|
|
/*
|
|
|
|
* Usually "ret" is the last pressed key, and we only
|
|
|
|
* care if the key notifies us to switch data file.
|
|
|
|
*/
|
2020-02-20 01:36:15 +00:00
|
|
|
if (ret != K_SWITCH_INPUT_DATA && ret != K_RELOAD)
|
2014-01-08 15:22:07 +00:00
|
|
|
ret = 0;
|
|
|
|
break;
|
|
|
|
case 2:
|
|
|
|
ret = report__gtk_browse_hists(rep, help);
|
|
|
|
break;
|
|
|
|
default:
|
2020-11-30 17:23:35 +00:00
|
|
|
ret = evlist__tty_browse_hists(evlist, rep, help);
|
2014-01-08 15:22:07 +00:00
|
|
|
break;
|
|
|
|
}
|
2021-11-18 07:38:04 +00:00
|
|
|
free(help);
|
2014-01-08 15:22:07 +00:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2016-02-16 14:08:26 +00:00
|
|
|
static int report__collapse_hists(struct report *rep)
|
2014-01-08 17:45:24 +00:00
|
|
|
{
|
|
|
|
struct ui_progress prog;
|
2019-07-21 11:23:51 +00:00
|
|
|
struct evsel *pos;
|
2016-02-16 14:08:26 +00:00
|
|
|
int ret = 0;
|
2014-01-08 17:45:24 +00:00
|
|
|
|
2014-04-22 00:47:25 +00:00
|
|
|
ui_progress__init(&prog, rep->nr_entries, "Merging related events...");
|
2014-01-08 17:45:24 +00:00
|
|
|
|
2016-06-23 14:26:15 +00:00
|
|
|
evlist__for_each_entry(rep->session->evlist, pos) {
|
2014-10-09 16:13:41 +00:00
|
|
|
struct hists *hists = evsel__hists(pos);
|
2014-01-08 17:45:24 +00:00
|
|
|
|
2021-07-06 15:16:59 +00:00
|
|
|
if (pos->core.idx == 0)
|
2014-01-08 17:45:24 +00:00
|
|
|
hists->symbol_filter_str = rep->symbol_filter_str;
|
|
|
|
|
2015-09-04 14:45:44 +00:00
|
|
|
hists->socket_filter = rep->socket_filter;
|
|
|
|
|
2016-02-16 14:08:26 +00:00
|
|
|
ret = hists__collapse_resort(hists, &prog);
|
|
|
|
if (ret < 0)
|
|
|
|
break;
|
2014-01-08 17:45:24 +00:00
|
|
|
|
|
|
|
/* Non-group events are considered as leader */
|
2020-04-30 13:51:16 +00:00
|
|
|
if (symbol_conf.event_group && !evsel__is_group_leader(pos)) {
|
2021-07-06 15:17:00 +00:00
|
|
|
struct hists *leader_hists = evsel__hists(evsel__leader(pos));
|
2014-01-08 17:45:24 +00:00
|
|
|
|
|
|
|
hists__match(leader_hists, hists);
|
|
|
|
hists__link(leader_hists, hists);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
ui_progress__finish();
|
2016-02-16 14:08:26 +00:00
|
|
|
return ret;
|
2014-01-08 17:45:24 +00:00
|
|
|
}
|
|
|
|
|
2019-02-04 14:18:08 +00:00
|
|
|
static int hists__resort_cb(struct hist_entry *he, void *arg)
|
|
|
|
{
|
|
|
|
struct report *rep = arg;
|
|
|
|
struct symbol *sym = he->ms.sym;
|
|
|
|
|
|
|
|
if (rep->symbol_ipc && sym && !sym->annotate2) {
|
2019-07-21 11:23:51 +00:00
|
|
|
struct evsel *evsel = hists_to_evsel(he->hists);
|
2019-02-04 14:18:08 +00:00
|
|
|
|
2019-11-04 14:10:00 +00:00
|
|
|
symbol__annotate2(&he->ms, evsel,
|
2019-02-04 14:18:08 +00:00
|
|
|
&annotation__default_options, NULL);
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2014-12-22 04:44:10 +00:00
|
|
|
static void report__output_resort(struct report *rep)
|
|
|
|
{
|
|
|
|
struct ui_progress prog;
|
2019-07-21 11:23:51 +00:00
|
|
|
struct evsel *pos;
|
2014-12-22 04:44:10 +00:00
|
|
|
|
|
|
|
ui_progress__init(&prog, rep->nr_entries, "Sorting events for output...");
|
|
|
|
|
2019-02-04 14:18:08 +00:00
|
|
|
evlist__for_each_entry(rep->session->evlist, pos) {
|
2020-05-06 15:58:55 +00:00
|
|
|
evsel__output_resort_cb(pos, &prog, hists__resort_cb, rep);
|
2019-02-04 14:18:08 +00:00
|
|
|
}
|
2014-12-22 04:44:10 +00:00
|
|
|
|
|
|
|
ui_progress__finish();
|
|
|
|
}
|
|
|
|
|
2021-04-27 01:37:14 +00:00
|
|
|
static int count_sample_event(struct perf_tool *tool __maybe_unused,
|
|
|
|
union perf_event *event __maybe_unused,
|
|
|
|
struct perf_sample *sample __maybe_unused,
|
|
|
|
struct evsel *evsel,
|
|
|
|
struct machine *machine __maybe_unused)
|
|
|
|
{
|
|
|
|
struct hists *hists = evsel__hists(evsel);
|
|
|
|
|
|
|
|
hists__inc_nr_events(hists);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2022-09-01 19:57:39 +00:00
|
|
|
static int count_lost_samples_event(struct perf_tool *tool,
|
|
|
|
union perf_event *event,
|
|
|
|
struct perf_sample *sample,
|
|
|
|
struct machine *machine __maybe_unused)
|
|
|
|
{
|
|
|
|
struct report *rep = container_of(tool, struct report, tool);
|
|
|
|
struct evsel *evsel;
|
|
|
|
|
|
|
|
evsel = evlist__id2evsel(rep->session->evlist, sample->id);
|
|
|
|
if (evsel) {
|
|
|
|
hists__inc_nr_lost_samples(evsel__hists(evsel),
|
|
|
|
event->lost_samples.lost);
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-06-30 04:30:58 +00:00
|
|
|
static int process_attr(struct perf_tool *tool __maybe_unused,
|
|
|
|
union perf_event *event,
|
|
|
|
struct evlist **pevlist);
|
|
|
|
|
2018-01-07 16:03:55 +00:00
|
|
|
static void stats_setup(struct report *rep)
|
|
|
|
{
|
|
|
|
memset(&rep->tool, 0, sizeof(rep->tool));
|
2021-06-30 04:30:58 +00:00
|
|
|
rep->tool.attr = process_attr;
|
2021-04-27 01:37:14 +00:00
|
|
|
rep->tool.sample = count_sample_event;
|
2022-09-01 19:57:39 +00:00
|
|
|
rep->tool.lost_samples = count_lost_samples_event;
|
2018-01-07 16:03:55 +00:00
|
|
|
rep->tool.no_warn = true;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int stats_print(struct report *rep)
|
|
|
|
{
|
|
|
|
struct perf_session *session = rep->session;
|
|
|
|
|
2021-04-27 01:37:15 +00:00
|
|
|
perf_session__fprintf_nr_events(session, stdout, rep->skip_empty);
|
|
|
|
evlist__fprintf_nr_events(session->evlist, stdout, rep->skip_empty);
|
2018-01-07 16:03:55 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2018-01-07 16:03:56 +00:00
|
|
|
static void tasks_setup(struct report *rep)
|
|
|
|
{
|
|
|
|
memset(&rep->tool, 0, sizeof(rep->tool));
|
perf report: Ask for ordered events for --tasks option
If we have timestamps in the events, keep the events in time order.
Committer notes:
To be more verbose: what actual effect will this have in this particular
case?
Before and after this patch shows the artifacts:
--- /tmp/before 2018-02-06 15:40:29.536411625 -0300
+++ /tmp/after 2018-02-06 15:40:51.963403599 -0300
@@ -5,34 +5,34 @@
2540 2540 1818 | gnome-terminal-
3489 3489 2540 | bash
32433 32433 3489 | perf
- 32434 32434 32433 | perf
+ 32434 32434 32433 | make
32441 32441 32434 | make
32514 32514 32441 | make
511 511 32514 | sh
- 512 512 511 | sh
+ 512 512 511 | install
<SNIP>
We don't have 'perf' calling 'perf' calling 'make', etc.; the second
'perf' actually is 'make', i.e. there was reordering of the relevant
PERF_RECORD_COMM and PERF_RECORD_FORK records.
Ditto for sh/install later on.
Look for FORK and COMM meta events, for those tids:
# perf report -D | egrep 'PERF_RECORD_(FORK|COMM)' | egrep '3243[34]'
0 14774650990679 0x1a3cd8 [0x38]: PERF_RECORD_FORK(32433:32433):(3489:3489)
1 14774652080381 0x1d6568 [0x30]: PERF_RECORD_COMM exec: perf:32433/32433
1 14774742473340 0x1dbb48 [0x38]: PERF_RECORD_FORK(32434:32434):(32433:32433)
0 14774752005779 0x1a4af8 [0x30]: PERF_RECORD_COMM exec: make:32434/32434
0 14774753997960 0x1a5578 [0x38]: PERF_RECORD_FORK(32435:32435):(32434:32434)
0 14774756070782 0x1a5618 [0x38]: PERF_RECORD_FORK(32438:32438):(32434:32434)
0 14774757772939 0x1a5680 [0x38]: PERF_RECORD_FORK(32440:32440):(32434:32434)
0 14774758230600 0x1a56e8 [0x38]: PERF_RECORD_FORK(32441:32441):(32434:32434)
#
First column is the cpu, second is the timestamp.
So they are on different CPUs, and thus in different ring buffers; when we
don't use the ordered_events class, we end up mixing that up. Use it to
take advantage of the PERF_RECORD_FINISHED_ROUND meta events and order the
events using the PERF_SAMPLE_TIME present in the
PERF_RECORD_{FORK,COMM,EXIT,SAMPLE,etc} records in the ring buffer.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180206181813.10943-2-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-02-06 18:17:57 +00:00
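The reordering described above is, in essence, a sort by PERF_SAMPLE_TIME once events from different per-CPU ring buffers are merged. The following is only an illustrative sketch with made-up record structs, not the ordered_events implementation, but it shows the FORK/COMM pair from the dump coming out in the right order once timestamps are honoured:

#include <stdio.h>
#include <stdlib.h>

/* Illustrative record: only the fields needed to show the reordering. */
struct rec {
	unsigned long long time;	/* PERF_SAMPLE_TIME */
	int cpu;
	const char *what;
};

static int cmp_time(const void *a, const void *b)
{
	const struct rec *ra = a, *rb = b;

	if (ra->time != rb->time)
		return ra->time < rb->time ? -1 : 1;
	return 0;
}

int main(void)
{
	/* As drained per CPU, i.e. not in global time order. */
	struct rec recs[] = {
		{ 14774752005779ULL, 0, "PERF_RECORD_COMM exec: make:32434/32434" },
		{ 14774742473340ULL, 1, "PERF_RECORD_FORK(32434:32434):(32433:32433)" },
	};
	int i;

	qsort(recs, sizeof(recs) / sizeof(recs[0]), sizeof(recs[0]), cmp_time);
	for (i = 0; i < (int)(sizeof(recs) / sizeof(recs[0])); i++)
		printf("cpu%d %llu %s\n", recs[i].cpu, recs[i].time, recs[i].what);
	return 0;
}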
|
|
|
rep->tool.ordered_events = true;
|
2018-01-09 18:25:03 +00:00
|
|
|
if (rep->mmaps_mode) {
|
|
|
|
rep->tool.mmap = perf_event__process_mmap;
|
|
|
|
rep->tool.mmap2 = perf_event__process_mmap2;
|
|
|
|
}
|
2021-06-30 04:30:58 +00:00
|
|
|
rep->tool.attr = process_attr;
|
2018-01-07 16:03:56 +00:00
|
|
|
rep->tool.comm = perf_event__process_comm;
|
|
|
|
rep->tool.exit = perf_event__process_exit;
|
|
|
|
rep->tool.fork = perf_event__process_fork;
|
|
|
|
rep->tool.no_warn = true;
|
|
|
|
}
|
|
|
|
|
|
|
|
struct task {
|
|
|
|
struct thread *thread;
|
|
|
|
struct list_head list;
|
|
|
|
struct list_head children;
|
|
|
|
};
|
|
|
|
|
|
|
|
static struct task *tasks_list(struct task *task, struct machine *machine)
|
|
|
|
{
|
|
|
|
struct thread *parent_thread, *thread = task->thread;
|
|
|
|
struct task *parent_task;
|
|
|
|
|
|
|
|
/* Already listed. */
|
|
|
|
if (!list_empty(&task->list))
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
/* Last one in the chain. */
|
|
|
|
if (thread->ppid == -1)
|
|
|
|
return task;
|
|
|
|
|
|
|
|
parent_thread = machine__find_thread(machine, -1, thread->ppid);
|
|
|
|
if (!parent_thread)
|
|
|
|
return ERR_PTR(-ENOENT);
|
|
|
|
|
|
|
|
parent_task = thread__priv(parent_thread);
|
|
|
|
list_add_tail(&task->list, &parent_task->children);
|
|
|
|
return tasks_list(parent_task, machine);
|
|
|
|
}
|
|
|
|
|
2018-01-09 18:25:03 +00:00
|
|
|
static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
|
|
|
|
{
|
|
|
|
size_t printed = 0;
|
2019-10-28 14:31:38 +00:00
|
|
|
struct map *map;
|
2018-01-09 18:25:03 +00:00
|
|
|
|
2019-10-28 14:31:38 +00:00
|
|
|
maps__for_each_entry(maps, map) {
|
2018-01-09 18:25:03 +00:00
|
|
|
printed += fprintf(fp, "%*s %" PRIx64 "-%" PRIx64 " %c%c%c%c %08" PRIx64 " %" PRIu64 " %s\n",
|
|
|
|
indent, "", map->start, map->end,
|
|
|
|
map->prot & PROT_READ ? 'r' : '-',
|
|
|
|
map->prot & PROT_WRITE ? 'w' : '-',
|
|
|
|
map->prot & PROT_EXEC ? 'x' : '-',
|
|
|
|
map->flags & MAP_SHARED ? 's' : 'p',
|
|
|
|
map->pgoff,
|
perf dso: Move dso_id from 'struct map' to 'struct dso'
Take it into account when looking up DSOs whenever we have the dso_id
fields obtained from somewhere, such as from PERF_RECORD_MMAP2 records.
Instances of struct map pointing to the same DSO pathname but with
anything in dso_id different are in fact different DSOs, so it is better to
have different 'struct dso' instances to reflect that. At some point we may
want to get copies of the contents of the different objects if we want
to do correct annotation or other analysis.
With this we get 'struct map' 24 bytes leaner:
$ pahole -C map ~/bin/perf
struct map {
union {
struct rb_node rb_node __attribute__((__aligned__(8))); /* 0 24 */
struct list_head node; /* 0 16 */
} __attribute__((__aligned__(8))); /* 0 24 */
u64 start; /* 24 8 */
u64 end; /* 32 8 */
_Bool erange_warned:1; /* 40: 0 1 */
_Bool priv:1; /* 40: 1 1 */
/* XXX 6 bits hole, try to pack */
/* XXX 3 bytes hole, try to pack */
u32 prot; /* 44 4 */
u64 pgoff; /* 48 8 */
u64 reloc; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
u64 (*map_ip)(struct map *, u64); /* 64 8 */
u64 (*unmap_ip)(struct map *, u64); /* 72 8 */
struct dso * dso; /* 80 8 */
refcount_t refcnt; /* 88 4 */
u32 flags; /* 92 4 */
/* size: 96, cachelines: 2, members: 13 */
/* sum members: 92, holes: 1, sum holes: 3 */
/* sum bitfield members: 2 bits, bit holes: 1, sum bit holes: 6 bits */
/* forced alignments: 1 */
/* last cacheline: 32 bytes */
} __attribute__((__aligned__(8)));
$
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lkml.kernel.org/n/tip-g4hxxmraplo7wfjmk384mfsb@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-19 21:44:22 +00:00
|
|
|
map->dso->id.ino, map->dso->name);
|
2018-01-09 18:25:03 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
return printed;
|
|
|
|
}
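Note that maps__fprintf_task() above prints map->dso->id.ino, i.e. part of the dso_id that, per the commit message above, now lives in 'struct dso'. A minimal sketch of comparing DSO identity by name plus dso_id, with a simplified stand-in for the real struct dso_id from util/dso.h (field names assumed):

#include <stdint.h>
#include <string.h>

/* Simplified stand-in for perf's struct dso_id; exact fields are assumed. */
struct simple_dso_id {
	uint32_t maj, min;		/* device the file lives on */
	uint64_t ino, ino_generation;	/* inode identity */
};

static int dso_id__equal(const struct simple_dso_id *a,
			 const struct simple_dso_id *b)
{
	return a->maj == b->maj && a->min == b->min &&
	       a->ino == b->ino && a->ino_generation == b->ino_generation;
}

/*
 * Two maps with the same pathname but different dso_id must resolve to
 * different DSOs, so identity is (name, dso_id), not the name alone.
 */
static int dso__same(const char *name_a, const struct simple_dso_id *id_a,
		     const char *name_b, const struct simple_dso_id *id_b)
{
	return !strcmp(name_a, name_b) && dso_id__equal(id_a, id_b);
}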
|
|
|
|
|
2018-01-07 16:03:56 +00:00
|
|
|
static void task__print_level(struct task *task, FILE *fp, int level)
|
|
|
|
{
|
|
|
|
struct thread *thread = task->thread;
|
|
|
|
struct task *child;
|
2018-01-09 18:25:03 +00:00
|
|
|
int comm_indent = fprintf(fp, " %8d %8d %8d |%*s",
|
|
|
|
thread->pid_, thread->tid, thread->ppid,
|
|
|
|
level, "");
|
|
|
|
|
|
|
|
fprintf(fp, "%s\n", thread__comm_str(thread));
|
2018-01-07 16:03:56 +00:00
|
|
|
|
2019-11-26 01:07:43 +00:00
|
|
|
maps__fprintf_task(thread->maps, comm_indent, fp);
|
2018-01-07 16:03:56 +00:00
|
|
|
|
|
|
|
if (!list_empty(&task->children)) {
|
|
|
|
list_for_each_entry(child, &task->children, list)
|
|
|
|
task__print_level(child, fp, level + 1);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static int tasks_print(struct report *rep, FILE *fp)
|
|
|
|
{
|
|
|
|
struct perf_session *session = rep->session;
|
|
|
|
struct machine *machine = &session->machines.host;
|
|
|
|
struct task *tasks, *task;
|
|
|
|
unsigned int nr = 0, itask = 0, i;
|
|
|
|
struct rb_node *nd;
|
|
|
|
LIST_HEAD(list);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* No locking needed while accessing machine->threads,
|
|
|
|
* because --tasks is single threaded command.
|
|
|
|
*/
|
|
|
|
|
|
|
|
/* Count all the threads. */
|
|
|
|
for (i = 0; i < THREADS__TABLE_SIZE; i++)
|
|
|
|
nr += machine->threads[i].nr;
|
|
|
|
|
|
|
|
tasks = malloc(sizeof(*tasks) * nr);
|
|
|
|
if (!tasks)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
for (i = 0; i < THREADS__TABLE_SIZE; i++) {
|
|
|
|
struct threads *threads = &machine->threads[i];
|
|
|
|
|
2018-12-06 19:18:14 +00:00
|
|
|
for (nd = rb_first_cached(&threads->entries); nd;
|
|
|
|
nd = rb_next(nd)) {
|
2018-01-07 16:03:56 +00:00
|
|
|
task = tasks + itask++;
|
|
|
|
|
|
|
|
task->thread = rb_entry(nd, struct thread, rb_node);
|
|
|
|
INIT_LIST_HEAD(&task->children);
|
|
|
|
INIT_LIST_HEAD(&task->list);
|
|
|
|
thread__set_priv(task->thread, task);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Iterate every task down to the unprocessed parent
|
|
|
|
* and link all in task children list. Task with no
|
|
|
|
* parent is added into 'list'.
|
|
|
|
*/
|
|
|
|
for (itask = 0; itask < nr; itask++) {
|
|
|
|
task = tasks + itask;
|
|
|
|
|
|
|
|
if (!list_empty(&task->list))
|
|
|
|
continue;
|
|
|
|
|
|
|
|
task = tasks_list(task, machine);
|
|
|
|
if (IS_ERR(task)) {
|
|
|
|
pr_err("Error: failed to process tasks\n");
|
|
|
|
free(tasks);
|
|
|
|
return PTR_ERR(task);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (task)
|
|
|
|
list_add_tail(&task->list, &list);
|
|
|
|
}
|
|
|
|
|
|
|
|
fprintf(fp, "# %8s %8s %8s %s\n", "pid", "tid", "ppid", "comm");
|
|
|
|
|
|
|
|
list_for_each_entry(task, &list, list)
|
|
|
|
task__print_level(task, fp, 0);
|
|
|
|
|
|
|
|
free(tasks);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2013-12-19 17:53:53 +00:00
|
|
|
static int __cmd_report(struct report *rep)
|
2009-10-07 10:47:31 +00:00
|
|
|
{
|
2014-01-08 17:45:24 +00:00
|
|
|
int ret;
|
2012-03-08 22:47:47 +00:00
|
|
|
struct perf_session *session = rep->session;
|
2019-07-21 11:23:51 +00:00
|
|
|
struct evsel *pos;
|
2017-01-23 21:07:59 +00:00
|
|
|
struct perf_data *data = session->data;
|
2009-05-18 15:45:42 +00:00
|
|
|
|
2010-04-02 04:59:17 +00:00
|
|
|
signal(SIGINT, sig_handler);
|
|
|
|
|
2011-11-17 14:19:04 +00:00
|
|
|
if (rep->cpu_list) {
|
|
|
|
ret = perf_session__cpu_bitmap(session, rep->cpu_list,
|
|
|
|
rep->cpu_bitmap);
|
2015-11-27 17:32:37 +00:00
|
|
|
if (ret) {
|
|
|
|
ui__error("failed to set cpu bitmap\n");
|
2013-06-25 11:54:13 +00:00
|
|
|
return ret;
|
2015-11-27 17:32:37 +00:00
|
|
|
}
|
2017-05-26 08:17:38 +00:00
|
|
|
session->itrace_synth_opts->cpu_bitmap = rep->cpu_bitmap;
|
2011-07-04 11:57:50 +00:00
|
|
|
}
|
|
|
|
|
2016-10-14 21:23:11 +00:00
|
|
|
if (rep->show_threads) {
|
|
|
|
ret = perf_read_values_init(&rep->show_threads_values);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
}
|
2009-06-18 21:22:55 +00:00
|
|
|
|
2013-12-19 17:53:53 +00:00
|
|
|
ret = report__setup_sample_type(rep);
|
2015-11-27 17:32:37 +00:00
|
|
|
if (ret) {
|
|
|
|
/* report__setup_sample_type() already showed error message */
|
2013-06-25 11:54:13 +00:00
|
|
|
return ret;
|
2015-11-27 17:32:37 +00:00
|
|
|
}
|
2009-12-27 23:37:02 +00:00
|
|
|
|
2018-01-07 16:03:55 +00:00
|
|
|
if (rep->stats_mode)
|
|
|
|
stats_setup(rep);
|
|
|
|
|
2018-01-07 16:03:56 +00:00
|
|
|
if (rep->tasks_mode)
|
|
|
|
tasks_setup(rep);
|
|
|
|
|
2015-03-03 14:58:45 +00:00
|
|
|
ret = perf_session__process_events(session);
|
2015-11-27 17:32:37 +00:00
|
|
|
if (ret) {
|
|
|
|
ui__error("failed to process sample\n");
|
2013-06-25 11:54:13 +00:00
|
|
|
return ret;
|
2015-11-27 17:32:37 +00:00
|
|
|
}
|
2009-05-26 16:48:58 +00:00
|
|
|
|
2021-05-27 00:16:09 +00:00
|
|
|
evlist__check_mem_load_aux(session->evlist);
|
|
|
|
|
2018-01-07 16:03:55 +00:00
|
|
|
if (rep->stats_mode)
|
|
|
|
return stats_print(rep);
|
|
|
|
|
2018-01-07 16:03:56 +00:00
|
|
|
if (rep->tasks_mode)
|
|
|
|
return tasks_print(rep, stdout);
|
|
|
|
|
2014-01-08 13:10:00 +00:00
|
|
|
report__warn_kptr_restrict(rep);
|
perf symbols: Handle /proc/sys/kernel/kptr_restrict
Perf uses /proc/modules to figure out where kernel modules are loaded.
With the advent of kptr_restrict, non root users get zeroes for all module
start addresses.
So check if kptr_restrict is non-zero and don't generate the synthetic
PERF_RECORD_MMAP events for them.
Warn the user about it in perf record and in perf report.
In perf report, the reference relocation symbol being zero means that
kptr_restrict was set, and thus /proc/kallsyms has only zeroed addresses, so
don't use it to fix up symbol addresses when using a valid kallsyms (in the
buildid cache) or vmlinux (in the vmlinux path) build-id located
automatically or specified by the user.
Provide an explanation about it in 'perf report' if kernel samples were taken,
checking if a suitable vmlinux or kallsyms was found/specified.
Restricted /proc/kallsyms don't go to the buildid cache anymore.
Example:
[acme@emilia ~]$ perf record -F 100000 sleep 1
WARNING: Kernel address maps (/proc/{kallsyms,modules}) are restricted, check
/proc/sys/kernel/kptr_restrict.
Samples in kernel functions may not be resolved if a suitable vmlinux file is
not found in the buildid cache or in the vmlinux path.
Samples in kernel modules won't be resolved at all.
If some relocation was applied (e.g. kexec) symbols may be misresolved even
with a suitable vmlinux or kallsyms file.
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.005 MB perf.data (~231 samples) ]
[acme@emilia ~]$
[acme@emilia ~]$ perf report --stdio
Kernel address maps (/proc/{kallsyms,modules}) were restricted,
check /proc/sys/kernel/kptr_restrict before running 'perf record'.
If some relocation was applied (e.g. kexec) symbols may be misresolved.
Samples in kernel modules can't be resolved as well.
# Events: 13 cycles
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. .....................
#
20.24% sleep [kernel.kallsyms] [k] page_fault
20.04% sleep [kernel.kallsyms] [k] filemap_fault
19.78% sleep [kernel.kallsyms] [k] __lru_cache_add
19.69% sleep ld-2.12.so [.] memcpy
14.71% sleep [kernel.kallsyms] [k] dput
4.70% sleep [kernel.kallsyms] [k] flush_signal_handlers
0.73% sleep [kernel.kallsyms] [k] perf_event_comm
0.11% sleep [kernel.kallsyms] [k] native_write_msr_safe
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[acme@emilia ~]$
This is because it found a suitable vmlinux (build-id checked) in
/lib/modules/2.6.39-rc7+/build/vmlinux (use -v in perf report to see the long
file name).
If we remove that file from the vmlinux path:
[root@emilia ~]# mv /lib/modules/2.6.39-rc7+/build/vmlinux \
/lib/modules/2.6.39-rc7+/build/vmlinux.OFF
[acme@emilia ~]$ perf report --stdio
[kernel.kallsyms] with build id 57298cdbe0131f6871667ec0eaab4804dcf6f562
not found, continuing without symbols
Kernel address maps (/proc/{kallsyms,modules}) were restricted, check
/proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples can't be
resolved.
Samples in kernel modules can't be resolved as well.
# Events: 13 cycles
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. ......
#
80.31% sleep [kernel.kallsyms] [k] 0xffffffff8103425a
19.69% sleep ld-2.12.so [.] memcpy
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[acme@emilia ~]$
Reported-by: Stephane Eranian <eranian@google.com>
Suggested-by: David Miller <davem@davemloft.net>
Cc: Dave Jones <davej@redhat.com>
Cc: David Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Kees Cook <kees.cook@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
Link: http://lkml.kernel.org/n/tip-mt512joaxxbhhp1odop04yit@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2011-05-26 12:53:51 +00:00
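On the record side, the decision described above boils down to looking at the kptr_restrict sysctl before synthesizing kernel and module MMAP events. A minimal sketch of such a check follows; perf's actual test is more involved (it also considers who is running the tool), so treat this as an approximation:

#include <stdio.h>

static int kptr_restrict_is_set(void)
{
	FILE *fp = fopen("/proc/sys/kernel/kptr_restrict", "r");
	int val = 0;

	if (!fp)
		return 0;	/* no sysctl available, assume unrestricted */
	if (fscanf(fp, "%d", &val) != 1)
		val = 0;
	fclose(fp);
	return val != 0;
}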
|
|
|
|
2016-06-23 14:26:15 +00:00
|
|
|
evlist__for_each_entry(session->evlist, pos)
|
2014-12-22 04:44:09 +00:00
|
|
|
rep->nr_entries += evsel__hists(pos)->nr_entries;
|
|
|
|
|
2013-12-20 05:11:13 +00:00
|
|
|
if (use_browser == 0) {
|
|
|
|
if (verbose > 3)
|
|
|
|
perf_session__fprintf(session, stdout);
|
2009-06-04 16:54:00 +00:00
|
|
|
|
2013-12-20 05:11:13 +00:00
|
|
|
if (verbose > 2)
|
|
|
|
perf_session__fprintf_dsos(session, stdout);
|
2009-05-27 07:10:38 +00:00
|
|
|
|
2013-12-20 05:11:13 +00:00
|
|
|
if (dump_trace) {
|
2021-04-27 01:37:15 +00:00
|
|
|
perf_session__fprintf_nr_events(session, stdout,
|
|
|
|
rep->skip_empty);
|
|
|
|
evlist__fprintf_nr_events(session->evlist, stdout,
|
|
|
|
rep->skip_empty);
|
2013-12-20 05:11:13 +00:00
|
|
|
return 0;
|
|
|
|
}
|
2012-08-07 13:20:46 +00:00
|
|
|
}
|
|
|
|
|
2016-02-16 14:08:26 +00:00
|
|
|
ret = report__collapse_hists(rep);
|
|
|
|
if (ret) {
|
|
|
|
ui__error("failed to process hist entry\n");
|
|
|
|
return ret;
|
|
|
|
}
|
2011-03-06 00:40:06 +00:00
|
|
|
|
2013-09-17 19:34:28 +00:00
|
|
|
if (session_done())
|
|
|
|
return 0;
|
|
|
|
|
2014-12-22 04:44:10 +00:00
|
|
|
/*
|
|
|
|
* recalculate number of entries after collapsing since it
|
|
|
|
* might be changed during the collapse phase.
|
|
|
|
*/
|
|
|
|
rep->nr_entries = 0;
|
2016-06-23 14:26:15 +00:00
|
|
|
evlist__for_each_entry(session->evlist, pos)
|
2014-12-22 04:44:10 +00:00
|
|
|
rep->nr_entries += evsel__hists(pos)->nr_entries;
|
|
|
|
|
2014-04-22 00:47:25 +00:00
|
|
|
if (rep->nr_entries == 0) {
|
2019-02-21 09:41:30 +00:00
|
|
|
ui__error("The %s data has no samples!\n", data->path);
|
2013-06-25 11:54:13 +00:00
|
|
|
return 0;
|
2010-03-05 15:51:09 +00:00
|
|
|
}
|
|
|
|
|
2014-12-22 04:44:10 +00:00
|
|
|
report__output_resort(rep);
|
2013-01-22 09:09:32 +00:00
|
|
|
|
perf report: Sort by sampled cycles percent per block for stdio
2019-11-07 07:47:17 +00:00
|
|
|
if (rep->total_cycles_mode) {
|
2020-02-02 14:16:54 +00:00
|
|
|
int block_hpps[6] = {
|
|
|
|
PERF_HPP_REPORT__BLOCK_TOTAL_CYCLES_PCT,
|
|
|
|
PERF_HPP_REPORT__BLOCK_LBR_CYCLES,
|
|
|
|
PERF_HPP_REPORT__BLOCK_CYCLES_PCT,
|
|
|
|
PERF_HPP_REPORT__BLOCK_AVG_CYCLES,
|
|
|
|
PERF_HPP_REPORT__BLOCK_RANGE,
|
|
|
|
PERF_HPP_REPORT__BLOCK_DSO,
|
|
|
|
};
|
|
|
|
|
perf report: Sort by sampled cycles percent per block for stdio
2019-11-07 07:47:17 +00:00
|
|
|
rep->block_reports = block_info__create_report(session->evlist,
|
2020-02-02 14:16:54 +00:00
|
|
|
rep->total_cycles,
|
|
|
|
block_hpps, 6,
|
|
|
|
&rep->nr_block_reports);
|
perf report: Sort by sampled cycles percent per block for stdio
2019-11-07 07:47:17 +00:00
|
|
|
if (!rep->block_reports)
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
2014-01-08 15:22:07 +00:00
|
|
|
return report__browse_hists(rep);
|
2009-05-18 15:45:42 +00:00
|
|
|
}
|
|
|
|
|
2009-07-02 15:58:21 +00:00
|
|
|
static int
|
2014-04-07 18:55:24 +00:00
|
|
|
report_parse_callchain_opt(const struct option *opt, const char *arg, int unset)
|
2009-07-02 15:58:21 +00:00
|
|
|
{
|
2016-04-18 14:54:31 +00:00
|
|
|
struct callchain_param *callchain = opt->value;
|
2009-07-02 18:14:33 +00:00
|
|
|
|
2016-04-18 14:54:31 +00:00
|
|
|
callchain->enabled = !unset;
|
2010-01-05 13:54:45 +00:00
|
|
|
/*
|
|
|
|
* --no-call-graph
|
|
|
|
*/
|
|
|
|
if (unset) {
|
2016-04-18 14:54:31 +00:00
|
|
|
symbol_conf.use_callchain = false;
|
|
|
|
callchain->mode = CHAIN_NONE;
|
2010-01-05 13:54:45 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2014-04-07 18:55:24 +00:00
|
|
|
return parse_callchain_report_opt(arg);
|
2009-07-02 15:58:21 +00:00
|
|
|
}
|
|
|
|
|
2019-03-05 14:47:48 +00:00
|
|
|
static int
|
|
|
|
parse_time_quantum(const struct option *opt, const char *arg,
|
|
|
|
int unset __maybe_unused)
|
|
|
|
{
|
|
|
|
unsigned long *time_q = opt->value;
|
|
|
|
char *end;
|
|
|
|
|
|
|
|
*time_q = strtoul(arg, &end, 0);
|
|
|
|
if (end == arg)
|
|
|
|
goto parse_err;
|
|
|
|
if (*time_q == 0) {
|
|
|
|
pr_err("time quantum cannot be 0");
|
|
|
|
return -1;
|
|
|
|
}
|
2019-06-26 14:24:37 +00:00
|
|
|
end = skip_spaces(end);
|
2019-03-05 14:47:48 +00:00
|
|
|
if (*end == 0)
|
|
|
|
return 0;
|
|
|
|
if (!strcmp(end, "s")) {
|
|
|
|
*time_q *= NSEC_PER_SEC;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
if (!strcmp(end, "ms")) {
|
|
|
|
*time_q *= NSEC_PER_MSEC;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
if (!strcmp(end, "us")) {
|
|
|
|
*time_q *= NSEC_PER_USEC;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
if (!strcmp(end, "ns"))
|
|
|
|
return 0;
|
|
|
|
parse_err:
|
|
|
|
pr_err("Cannot parse time quantum `%s'\n", arg);
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
2012-12-07 05:48:05 +00:00
|
|
|
int
|
|
|
|
report_parse_ignore_callees_opt(const struct option *opt __maybe_unused,
|
|
|
|
const char *arg, int unset __maybe_unused)
|
|
|
|
{
|
|
|
|
if (arg) {
|
|
|
|
int err = regcomp(&ignore_callees_regex, arg, REG_EXTENDED);
|
|
|
|
if (err) {
|
|
|
|
char buf[BUFSIZ];
|
|
|
|
regerror(err, &ignore_callees_regex, buf, sizeof(buf));
|
|
|
|
pr_err("Invalid --ignore-callees regex: %s\n%s", arg, buf);
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
have_ignore_callees = 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2012-03-08 22:47:47 +00:00
|
|
|
static int
|
2016-12-12 13:52:10 +00:00
|
|
|
parse_branch_mode(const struct option *opt,
|
2012-09-10 22:15:03 +00:00
|
|
|
const char *str __maybe_unused, int unset)
|
2012-03-08 22:47:47 +00:00
|
|
|
{
|
2013-04-01 11:35:20 +00:00
|
|
|
int *branch_mode = opt->value;
|
|
|
|
|
|
|
|
*branch_mode = !unset;
|
2012-03-08 22:47:47 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2013-05-14 02:09:04 +00:00
|
|
|
static int
|
|
|
|
parse_percent_limit(const struct option *opt, const char *str,
|
|
|
|
int unset __maybe_unused)
|
|
|
|
{
|
2013-12-19 17:53:53 +00:00
|
|
|
struct report *rep = opt->value;
|
2016-01-27 15:40:50 +00:00
|
|
|
double pcnt = strtof(str, NULL);
|
2013-05-14 02:09:04 +00:00
|
|
|
|
2016-01-27 15:40:50 +00:00
|
|
|
rep->min_percent = pcnt;
|
|
|
|
callchain_param.min_percent = pcnt;
|
2013-05-14 02:09:04 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2020-05-07 09:50:23 +00:00
|
|
|
static int process_attr(struct perf_tool *tool __maybe_unused,
|
|
|
|
union perf_event *event,
|
|
|
|
struct evlist **pevlist)
|
|
|
|
{
|
|
|
|
u64 sample_type;
|
|
|
|
int err;
|
|
|
|
|
|
|
|
err = perf_event__process_attr(tool, event, pevlist);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Check if we need to enable callchains based
|
|
|
|
* on events sample_type.
|
|
|
|
*/
|
2020-06-17 12:24:21 +00:00
|
|
|
sample_type = evlist__combined_sample_type(*pevlist);
|
2021-12-17 15:45:18 +00:00
|
|
|
callchain_param_setup(sample_type, perf_env__arch((*pevlist)->env));
|
2020-05-07 09:50:23 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2017-03-27 14:47:20 +00:00
|
|
|
int cmd_report(int argc, const char **argv)
|
2011-11-25 10:19:45 +00:00
|
|
|
{
|
2012-03-08 22:47:47 +00:00
|
|
|
struct perf_session *session;
|
2015-04-24 19:29:45 +00:00
|
|
|
struct itrace_synth_opts itrace_synth_opts = { .set = 0, };
|
2011-12-07 09:02:54 +00:00
|
|
|
struct stat st;
|
2012-03-08 22:47:47 +00:00
|
|
|
bool has_br_stack = false;
|
2013-04-01 11:35:20 +00:00
|
|
|
int branch_mode = -1;
|
2019-12-20 01:37:19 +00:00
|
|
|
int last_key = 0;
|
2014-11-13 02:05:22 +00:00
|
|
|
bool branch_call_mode = false;
|
2018-05-28 17:34:40 +00:00
|
|
|
#define CALLCHAIN_DEFAULT_OPT "graph,0.5,caller,function,percent"
|
perf tools: Replace automatic const char[] variables by statics
An automatic const char[] variable gets initialized at runtime, just
like any other automatic variable. For long strings, that uses a lot of
stack and wastes time building the string; e.g. for the "No %s
allocation events..." case one has:
444516: 48 b8 4e 6f 20 25 73 20 61 6c movabs $0x6c61207325206f4e,%rax # "No %s al"
...
444674: 48 89 45 80 mov %rax,-0x80(%rbp)
444678: 48 b8 6c 6f 63 61 74 69 6f 6e movabs $0x6e6f697461636f6c,%rax # "location"
444682: 48 89 45 88 mov %rax,-0x78(%rbp)
444686: 48 b8 20 65 76 65 6e 74 73 20 movabs $0x2073746e65766520,%rax # " events "
444690: 66 44 89 55 c4 mov %r10w,-0x3c(%rbp)
444695: 48 89 45 90 mov %rax,-0x70(%rbp)
444699: 48 b8 66 6f 75 6e 64 2e 20 20 movabs $0x20202e646e756f66,%rax
Make them all static so that the compiler just references objects in .rodata.
Committer testing:
Ok, using dwarves's codiff tool:
$ codiff --functions /tmp/perf.before ~/bin/perf
builtin-sched.c:
cmd_sched | -48
1 function changed, 48 bytes removed, diff: -48
builtin-report.c:
cmd_report | -32
1 function changed, 32 bytes removed, diff: -32
builtin-kmem.c:
cmd_kmem | -64
build_alloc_func_list | -50
2 functions changed, 114 bytes removed, diff: -114
builtin-c2c.c:
perf_c2c__report | -390
1 function changed, 390 bytes removed, diff: -390
ui/browsers/header.c:
tui__header_window | -104
1 function changed, 104 bytes removed, diff: -104
/home/acme/bin/perf:
9 functions changed, 688 bytes removed, diff: -688
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20181102230624.20064-1-linux@rasmusvillemoes.dk
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-11-02 23:06:23 +00:00
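The difference is easy to reproduce in a standalone pair of helpers: the automatic array has to be materialised on the stack at every call (the movabs sequence quoted above), while the static one is only a reference into .rodata. An illustrative example, not code from the patch itself:

#include <stdio.h>

static void warn_auto(const char *ev)
{
	/* Automatic array: rebuilt on the stack at every call. */
	const char msg[] = "No %s allocation events found.\n";

	fprintf(stderr, msg, ev);
}

static void warn_static(const char *ev)
{
	/* Static array: lives in .rodata, referenced by address at runtime. */
	static const char msg[] = "No %s allocation events found.\n";

	fprintf(stderr, msg, ev);
}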
|
|
|
static const char report_callchain_help[] = "Display call graph (stack chain/backtrace):\n\n"
|
|
|
|
CALLCHAIN_REPORT_HELP
|
|
|
|
"\n\t\t\t\tDefault: " CALLCHAIN_DEFAULT_OPT;
|
perf tools: Improve call graph documents and help messages
The --call-graph option is complex so we should provide better guide for
users. Also change help message to be consistent with config option
names. Now perf top will show help like below:
$ perf top --call-graph
Error: option `call-graph' requires a value
Usage: perf top [<options>]
--call-graph <record_mode[,record_size],print_type,threshold[,print_limit],order,sort_key[,branch]>
setup and enables call-graph (stack chain/backtrace):
record_mode: call graph recording mode (fp|dwarf|lbr)
record_size: if record_mode is 'dwarf', max size of stack recording (<bytes>)
default: 8192 (bytes)
print_type: call graph printing style (graph|flat|fractal|none)
threshold: minimum call graph inclusion threshold (<percent>)
print_limit: maximum number of call graph entry (<number>)
order: call graph order (caller|callee)
sort_key: call graph sort key (function|address)
branch: include last branch info to call graph (branch)
Default: fp,graph,0.5,caller,function
Requested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Chandler Carruth <chandlerc@gmail.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1445524112-5201-2-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-10-22 14:28:32 +00:00
	char callchain_default_opt[] = CALLCHAIN_DEFAULT_OPT;
	const char * const report_usage[] = {
		"perf report [<options>]",
		NULL
	};
	struct report report = {
		.tool = {
			.sample = process_sample_event,
			.mmap = perf_event__process_mmap,
			.mmap2 = perf_event__process_mmap2,
			.comm = perf_event__process_comm,
perf tools: Add PERF_RECORD_NAMESPACES to include namespaces related info
Introduce a new option to record PERF_RECORD_NAMESPACES events emitted
by the kernel when fork, clone, setns or unshare are invoked. And update
perf-record documentation with the new option to record namespace
events.
Committer notes:
Combined it with a later patch to allow printing it via 'perf report -D'
and be able to test the feature introduced in this patch. Had to move
here also perf_ns__name(), that was introduced in another later patch.
Also used PRIu64 and PRIx64 to fix the build in some environments wrt:
util/event.c:1129:39: error: format '%lx' expects argument of type 'long unsigned int', but argument 6 has type 'long long unsigned int' [-Werror=format=]
ret += fprintf(fp, "%u/%s: %lu/0x%lx%s", idx
^
Testing it:
# perf record --namespaces -a
^C[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 1.083 MB perf.data (423 samples) ]
#
# perf report -D
<SNIP>
3 2028902078892 0x115140 [0xa0]: PERF_RECORD_NAMESPACES 14783/14783 - nr_namespaces: 7
[0/net: 3/0xf0000081, 1/uts: 3/0xeffffffe, 2/ipc: 3/0xefffffff, 3/pid: 3/0xeffffffc,
4/user: 3/0xeffffffd, 5/mnt: 3/0xf0000000, 6/cgroup: 3/0xeffffffb]
0x1151e0 [0x30]: event: 9
.
. ... raw event: size 48 bytes
. 0000: 09 00 00 00 02 00 30 00 c4 71 82 68 0c 7f 00 00 ......0..q.h....
. 0010: a9 39 00 00 a9 39 00 00 94 28 fe 63 d8 01 00 00 .9...9...(.c....
. 0020: 03 00 00 00 00 00 00 00 ce c4 02 00 00 00 00 00 ................
<SNIP>
NAMESPACES events: 1
<SNIP>
#
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexei Starovoitov <ast@fb.com>
Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Cc: Aravinda Prasad <aravinda@linux.vnet.ibm.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sargun Dhillon <sargun@sargun.me>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/148891930386.25309.18412039920746995488.stgit@hbathini.in.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-03-07 20:41:43 +00:00
			.namespaces = perf_event__process_namespaces,
			.cgroup = perf_event__process_cgroup,
			.exit = perf_event__process_exit,
			.fork = perf_event__process_fork,
			.lost = perf_event__process_lost,
			.read = process_read_event,
			.attr = process_attr,
perf build: Use libtraceevent from the system
Remove the LIBTRACEEVENT_DYNAMIC and LIBTRACEFS_DYNAMIC make command
line variables.
If libtraceevent isn't installed or NO_LIBTRACEEVENT=1 is passed to the
build, don't compile in libtraceevent and libtracefs support.
This also disables CONFIG_TRACE that controls "perf trace".
CONFIG_LIBTRACEEVENT is used to control enablement in Build/Makefiles,
HAVE_LIBTRACEEVENT is used in C code.
Without HAVE_LIBTRACEEVENT tracepoints are disabled and as such the
commands kmem, kwork, lock, sched and timechart are removed. The
majority of commands continue to work including "perf test".
Committer notes:
Fixed up a tools/perf/util/Build reject and added:
#include <traceevent/event-parse.h>
to tools/perf/util/scripting-engines/trace-event-perl.c.
Committer testing:
$ rpm -qi libtraceevent-devel
Name : libtraceevent-devel
Version : 1.5.3
Release : 2.fc36
Architecture: x86_64
Install Date: Mon 25 Jul 2022 03:20:19 PM -03
Group : Unspecified
Size : 27728
License : LGPLv2+ and GPLv2+
Signature : RSA/SHA256, Fri 15 Apr 2022 02:11:58 PM -03, Key ID 999f7cbf38ab71f4
Source RPM : libtraceevent-1.5.3-2.fc36.src.rpm
Build Date : Fri 15 Apr 2022 10:57:01 AM -03
Build Host : buildvm-x86-05.iad2.fedoraproject.org
Packager : Fedora Project
Vendor : Fedora Project
URL : https://git.kernel.org/pub/scm/libs/libtrace/libtraceevent.git/
Bug URL : https://bugz.fedoraproject.org/libtraceevent
Summary : Development headers of libtraceevent
Description :
Development headers of libtraceevent-libs
$
Default build:
$ ldd ~/bin/perf | grep tracee
libtraceevent.so.1 => /lib64/libtraceevent.so.1 (0x00007f1dcaf8f000)
$
# perf trace -e sched:* --max-events 10
0.000 migration/0/17 sched:sched_migrate_task(comm: "", pid: 1603763 (perf), prio: 120, dest_cpu: 1)
0.005 migration/0/17 sched:sched_wake_idle_without_ipi(cpu: 1)
0.011 migration/0/17 sched:sched_switch(prev_comm: "", prev_pid: 17 (migration/0), prev_state: 1, next_comm: "", next_prio: 120)
1.173 :0/0 sched:sched_wakeup(comm: "", pid: 3138 (gnome-terminal-), prio: 120)
1.180 :0/0 sched:sched_switch(prev_comm: "", prev_prio: 120, next_comm: "", next_pid: 3138 (gnome-terminal-), next_prio: 120)
0.156 migration/1/21 sched:sched_migrate_task(comm: "", pid: 1603763 (perf), prio: 120, orig_cpu: 1, dest_cpu: 2)
0.160 migration/1/21 sched:sched_wake_idle_without_ipi(cpu: 2)
0.166 migration/1/21 sched:sched_switch(prev_comm: "", prev_pid: 21 (migration/1), prev_state: 1, next_comm: "", next_prio: 120)
1.183 :0/0 sched:sched_wakeup(comm: "", pid: 1602985 (kworker/u16:0-f), prio: 120, target_cpu: 1)
1.186 :0/0 sched:sched_switch(prev_comm: "", prev_prio: 120, next_comm: "", next_pid: 1602985 (kworker/u16:0-f), next_prio: 120)
#
Had to tweak tools/perf/util/setup.py to make sure the python binding
shared object links with libtraceevent if -DHAVE_LIBTRACEEVENT is
present in CFLAGS.
Building with NO_LIBTRACEEVENT=1 uncovered some more build failures:
- Make building of data-convert-bt.c conditional on CONFIG_LIBTRACEEVENT=y
- perf-$(CONFIG_LIBTRACEEVENT) += scripts/
- bpf_kwork.o needs also to be dependent on CONFIG_LIBTRACEEVENT=y
- The python binding needed some fixups and util/trace-event.c can't be
built and linked with the python binding shared object, so remove it
in tools/perf/util/setup.py and exclude it from the list of
dependencies in the python/perf.so Makefile.perf target.
Building without libtraceevent-devel installed uncovered more build
failures:
- The python binding tools/perf/util/python.c was assuming that
traceevent/parse-events.h was always available, which was the case
when we defaulted to using the in-kernel tools/lib/traceevent/ files,
now we need to enclose it under ifdef HAVE_LIBTRACEEVENT, just like
the other parts of it that deal with tracepoints.
- We have to ifdef the rules in the Build files with
CONFIG_LIBTRACEEVENT=y to build builtin-trace.c and
tools/perf/trace/beauty/ as we only ifdef setting CONFIG_TRACE=y when
setting NO_LIBTRACEEVENT=1 in the make command line, not when we don't
detect libtraceevent-devel installed in the system. Simplification here
to avoid these two ways of disabling builtin-trace.c and not having
CONFIG_TRACE=y when libtraceevent-devel isn't installed is the clean
way.
From Athira:
<quote>
tools/perf/arch/powerpc/util/Build
-perf-y += kvm-stat.o
+perf-$(CONFIG_LIBTRACEEVENT) += kvm-stat.o
</quote>
Then, ditto for arm64 and s390, detected by container cross build tests.
- s/390 uses test__checkevent_tracepoint() that is now only available if
HAVE_LIBTRACEEVENT is defined, enclose the callsite with ifdef HAVE_LIBTRACEEVENT.
Also from Athira:
<quote>
With this change, I could successfully compile in these environments:
- Without libtraceevent-devel installed
- With libtraceevent-devel installed
- With “make NO_LIBTRACEEVENT=1”
</quote>
Then, finally rename CONFIG_TRACEEVENT to CONFIG_LIBTRACEEVENT for
consistency with other libraries detected in tools/perf/.
Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: bpf@vger.kernel.org
Link: http://lore.kernel.org/lkml/20221205225940.3079667-3-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-12-05 22:59:39 +00:00
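As a rough illustration of the two-level guard described above, a sketch of the pattern (the Build line is quoted from the message, the C symbols below are hypothetical) could look like this:

/* In a Build file, an object is only compiled when the library is available:
 *
 *   perf-$(CONFIG_LIBTRACEEVENT) += kvm-stat.o
 *
 * In C code, tracepoint-specific paths are compiled out the same way: */
#ifdef HAVE_LIBTRACEEVENT
#include <traceevent/event-parse.h>

static int handle_tracepoint_sample(void)
{
	/* tracepoint handling that needs libtraceevent */
	return 0;
}
#else
static int handle_tracepoint_sample(void)
{
	/* tracepoints are unsupported in this build */
	return -1;
}
#endif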
#ifdef HAVE_LIBTRACEEVENT
			.tracing_data = perf_event__process_tracing_data,
#endif
			.build_id = perf_event__process_build_id,
			.id_index = perf_event__process_id_index,
			.auxtrace_info = perf_event__process_auxtrace_info,
			.auxtrace = perf_event__process_auxtrace,
			.event_update = perf_event__process_event_update,
			.feature = process_feature_event,
			.ordered_events = true,
			.ordering_requires_timestamps = true,
		},
		.max_stack = PERF_MAX_STACK_DEPTH,
		.pretty_printing_style = "normal",
		.socket_filter = -1,
		.annotation_opts = annotation__default_options,
		.skip_empty = true,
	};
	char *sort_order_help = sort_help("sort by key(s):");
	char *field_order_help = sort_help("output field(s): overhead period sample ");
	const struct option options[] = {
	OPT_STRING('i', "input", &input_name, "file",
		    "input file name"),
	OPT_INCR('v', "verbose", &verbose,
		    "be more verbose (show symbol address, etc)"),
	OPT_BOOLEAN('q', "quiet", &quiet, "Do not show any warnings or messages"),
	OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace,
		    "dump raw trace in ASCII"),
	OPT_BOOLEAN(0, "stats", &report.stats_mode, "Display event stats"),
	OPT_BOOLEAN(0, "tasks", &report.tasks_mode, "Display recorded tasks"),
	OPT_BOOLEAN(0, "mmaps", &report.mmaps_mode, "Display recorded tasks memory maps"),
	OPT_STRING('k', "vmlinux", &symbol_conf.vmlinux_name,
		   "file", "vmlinux pathname"),
	OPT_BOOLEAN(0, "ignore-vmlinux", &symbol_conf.ignore_vmlinux,
		    "don't load vmlinux even if found"),
	OPT_STRING(0, "kallsyms", &symbol_conf.kallsyms_name,
		   "file", "kallsyms pathname"),
	OPT_BOOLEAN('f', "force", &symbol_conf.force, "don't complain, do it"),
	OPT_BOOLEAN('m', "modules", &symbol_conf.use_modules,
		    "load module symbols - WARNING: use only with -k and LIVE kernel"),
	OPT_BOOLEAN('n', "show-nr-samples", &symbol_conf.show_nr_samples,
		    "Show a column with the number of samples"),
	OPT_BOOLEAN('T', "threads", &report.show_threads,
		    "Show per-thread event counters"),
	OPT_STRING(0, "pretty", &report.pretty_printing_style, "key",
		   "pretty printing style key: normal raw"),
#ifdef HAVE_SLANG_SUPPORT
	OPT_BOOLEAN(0, "tui", &report.use_tui, "Use the TUI interface"),
#endif
#ifdef HAVE_GTK2_SUPPORT
	OPT_BOOLEAN(0, "gtk", &report.use_gtk, "Use the GTK2 interface"),
#endif
	OPT_BOOLEAN(0, "stdio", &report.use_stdio,
		    "Use the stdio interface"),
	OPT_BOOLEAN(0, "header", &report.header, "Show data header."),
	OPT_BOOLEAN(0, "header-only", &report.header_only,
		    "Show only data header."),
	OPT_STRING('s', "sort", &sort_order, "key[,key2...]",
		   sort_order_help),
	OPT_STRING('F', "fields", &field_order, "key[,keys...]",
		   field_order_help),
	OPT_BOOLEAN(0, "show-cpu-utilization", &symbol_conf.show_cpu_utilization,
		    "Show sample percentage for different cpu modes"),
	OPT_BOOLEAN_FLAG(0, "showcpuutilization", &symbol_conf.show_cpu_utilization,
		    "Show sample percentage for different cpu modes", PARSE_OPT_HIDDEN),
	OPT_STRING('p', "parent", &parent_pattern, "regex",
		   "regex filter to identify parent, see: '--sort parent'"),
	OPT_BOOLEAN('x', "exclude-other", &symbol_conf.exclude_other,
		    "Only display entries with parent-match"),
	OPT_CALLBACK_DEFAULT('g', "call-graph", &callchain_param,
			     "print_type,threshold[,print_limit],order,sort_key[,branch],value",
			     report_callchain_help, &report_parse_callchain_opt,
			     callchain_default_opt),
	OPT_BOOLEAN(0, "children", &symbol_conf.cumulate_callchain,
		    "Accumulate callchains of children and show total overhead as well. "
		    "Enabled by default, use --no-children to disable."),
perf report: Add --max-stack option to limit callchain stack scan
When callgraph data was included in the perf data file, it may take a
long time to scan all those data and merge them together especially if
the stored callchains are long and the perf data file itself is large,
like a Gbyte or so.
The callchain stack is currently limited to PERF_MAX_STACK_DEPTH (127).
This is a large value. Usually the callgraph data that developers are
most interested in are the first few levels; the rest is usually not
looked at.
This patch adds a new --max-stack option to perf-report to limit the
depth of callchain stack data to look at to reduce the time it takes for
perf-report to finish its processing. It trades the presence of trailing
stack information for faster processing.
The following table shows the elapsed time of doing perf-report on a
perf.data file of size 985,531,828 bytes.
--max_stack Elapsed Time Output data size
----------- ------------ ----------------
not set 88.0s 124,422,651
64 87.5s 116,303,213
32 87.2s 112,023,804
16 86.6s 94,326,380
8 59.9s 33,697,248
4 40.7s 10,116,637
-g none 27.1s 2,555,810
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1382107129-2010-4-git-send-email-Waiman.Long@hp.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2013-10-18 14:38:48 +00:00
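The idea behind the option can be sketched as a simple clamp on how deep the resolver walks a sampled callchain; the function names below are illustrative, not the actual perf internals:

/* Hypothetical sketch: stop resolving callchain entries once the
 * configured limit is reached, ignoring the deeper frames. */
static void resolve_callchain(const unsigned long *ips, unsigned int nr,
			      unsigned int max_stack)
{
	unsigned int depth = nr < max_stack ? nr : max_stack;

	for (unsigned int i = 0; i < depth; i++)
		resolve_frame(ips[i]);	/* map each address to a symbol */
}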
	OPT_INTEGER(0, "max-stack", &report.max_stack,
		    "Set the maximum stack depth when parsing the callchain, "
		    "anything beyond the specified depth will be ignored. "
		    "Default: kernel.perf_event_max_stack or " __stringify(PERF_MAX_STACK_DEPTH)),
	OPT_BOOLEAN('G', "inverted", &report.inverted_callchain,
		    "alias for inverted call graph"),
	OPT_CALLBACK(0, "ignore-callees", NULL, "regex",
		     "ignore callees of these functions in call graphs",
		     report_parse_ignore_callees_opt),
	OPT_STRING('d', "dsos", &symbol_conf.dso_list_str, "dso[,dso...]",
		   "only consider symbols in these dsos"),
	OPT_STRING('c', "comms", &symbol_conf.comm_list_str, "comm[,comm...]",
		   "only consider symbols in these comms"),
	OPT_STRING(0, "pid", &symbol_conf.pid_list_str, "pid[,pid...]",
		   "only consider symbols in these pids"),
	OPT_STRING(0, "tid", &symbol_conf.tid_list_str, "tid[,tid...]",
		   "only consider symbols in these tids"),
	OPT_STRING('S', "symbols", &symbol_conf.sym_list_str, "symbol[,symbol...]",
		   "only consider these symbols"),
	OPT_STRING(0, "symbol-filter", &report.symbol_filter_str, "filter",
		   "only show symbols that (partially) match with this filter"),
	OPT_STRING('w', "column-widths", &symbol_conf.col_width_list_str,
		   "width[,width...]",
		   "don't try to adjust column width, use these fixed values"),
	OPT_STRING_NOEMPTY('t', "field-separator", &symbol_conf.field_sep, "separator",
		   "separator for columns, no spaces will be added between "
		   "columns '.' is reserved."),
	OPT_BOOLEAN('U', "hide-unresolved", &symbol_conf.hide_unresolved,
		    "Only display entries resolved to a symbol"),
	OPT_CALLBACK(0, "symfs", NULL, "directory",
		     "Look for files with symbols relative to this directory",
		     symbol__config_symfs),
	OPT_STRING('C', "cpu", &report.cpu_list, "cpu",
		   "list of cpus to profile"),
	OPT_BOOLEAN('I', "show-info", &report.show_full_info,
perf tools: Make perf.data more self-descriptive (v8)
The goal of this patch is to include more information about the host
environment into the perf.data so it is more self-descriptive. Over time,
profiles are captured on various machines and it becomes hard to track
what was recorded, on what machine and when.
This patch provides a way to solve this by extending the perf.data file
with basic information about the host machine. To add those extensions,
we leverage the feature bits capabilities of the perf.data format. The
change is backward compatible with existing perf.data files.
We define the following useful new extensions:
- HEADER_HOSTNAME: the hostname
- HEADER_OSRELEASE: the kernel release number
- HEADER_ARCH: the hw architecture
- HEADER_CPUDESC: generic CPU description
- HEADER_NRCPUS: number of online/avail cpus
- HEADER_CMDLINE: perf command line
- HEADER_VERSION: perf version
- HEADER_TOPOLOGY: cpu topology
- HEADER_EVENT_DESC: full event description (attrs)
- HEADER_CPUID: easy-to-parse low level CPU identification
The small granularity for the entries is to make it easier to extend
without breaking backward compatibility. Many entries are provided as
ASCII strings.
Perf report/script have been modified to print the basic information as
easy-to-parse ASCII strings. Extended information about CPU and NUMA
topology may be requested with the -I option.
Thanks to David Ahern for reviewing and testing the many versions of
this patch.
$ perf report --stdio
# ========
# captured on : Mon Sep 26 15:22:14 2011
# hostname : quad
# os release : 3.1.0-rc4-tip
# perf version : 3.1.0-rc4
# arch : x86_64
# nrcpus online : 4
# nrcpus avail : 4
# cpudesc : Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
# cpuid : GenuineIntel,6,15,11
# total memory : 8105360 kB
# cmdline : /home/eranian/perfmon/official/tip/build/tools/perf/perf record date
# event : name = cycles, type = 0, config = 0x0, config1 = 0x0, config2 = 0x0, excl_usr = 0, excl_kern = 0, id = { 29, 30, 31,
# HEADER_CPU_TOPOLOGY info available, use -I to display
# HEADER_NUMA_TOPOLOGY info available, use -I to display
# ========
#
...
$ perf report --stdio -I
# ========
# captured on : Mon Sep 26 15:22:14 2011
# hostname : quad
# os release : 3.1.0-rc4-tip
# perf version : 3.1.0-rc4
# arch : x86_64
# nrcpus online : 4
# nrcpus avail : 4
# cpudesc : Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
# cpuid : GenuineIntel,6,15,11
# total memory : 8105360 kB
# cmdline : /home/eranian/perfmon/official/tip/build/tools/perf/perf record date
# event : name = cycles, type = 0, config = 0x0, config1 = 0x0, config2 = 0x0, excl_usr = 0, excl_kern = 0, id = { 29, 30, 31,
# sibling cores : 0-3
# sibling threads : 0
# sibling threads : 1
# sibling threads : 2
# sibling threads : 3
# node0 meminfo : total = 8320608 kB, free = 7571024 kB
# node0 cpu list : 0-3
# ========
#
...
Reviewed-by: David Ahern <dsahern@gmail.com>
Tested-by: David Ahern <dsahern@gmail.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/20110930134040.GA5575@quad
Signed-off-by: Stephane Eranian <eranian@google.com>
[ committer notes: Use --show-info in the tools as was in the docs, rename
perf_header_fprintf_info to perf_file_section__fprintf_info, fixup
conflict with f69b64f7 "perf: Support setting the disassembler style" ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2011-09-30 13:40:40 +00:00
		    "Display extended information about perf.data file"),
	OPT_BOOLEAN(0, "source", &report.annotation_opts.annotate_src,
		    "Interleave source code with assembly code (default)"),
	OPT_BOOLEAN(0, "asm-raw", &report.annotation_opts.show_asm_raw,
		    "Display raw encoding of assembly instructions (default)"),
	OPT_STRING('M', "disassembler-style", &report.annotation_opts.disassembler_style, "disassembler style",
		   "Specify disassembler style (e.g. -M intel for intel syntax)"),
	OPT_STRING(0, "prefix", &report.annotation_opts.prefix, "prefix",
		   "Add prefix to source file path names in programs (with --prefix-strip)"),
	OPT_STRING(0, "prefix-strip", &report.annotation_opts.prefix_strip, "N",
		   "Strip first N entries of source file path name in programs (with --prefix)"),
	OPT_BOOLEAN(0, "show-total-period", &symbol_conf.show_total_period,
		    "Show a column with the sum of periods"),
	OPT_BOOLEAN_SET(0, "group", &symbol_conf.event_group, &report.group_set,
		    "Show event group information together"),
perf report: Allow specifying event to be used as sort key in --group output
When performing "perf report --group", it shows the event group
information together. By default, the output is sorted by the first
event in the group.
It would be nice for the user to select any event for sorting. This patch
introduces a new option "--group-sort-idx" to sort the output by the
event at index n in the event group.
For example,
Before:
# perf report --group --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 12K of events 'cpu/instructions,period=2000003/, cpu/cpu-cycles,period=200003/, BR_MISP_RETIRED.ALL_BRANCHES:pp, cpu/event=0xc0,umask=1,cmask=1,
# Event count (approx.): 6451235635
#
# Overhead Command Shared Object Symbol
# ................................ ......... ....................... ...................................
#
92.19% 98.68% 0.00% 93.30% mgen mgen [.] LOOP1
3.12% 0.29% 0.00% 0.16% gsd-color libglib-2.0.so.0.5600.4 [.] 0x0000000000049515
1.56% 0.03% 0.00% 0.04% gsd-color libglib-2.0.so.0.5600.4 [.] 0x00000000000494b7
1.56% 0.01% 0.00% 0.00% gsd-color libglib-2.0.so.0.5600.4 [.] 0x00000000000494ce
1.56% 0.00% 0.00% 0.00% mgen [kernel.kallsyms] [k] task_tick_fair
0.00% 0.15% 0.00% 0.04% perf [kernel.kallsyms] [k] smp_call_function_single
0.00% 0.13% 0.00% 6.08% swapper [kernel.kallsyms] [k] intel_idle
0.00% 0.03% 0.00% 0.00% gsd-color libglib-2.0.so.0.5600.4 [.] g_main_context_check
0.00% 0.03% 0.00% 0.00% swapper [kernel.kallsyms] [k] apic_timer_interrupt
...
After:
# perf report --group --stdio --group-sort-idx 3
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 12K of events 'cpu/instructions,period=2000003/, cpu/cpu-cycles,period=200003/, BR_MISP_RETIRED.ALL_BRANCHES:pp, cpu/event=0xc0,umask=1,cmask=1,
# Event count (approx.): 6451235635
#
# Overhead Command Shared Object Symbol
# ................................ ......... ....................... ...................................
#
92.19% 98.68% 0.00% 93.30% mgen mgen [.] LOOP1
0.00% 0.13% 0.00% 6.08% swapper [kernel.kallsyms] [k] intel_idle
3.12% 0.29% 0.00% 0.16% gsd-color libglib-2.0.so.0.5600.4 [.] 0x0000000000049515
0.00% 0.00% 0.00% 0.06% swapper [kernel.kallsyms] [k] hrtimer_start_range_ns
1.56% 0.03% 0.00% 0.04% gsd-color libglib-2.0.so.0.5600.4 [.] 0x00000000000494b7
0.00% 0.15% 0.00% 0.04% perf [kernel.kallsyms] [k] smp_call_function_single
0.00% 0.00% 0.00% 0.02% mgen [kernel.kallsyms] [k] update_curr
0.00% 0.00% 0.00% 0.02% mgen [kernel.kallsyms] [k] apic_timer_interrupt
0.00% 0.00% 0.00% 0.02% mgen [kernel.kallsyms] [k] native_apic_msr_eoi_write
0.00% 0.00% 0.00% 0.02% mgen [kernel.kallsyms] [k] __update_load_avg_se
0.00% 0.00% 0.00% 0.02% mgen [kernel.kallsyms] [k] scheduler_tick
Now the output is sorted by the fourth event in group.
v7:
---
Rebase to latest perf/core, no other change.
v4:
---
1. Update Documentation/perf-report.txt to mention
'--group-sort-idx' support multiple groups with different
amount of events and it should be used on grouped events.
2. Update __hpp__group_sort_idx(), just return when the
idx is out of limit.
3. Return failure on symbol_conf.group_sort_idx && !session->evlist->nr_groups.
So now we don't need to use together with --group.
v3:
---
Refine the code in __hpp__group_sort_idx().
Before:
for (i = 1; i < nr_members; i++) {
if (i == idx) {
ret = field_cmp(fields_a[i], fields_b[i]);
if (ret)
goto out;
}
}
After:
if (idx >= 1 && idx < nr_members) {
ret = field_cmp(fields_a[idx], fields_b[idx]);
if (ret)
goto out;
}
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200220013616.19916-2-yao.jin@linux.intel.com
[ Renamed pair_fields_alloc() to hist_entry__new_pair() and combined decl + assignment of vars ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-02-20 01:36:14 +00:00
	OPT_INTEGER(0, "group-sort-idx", &symbol_conf.group_sort_idx,
		    "Sort the output by the event at the index n in group. "
		    "If n is invalid, sort by the first event. "
		    "WARNING: should be used on grouped events."),
	OPT_CALLBACK_NOOPT('b', "branch-stack", &branch_mode, "",
		    "use branch records for per branch histogram filling",
		    parse_branch_mode),
	OPT_BOOLEAN(0, "branch-history", &branch_call_mode,
		    "add last branch records to call history"),
	OPT_STRING(0, "objdump", &report.annotation_opts.objdump_path, "path",
		   "objdump binary to use for disassembly and annotations"),
	OPT_BOOLEAN(0, "demangle", &symbol_conf.demangle,
		    "Disable symbol demangling"),
	OPT_BOOLEAN(0, "demangle-kernel", &symbol_conf.demangle_kernel,
		    "Enable kernel symbol demangling"),
	OPT_BOOLEAN(0, "mem-mode", &report.mem_mode, "mem access profile"),
	OPT_INTEGER(0, "samples", &symbol_conf.res_sample,
		    "Number of samples to save per histogram entry for individual browsing"),
	OPT_CALLBACK(0, "percent-limit", &report, "percent",
		     "Don't show entries under that percent", parse_percent_limit),
	OPT_CALLBACK(0, "percentage", NULL, "relative|absolute",
		     "how to display percentage of filtered entries", parse_filter_percentage),
	OPT_CALLBACK_OPTARG(0, "itrace", &itrace_synth_opts, NULL, "opts",
			    "Instruction Tracing options\n" ITRACE_HELP,
			    itrace_parse_synth_opts),
	OPT_BOOLEAN(0, "full-source-path", &srcline_full_filename,
		    "Show full source file name path for source lines"),
	OPT_BOOLEAN(0, "show-ref-call-graph", &symbol_conf.show_ref_callgraph,
		    "Show callgraph from reference event"),
	OPT_BOOLEAN(0, "stitch-lbr", &report.stitch_lbr,
		    "Enable LBR callgraph stitching approach"),
	OPT_INTEGER(0, "socket-filter", &report.socket_filter,
		    "only show processor socket that match with this filter"),
	OPT_BOOLEAN(0, "raw-trace", &symbol_conf.raw_trace,
		    "Show raw trace event output (do not use print fmt or plugins)"),
	OPT_BOOLEAN(0, "hierarchy", &symbol_conf.report_hierarchy,
		    "Show entries in a hierarchy"),
	OPT_CALLBACK_DEFAULT(0, "stdio-color", NULL, "mode",
		       "'always' (default), 'never' or 'auto' only applicable to --stdio mode",
		       stdio__config_color, "always"),
perf report: Add option to specify time window of interest
Add an option to allow the user to control the analysis window, e.g. collect
data for a time window and analyze a segment of interest within that window.
Committer notes:
Testing it:
Using the perf.data file captured via 'perf kmem record':
# perf report --header-only
# ========
# captured on: Tue Nov 29 16:01:53 2016
# hostname : jouet
# os release : 4.8.8-300.fc25.x86_64
# perf version : 4.9.rc6.g5a6aca
# arch : x86_64
# nrcpus online : 4
# nrcpus avail : 4
# cpudesc : Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz
# cpuid : GenuineIntel,6,61,4
# total memory : 20254660 kB
# cmdline : /home/acme/bin/perf kmem record usleep 1
# event : name = kmem:kmalloc, , id = { 931980, 931981, 931982, 931983 }, type = 2, size = 112, config = 0x1b9, { sample_period, sample_freq } = 1, sample_typ
# event : name = kmem:kmalloc_node, , id = { 931984, 931985, 931986, 931987 }, type = 2, size = 112, config = 0x1b7, { sample_period, sample_freq } = 1, sampl
# event : name = kmem:kfree, , id = { 931988, 931989, 931990, 931991 }, type = 2, size = 112, config = 0x1b5, { sample_period, sample_freq } = 1, sample_type
# event : name = kmem:kmem_cache_alloc, , id = { 931992, 931993, 931994, 931995 }, type = 2, size = 112, config = 0x1b8, { sample_period, sample_freq } = 1, s
# event : name = kmem:kmem_cache_alloc_node, , id = { 931996, 931997, 931998, 931999 }, type = 2, size = 112, config = 0x1b6, { sample_period, sample_freq } =
# event : name = kmem:kmem_cache_free, , id = { 932000, 932001, 932002, 932003 }, type = 2, size = 112, config = 0x1b4, { sample_period, sample_freq } = 1, sa
# HEADER_CPU_TOPOLOGY info available, use -I to display
# HEADER_NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: cpu = 4, intel_pt = 7, intel_bts = 6, uncore_arb = 13, cstate_pkg = 15, breakpoint = 5, uncore_cbox_1 = 12, power = 9, software = 1, uncore_im
# HEADER_CACHE info available, use -I to display
# missing features: HEADER_BRANCH_STACK HEADER_GROUP_DESC HEADER_AUXTRACE HEADER_STAT
# ========
#
# # Looking at just the histogram entries for the first event:
#
# perf report | head -33
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 40 of event 'kmem:kmalloc'
# Event count (approx.): 40
#
# Overhead Trace output
# ........ ...............................................................................................................
#
37.50% call_site=ffffffffb91ad3c7 ptr=0xffff88895fc05000 bytes_req=4096 bytes_alloc=4096 gfp_flags=GFP_KERNEL
10.00% call_site=ffffffffb9258416 ptr=0xffff888a1dc61f00 bytes_req=240 bytes_alloc=256 gfp_flags=GFP_KERNEL|__GFP_ZERO
7.50% call_site=ffffffffb9258416 ptr=0xffff888a2640ac00 bytes_req=240 bytes_alloc=256 gfp_flags=GFP_KERNEL|__GFP_ZERO
2.50% call_site=ffffffffb92759ba ptr=0xffff888a26776000 bytes_req=4096 bytes_alloc=4096 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb9276864 ptr=0xffff8886f6b82600 bytes_req=136 bytes_alloc=192 gfp_flags=GFP_KERNEL|__GFP_ZERO
2.50% call_site=ffffffffb9276903 ptr=0xffff888aefcf0460 bytes_req=32 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad0ce ptr=0xffff888756c98a00 bytes_req=392 bytes_alloc=512 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad0ce ptr=0xffff888756c9ba00 bytes_req=504 bytes_alloc=512 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad301 ptr=0xffff888a31747600 bytes_req=128 bytes_alloc=128 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad511 ptr=0xffff888a9d26a2a0 bytes_req=28 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c11a0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c12c0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c1540 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c15a0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c15e0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c16e0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c1c20 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff888a9d26a2a0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb9373e66 ptr=0xffff8889f1931240 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_ATOMIC|__GFP_ZERO
2.50% call_site=ffffffffb9373e66 ptr=0xffff8889f1931980 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_ATOMIC|__GFP_ZERO
2.50% call_site=ffffffffb9373e66 ptr=0xffff8889f1931a00 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_ATOMIC|__GFP_ZERO
#
# # And then limiting using the example for 'perf kmem stat --time' used
# # in the previous changeset committer note we see that there were no
# # kmem:kmalloc in that last part of the file, but there were some
# # kmem:kmem_cache_alloc ones:
#
# perf report --time 20119.782088, --stdio
#
# Total Lost Samples: 0
#
# Samples: 0 of event 'kmem:kmalloc'
# Event count (approx.): 0
#
# Overhead Trace output
# ........ ............
#
# Samples: 0 of event 'kmem:kmalloc_node'
# Event count (approx.): 0
#
# Overhead Trace output
# ........ ............
#
# Samples: 0 of event 'kmem:kfree'
# Event count (approx.): 0
#
# Overhead Trace output
# ........ ............
#
# Samples: 8 of event 'kmem:kmem_cache_alloc'
# Event count (approx.): 8
#
# Overhead Trace output
# ........ ..................................................................................................................
#
75.00% call_site=ffffffffb9333b42 ptr=0xffff888bdf1a39c0 bytes_req=48 bytes_alloc=48 gfp_flags=GFP_NOFS|__GFP_ZERO
12.50% call_site=ffffffffb90ad33a ptr=0xffff8889f071f6e0 bytes_req=160 bytes_alloc=160 gfp_flags=GFP_ATOMIC|__GFP_NOTRACK
12.50% call_site=ffffffffb9287cc1 ptr=0xffff8889b12722d8 bytes_req=104 bytes_alloc=104 gfp_flags=GFP_NOFS|__GFP_ZERO
#
Signed-off-by: David Ahern <dsahern@gmail.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1480439746-42695-7-git-send-email-dsahern@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-29 17:15:46 +00:00
	OPT_STRING(0, "time", &report.time_str, "str",
		   "Time span of interest (start,stop)"),
	OPT_BOOLEAN(0, "inline", &symbol_conf.inline_name,
		    "Show inline function"),
	OPT_CALLBACK(0, "percent-type", &report.annotation_opts, "local-period",
		     "Set percent type local/global-period/hits",
		     annotate_parse_percent_type),
	OPT_BOOLEAN(0, "ns", &symbol_conf.nanosecs, "Show times in nanosecs"),
	OPT_CALLBACK(0, "time-quantum", &symbol_conf.time_quantum, "time (ms|us|ns|s)",
		     "Set time quantum for time sort key (default 100ms)",
		     parse_time_quantum),
perf report: Add --switch-on/--switch-off events
Since 'perf top' shares the histogram browser with 'perf report', the
same explanation in the previous cset applies.
An additional example uses a pair of SDT events available for systemtap:
# perf probe --exec=/usr/bin/stap '%*:*'
Added new events:
sdt_stap:benchmark__thread__start (on %* in /usr/bin/stap)
sdt_stap:benchmark (on %* in /usr/bin/stap)
sdt_stap:benchmark__thread__end (on %* in /usr/bin/stap)
sdt_stap:pass6__start (on %* in /usr/bin/stap)
sdt_stap:pass6__end (on %* in /usr/bin/stap)
sdt_stap:pass5__start (on %* in /usr/bin/stap)
sdt_stap:pass5__end (on %* in /usr/bin/stap)
sdt_stap:pass0__start (on %* in /usr/bin/stap)
sdt_stap:pass0__end (on %* in /usr/bin/stap)
sdt_stap:pass1a__start (on %* in /usr/bin/stap)
sdt_stap:pass1b__start (on %* in /usr/bin/stap)
sdt_stap:pass1__end (on %* in /usr/bin/stap)
sdt_stap:pass2__start (on %* in /usr/bin/stap)
sdt_stap:pass2__end (on %* in /usr/bin/stap)
sdt_stap:pass3__start (on %* in /usr/bin/stap)
sdt_stap:pass3__end (on %* in /usr/bin/stap)
sdt_stap:pass4__start (on %* in /usr/bin/stap)
sdt_stap:pass4__end (on %* in /usr/bin/stap)
sdt_stap:benchmark__start (on %* in /usr/bin/stap)
sdt_stap:benchmark__end (on %* in /usr/bin/stap)
sdt_stap:cache__get (on %* in /usr/bin/stap)
sdt_stap:cache__clean (on %* in /usr/bin/stap)
sdt_stap:cache__add__module (on %* in /usr/bin/stap)
sdt_stap:cache__add__source (on %* in /usr/bin/stap)
sdt_stap:stap_system__complete (on %* in /usr/bin/stap)
sdt_stap:stap_system__start (on %* in /usr/bin/stap)
sdt_stap:stap_system__spawn (on %* in /usr/bin/stap)
sdt_stap:stap_system__fork (on %* in /usr/bin/stap)
sdt_stap:intern_string (on %* in /usr/bin/stap)
sdt_stap:client__start (on %* in /usr/bin/stap)
sdt_stap:client__end (on %* in /usr/bin/stap)
You can now use it in all perf tools, such as:
perf record -e sdt_stap:client__end -aR sleep 1
#
From these we use the two below to run systemtap's test suite:
# perf record -e sdt_stap:pass2__*,cycles:P make installcheck > /dev/null
^C[ perf record: Woken up 8 times to write data ]
[ perf record: Captured and wrote 2.691 MB perf.data (39638 samples) ]
Terminated
# perf script | grep sdt_stap
stap 28979 [000] 19424.302660: sdt_stap:pass2__start: (561b9a537de3) arg1=140730364262544
stap 28979 [000] 19424.333083: sdt_stap:pass2__end: (561b9a53a9e1) arg1=140730364262544
stap 29045 [006] 19424.933460: sdt_stap:pass2__start: (563edddcede3) arg1=140722674883152
stap 29045 [006] 19424.963794: sdt_stap:pass2__end: (563edddd19e1) arg1=140722674883152
# perf script | grep cycles | wc -l
39634
#
Looking at the whole perf.data file:
[root@quaco testsuite]# perf report | grep cycles:P -A25
# Samples: 39K of event 'cycles:P'
# Event count (approx.): 34044267368
#
# Overhead Command Shared Object Symbol
# ........ ....... .................... ................................
#
3.50% cc1 cc1 [.] ht_lookup_with_hash
3.04% cc1 cc1 [.] _cpp_lex_token
2.11% cc1 cc1 [.] ggc_internal_alloc
1.83% cc1 cc1 [.] cpp_get_token_with_location
1.68% cc1 libc-2.29.so [.] _int_malloc
1.41% cc1 cc1 [.] linemap_position_for_column
1.25% cc1 cc1 [.] ggc_internal_cleared_alloc
1.20% cc1 cc1 [.] c_lex_with_flags
1.18% cc1 cc1 [.] get_combined_adhoc_loc
1.05% cc1 libc-2.29.so [.] malloc
1.01% cc1 libc-2.29.so [.] _int_free
0.96% stap stap [.] std::_Hashtable<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::__detail::_Identity, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, stringtable_hash, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, true, true> >::_M_insert<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__detail::_AllocNode<std::allocator<std::__detail::_Hash_node<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, true> > > >
0.78% stap stap [.] lexer::scan
0.74% cc1 cc1 [.] _cpp_lex_direct
0.70% cc1 cc1 [.] pop_scope
0.70% cc1 cc1 [.] c_parser_declspecs
0.69% stap libc-2.29.so [.] _int_malloc
0.68% cc1 cc1 [.] htab_find_slot
0.68% cc1 [kernel.vmlinux] [k] prepare_exit_to_usermode
0.64% cc1 [kernel.vmlinux] [k] clear_page_erms
[root@quaco testsuite]#
And now only what happens in slices demarcated by those start/end SDT
events:
[root@quaco testsuite]# perf report --switch-on=sdt_stap:pass2__start --switch-off=sdt_stap:pass2__end | grep cycles:P -A100
# Samples: 240 of event 'cycles:P'
# Event count (approx.): 206491934
#
# Overhead Command Shared Object Symbol
# ........ ....... ................... ................................................
#
38.99% stap stap [.] systemtap_session::register_library_aliases
19.47% stap stap [.] match_key::operator<
15.01% stap libc-2.29.so [.] __memcmp_avx2_movbe
5.19% stap libc-2.29.so [.] _int_malloc
2.50% stap libstdc++.so.6.0.26 [.] std::_Rb_tree_insert_and_rebalance
2.30% stap stap [.] match_node::build_no_more
2.07% stap libc-2.29.so [.] malloc
1.66% stap stap [.] std::_Rb_tree<match_key, std::pair<match_key const, match_node*>, std::_Select1st<std::pair<match_key const, match_node*> >, std::less<match_key>, std::allocator<std::pair<match_key const, match_node*> > >::find
1.66% stap stap [.] match_node::bind
1.58% stap [kernel.vmlinux] [k] prepare_exit_to_usermode
1.17% stap [kernel.vmlinux] [k] native_irq_return_iret
0.87% stap stap [.] 0x0000000000032ec4
0.77% stap libstdc++.so.6.0.26 [.] std::_Rb_tree_increment
0.47% stap stap [.] std::vector<derived_probe_builder*, std::allocator<derived_probe_builder*> >::_M_realloc_insert<derived_probe_builder* const&>
0.47% stap [kernel.vmlinux] [k] get_page_from_freelist
0.47% stap [kernel.vmlinux] [k] swapgs_restore_regs_and_return_to_usermode
0.47% stap [kernel.vmlinux] [k] do_user_addr_fault
0.46% stap [kernel.vmlinux] [k] __pagevec_lru_add_fn
0.46% stap stap [.] std::_Rb_tree<match_key, std::pair<match_key const, match_node*>, std::_Select1st<std::pair<match_key const, match_node*> >, std::less<match_key>, std::allocator<std::pair<match_key const, match_node*> > >::_M_emplace_unique<std::pair<match_key, match_node*> >
0.42% stap libstdc++.so.6.0.26 [.] 0x00000000000c18fa
0.40% stap [kernel.vmlinux] [k] interrupt_entry
0.40% stap [kernel.vmlinux] [k] update_load_avg
0.40% stap [kernel.vmlinux] [k] __intel_pmu_disable_all
0.40% stap [kernel.vmlinux] [k] clear_page_erms
0.39% stap [kernel.vmlinux] [k] __mod_node_page_state
0.39% stap [kernel.vmlinux] [k] error_entry
0.39% stap [kernel.vmlinux] [k] sync_regs
0.38% stap [kernel.vmlinux] [k] __handle_mm_fault
0.38% stap stap [.] derive_probes
#
# (Tip: System-wide collection from all CPUs: perf record -a)
#
[root@quaco testsuite]#
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: William Cohen <wcohen@redhat.com>
Link: https://lkml.kernel.org/n/tip-408hvumcnyn93a0auihnawew@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-08-15 21:18:58 +00:00
	OPTS_EVSWITCH(&report.evswitch),
perf report: Sort by sampled cycles percent per block for stdio
It would be useful to sort all blocks by the sampled
cycles percent per block. This helps to concentrate on the globally
hottest blocks.
This patch implements a new option "--total-cycles" which sorts all
blocks by 'Sampled Cycles%'. The 'Sampled Cycles%' is the percent:
percent = block sampled cycles aggregation / total sampled cycles
Note that, this patch only supports "--stdio" mode.
For example,
# perf record -b ./div
# perf report --total-cycles --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 2M of event 'cycles'
# Event count (approx.): 2753248
#
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
# ............... .............. ........... .......... ................................................ .................
#
26.04% 2.8M 0.40% 18 [div.c:42 -> div.c:39] div
15.17% 1.2M 0.16% 7 [random_r.c:357 -> random_r.c:380] libc-2.27.so
5.11% 402.0K 0.04% 2 [div.c:27 -> div.c:28] div
4.87% 381.6K 0.04% 2 [random.c:288 -> random.c:291] libc-2.27.so
4.53% 381.0K 0.04% 2 [div.c:40 -> div.c:40] div
3.85% 300.9K 0.02% 1 [div.c:22 -> div.c:25] div
3.08% 241.1K 0.02% 1 [rand.c:26 -> rand.c:27] libc-2.27.so
3.06% 240.0K 0.02% 1 [random.c:291 -> random.c:291] libc-2.27.so
2.78% 215.7K 0.02% 1 [random.c:298 -> random.c:298] libc-2.27.so
2.52% 198.3K 0.02% 1 [random.c:293 -> random.c:293] libc-2.27.so
2.36% 184.8K 0.02% 1 [rand.c:28 -> rand.c:28] libc-2.27.so
2.33% 180.5K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so
2.28% 176.7K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so
2.20% 168.8K 0.02% 1 [rand@plt+0 -> rand@plt+0] div
1.98% 158.2K 0.02% 1 [random_r.c:388 -> random_r.c:388] libc-2.27.so
1.57% 123.3K 0.02% 1 [div.c:42 -> div.c:44] div
1.44% 116.0K 0.42% 19 [random_r.c:357 -> random_r.c:394] libc-2.27.so
0.25% 182.5K 0.02% 1 [random_r.c:388 -> random_r.c:391] libc-2.27.so
0.00% 48 1.07% 48 [x86_pmu_enable+284 -> x86_pmu_enable+298] [kernel.kallsyms]
0.00% 74 1.64% 74 [vm_mmap_pgoff+0 -> vm_mmap_pgoff+92] [kernel.kallsyms]
0.00% 73 1.62% 73 [vm_mmap+0 -> vm_mmap+48] [kernel.kallsyms]
0.00% 63 0.69% 31 [up_write+0 -> up_write+34] [kernel.kallsyms]
0.00% 13 0.29% 13 [setup_arg_pages+396 -> setup_arg_pages+413] [kernel.kallsyms]
0.00% 3 0.07% 3 [setup_arg_pages+418 -> setup_arg_pages+450] [kernel.kallsyms]
0.00% 616 6.84% 308 [security_mmap_file+0 -> security_mmap_file+72] [kernel.kallsyms]
0.00% 23 0.51% 23 [security_mmap_file+77 -> security_mmap_file+87] [kernel.kallsyms]
0.00% 4 0.02% 1 [sched_clock+0 -> sched_clock+4] [kernel.kallsyms]
0.00% 4 0.02% 1 [sched_clock+9 -> sched_clock+12] [kernel.kallsyms]
0.00% 1 0.02% 1 [rcu_nmi_exit+0 -> rcu_nmi_exit+9] [kernel.kallsyms]
Committer testing:
This should provide material for hours of endless joy, both from looking
for suspicious things in the implementation of this patch, such as the
top one:
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux]
As well from things that look legit:
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
0.16% 123.0K 0.60% 4.7K [nospec-branch.h:265 -> nospec-branch.h:278] [kernel.vmlinux]
:-)
Very short system wide taken branches session:
# perf record -h -b
Usage: perf record [<options>] [<command>]
or: perf record [<options>] -- <command> [<options>]
-b, --branch-any sample any taken branches
#
# perf record -b
^C[ perf record: Woken up 595 times to write data ]
[ perf record: Captured and wrote 156.672 MB perf.data (196873 samples) ]
#
# perf evlist -v
cycles: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|CPU|PERIOD|BRANCH_STACK, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1, branch_sample_type: ANY
#
# perf report --total-cycles --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 6M of event 'cycles'
# Event count (approx.): 6299936
#
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
# ............... .............. ........... .......... ...................................................................... ....................
#
2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux]
1.75% 1.3M 8.34% 65.5K [memset-vec-unaligned-erms.S:147 -> memset-vec-unaligned-erms.S:151] libc-2.29.so
0.72% 544.5K 0.03% 230 [entry_64.S:657 -> entry_64.S:662] [kernel.vmlinux]
0.56% 541.8K 0.09% 672 [compiler.h:199 -> common.c:300] [kernel.vmlinux]
0.39% 293.2K 0.01% 104 [list_debug.c:43 -> list_debug.c:61] [kernel.vmlinux]
0.36% 278.6K 0.03% 272 [entry_64.S:1289 -> entry_64.S:1308] [kernel.vmlinux]
0.30% 260.8K 0.07% 564 [clear_page_64.S:47 -> clear_page_64.S:50] [kernel.vmlinux]
0.28% 215.3K 0.05% 369 [traps.c:623 -> traps.c:628] [kernel.vmlinux]
0.23% 178.1K 0.04% 278 [entry_64.S:271 -> entry_64.S:275] [kernel.vmlinux]
0.20% 152.6K 0.09% 706 [paravirt.c:177 -> paravirt.c:179] [kernel.vmlinux]
0.20% 155.8K 0.05% 373 [entry_64.S:153 -> entry_64.S:175] [kernel.vmlinux]
0.18% 136.6K 0.03% 222 [msr.h:105 -> msr.h:166] [kernel.vmlinux]
0.16% 123.0K 0.60% 4.7K [nospec-branch.h:265 -> nospec-branch.h:278] [kernel.vmlinux]
0.16% 118.3K 0.01% 44 [entry_64.S:632 -> entry_64.S:657] [kernel.vmlinux]
0.14% 104.5K 0.00% 28 [rwsem.c:1541 -> rwsem.c:1544] [kernel.vmlinux]
0.13% 99.2K 0.01% 53 [spinlock.c:150 -> spinlock.c:152] [kernel.vmlinux]
0.13% 95.5K 0.00% 35 [swap.c:456 -> swap.c:471] [kernel.vmlinux]
0.12% 96.2K 0.05% 407 [copy_user_64.S:175 -> copy_user_64.S:209] [kernel.vmlinux]
0.11% 85.9K 0.00% 31 [swap.c:400 -> page-flags.h:188] [kernel.vmlinux]
0.10% 73.0K 0.01% 52 [paravirt.h:763 -> list.h:131] [kernel.vmlinux]
0.07% 56.2K 0.03% 214 [filemap.c:1524 -> filemap.c:1557] [kernel.vmlinux]
0.07% 54.2K 0.02% 145 [memory.c:1032 -> memory.c:1049] [kernel.vmlinux]
0.07% 50.3K 0.00% 39 [mmzone.c:49 -> mmzone.c:69] [kernel.vmlinux]
0.06% 48.3K 0.01% 40 [paravirt.h:768 -> page_alloc.c:3304] [kernel.vmlinux]
0.06% 46.7K 0.02% 155 [memory.c:1032 -> memory.c:1056] [kernel.vmlinux]
0.06% 46.9K 0.01% 103 [swap.c:867 -> swap.c:902] [kernel.vmlinux]
0.06% 47.8K 0.00% 34 [entry_64.S:1201 -> entry_64.S:1202] [kernel.vmlinux]
-----------------------------------------------------------
v7:
---
Use use_browser in report__browse_block_hists for supporting
stdio and potential tui mode.
v6:
---
Create report__browse_block_hists in block-info.c (code is
moved from builtin-report.c). It's called from
perf_evlist__tty_browse_hists.
v5:
---
1. Move all block functions to block-info.c
2. Move the code of setting ms in block hist_entry to
other patch.
v4:
---
1. Use new option '--total-cycles' to replace
'-s total_cycles' in v3.
2. Move block info collection out of block info
printing.
v3:
---
1. Use common function block_info__process_sym to
process the blocks per symbol.
2. Remove the nasty hack for skipping calculation
of column length
3. Some minor cleanup
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20191107074719.26139-6-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-07 07:47:17 +00:00
|
|
|
OPT_BOOLEAN(0, "total-cycles", &report.total_cycles_mode,
|
|
|
|
"Sort all blocks by 'Sampled Cycles%'"),
|
2021-02-19 07:00:05 +00:00
|
|
|
OPT_BOOLEAN(0, "disable-order", &report.disable_order,
|
|
|
|
"Disable raw trace ordering"),
|
2021-04-27 01:37:15 +00:00
|
|
|
OPT_BOOLEAN(0, "skip-empty", &report.skip_empty,
|
|
|
|
"Do not display empty (or dummy) events in the output"),
|
2009-05-26 07:17:18 +00:00
|
|
|
OPT_END()
|
2011-11-25 10:19:45 +00:00
|
|
|
};
|
2017-01-23 21:07:59 +00:00
|
|
|
struct perf_data data = {
|
2013-10-15 14:27:32 +00:00
|
|
|
.mode = PERF_DATA_MODE_READ,
|
|
|
|
};
|
2014-10-09 19:16:00 +00:00
|
|
|
int ret = hists__init();
|
2018-11-30 13:54:56 +00:00
|
|
|
char sort_tmp[128];
|
2014-10-09 19:16:00 +00:00
|
|
|
|
|
|
|
if (ret < 0)
|
2021-07-15 16:07:14 +00:00
|
|
|
goto exit;
|
2009-05-26 07:17:18 +00:00
|
|
|
|
2017-01-24 16:44:10 +00:00
|
|
|
ret = perf_config(report__config, &report);
|
|
|
|
if (ret)
|
2021-07-15 16:07:14 +00:00
|
|
|
goto exit;
|
2013-01-22 09:09:46 +00:00
|
|
|
|
2009-12-15 22:04:40 +00:00
|
|
|
argc = parse_options(argc, argv, options, report_usage, 0);
|
2015-12-10 03:00:56 +00:00
|
|
|
if (argc) {
|
|
|
|
/*
|
|
|
|
* Special case: if there's an argument left then assume that
|
|
|
|
* it's a symbol filter:
|
|
|
|
*/
|
|
|
|
if (argc > 1)
|
|
|
|
usage_with_options(report_usage, options);
|
|
|
|
|
|
|
|
report.symbol_filter_str = argv[0];
|
|
|
|
}
|
2009-12-15 22:04:40 +00:00
|
|
|
|
2021-07-15 16:07:14 +00:00
|
|
|
if (annotate_check_args(&report.annotation_opts) < 0) {
|
|
|
|
ret = -EINVAL;
|
|
|
|
goto exit;
|
|
|
|
}
|
2020-01-07 21:04:44 +00:00
|
|
|
|
2018-01-09 18:25:03 +00:00
|
|
|
if (report.mmaps_mode)
|
|
|
|
report.tasks_mode = true;
|
|
|
|
|
2021-02-19 07:00:05 +00:00
|
|
|
if (dump_trace && report.disable_order)
|
2020-08-27 13:48:29 +00:00
|
|
|
report.tool.ordered_events = false;
|
|
|
|
|
2017-02-17 08:17:39 +00:00
|
|
|
if (quiet)
|
|
|
|
perf_quiet_option();
|
|
|
|
|
2021-10-18 13:48:41 +00:00
|
|
|
ret = symbol__validate_sym_arguments();
|
|
|
|
if (ret)
|
2021-07-15 16:07:14 +00:00
|
|
|
goto exit;
|
2015-06-19 08:57:33 +00:00
|
|
|
|
2011-11-17 14:19:04 +00:00
|
|
|
if (report.inverted_callchain)
|
2011-06-07 15:49:46 +00:00
|
|
|
callchain_param.order = ORDER_CALLER;
|
2015-10-22 07:45:46 +00:00
|
|
|
if (symbol_conf.cumulate_callchain && !callchain_param.order_set)
|
|
|
|
callchain_param.order = ORDER_CALLER;
|
2011-06-07 15:49:46 +00:00
|
|
|
|
2020-04-01 10:16:05 +00:00
|
|
|
if ((itrace_synth_opts.callchain || itrace_synth_opts.add_callchain) &&
|
2015-09-25 13:15:46 +00:00
|
|
|
(int)itrace_synth_opts.callchain_sz > report.max_stack)
|
|
|
|
report.max_stack = itrace_synth_opts.callchain_sz;
|
|
|
|
|
2012-10-30 03:56:02 +00:00
|
|
|
if (!input_name || !strlen(input_name)) {
|
2011-12-07 09:02:54 +00:00
|
|
|
if (!fstat(STDIN_FILENO, &st) && S_ISFIFO(st.st_mode))
|
2012-10-30 03:56:02 +00:00
|
|
|
input_name = "-";
|
2011-12-07 09:02:54 +00:00
|
|
|
else
|
2012-10-30 03:56:02 +00:00
|
|
|
input_name = "perf.data";
|
2011-12-07 09:02:54 +00:00
|
|
|
}
|
2013-02-03 06:38:21 +00:00
|
|
|
|
2019-02-21 09:41:30 +00:00
|
|
|
data.path = input_name;
|
|
|
|
data.force = symbol_conf.force;
|
2013-10-15 14:27:32 +00:00
|
|
|
|
2013-02-03 06:38:21 +00:00
|
|
|
repeat:
|
2021-07-19 22:31:49 +00:00
|
|
|
session = perf_session__new(&data, &report.tool);
|
2021-07-15 16:07:14 +00:00
|
|
|
if (IS_ERR(session)) {
|
|
|
|
ret = PTR_ERR(session);
|
|
|
|
goto exit;
|
|
|
|
}
|
2012-03-08 22:47:47 +00:00
|
|
|
|
perf report: Add --switch-on/--switch-off events
Since 'perf top' shares the histogram browser with 'perf report', the
same explanation in the previous cset applies.
An additional example uses a pair of SDT events available for systemtap:
# perf probe --exec=/usr/bin/stap '%*:*'
Added new events:
sdt_stap:benchmark__thread__start (on %* in /usr/bin/stap)
sdt_stap:benchmark (on %* in /usr/bin/stap)
sdt_stap:benchmark__thread__end (on %* in /usr/bin/stap)
sdt_stap:pass6__start (on %* in /usr/bin/stap)
sdt_stap:pass6__end (on %* in /usr/bin/stap)
sdt_stap:pass5__start (on %* in /usr/bin/stap)
sdt_stap:pass5__end (on %* in /usr/bin/stap)
sdt_stap:pass0__start (on %* in /usr/bin/stap)
sdt_stap:pass0__end (on %* in /usr/bin/stap)
sdt_stap:pass1a__start (on %* in /usr/bin/stap)
sdt_stap:pass1b__start (on %* in /usr/bin/stap)
sdt_stap:pass1__end (on %* in /usr/bin/stap)
sdt_stap:pass2__start (on %* in /usr/bin/stap)
sdt_stap:pass2__end (on %* in /usr/bin/stap)
sdt_stap:pass3__start (on %* in /usr/bin/stap)
sdt_stap:pass3__end (on %* in /usr/bin/stap)
sdt_stap:pass4__start (on %* in /usr/bin/stap)
sdt_stap:pass4__end (on %* in /usr/bin/stap)
sdt_stap:benchmark__start (on %* in /usr/bin/stap)
sdt_stap:benchmark__end (on %* in /usr/bin/stap)
sdt_stap:cache__get (on %* in /usr/bin/stap)
sdt_stap:cache__clean (on %* in /usr/bin/stap)
sdt_stap:cache__add__module (on %* in /usr/bin/stap)
sdt_stap:cache__add__source (on %* in /usr/bin/stap)
sdt_stap:stap_system__complete (on %* in /usr/bin/stap)
sdt_stap:stap_system__start (on %* in /usr/bin/stap)
sdt_stap:stap_system__spawn (on %* in /usr/bin/stap)
sdt_stap:stap_system__fork (on %* in /usr/bin/stap)
sdt_stap:intern_string (on %* in /usr/bin/stap)
sdt_stap:client__start (on %* in /usr/bin/stap)
sdt_stap:client__end (on %* in /usr/bin/stap)
You can now use it in all perf tools, such as:
perf record -e sdt_stap:client__end -aR sleep 1
#
From these we'll use the two below to run systemtap's test suite:
# perf record -e sdt_stap:pass2__*,cycles:P make installcheck > /dev/null
^C[ perf record: Woken up 8 times to write data ]
[ perf record: Captured and wrote 2.691 MB perf.data (39638 samples) ]
Terminated
# perf script | grep sdt_stap
stap 28979 [000] 19424.302660: sdt_stap:pass2__start: (561b9a537de3) arg1=140730364262544
stap 28979 [000] 19424.333083: sdt_stap:pass2__end: (561b9a53a9e1) arg1=140730364262544
stap 29045 [006] 19424.933460: sdt_stap:pass2__start: (563edddcede3) arg1=140722674883152
stap 29045 [006] 19424.963794: sdt_stap:pass2__end: (563edddd19e1) arg1=140722674883152
# perf script | grep cycles | wc -l
39634
#
Looking at the whole perf.data file:
[root@quaco testsuite]# perf report | grep cycles:P -A25
# Samples: 39K of event 'cycles:P'
# Event count (approx.): 34044267368
#
# Overhead Command Shared Object Symbol
# ........ ....... .................... ................................
#
3.50% cc1 cc1 [.] ht_lookup_with_hash
3.04% cc1 cc1 [.] _cpp_lex_token
2.11% cc1 cc1 [.] ggc_internal_alloc
1.83% cc1 cc1 [.] cpp_get_token_with_location
1.68% cc1 libc-2.29.so [.] _int_malloc
1.41% cc1 cc1 [.] linemap_position_for_column
1.25% cc1 cc1 [.] ggc_internal_cleared_alloc
1.20% cc1 cc1 [.] c_lex_with_flags
1.18% cc1 cc1 [.] get_combined_adhoc_loc
1.05% cc1 libc-2.29.so [.] malloc
1.01% cc1 libc-2.29.so [.] _int_free
0.96% stap stap [.] std::_Hashtable<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::__detail::_Identity, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, stringtable_hash, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, true, true> >::_M_insert<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__detail::_AllocNode<std::allocator<std::__detail::_Hash_node<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, true> > > >
0.78% stap stap [.] lexer::scan
0.74% cc1 cc1 [.] _cpp_lex_direct
0.70% cc1 cc1 [.] pop_scope
0.70% cc1 cc1 [.] c_parser_declspecs
0.69% stap libc-2.29.so [.] _int_malloc
0.68% cc1 cc1 [.] htab_find_slot
0.68% cc1 [kernel.vmlinux] [k] prepare_exit_to_usermode
0.64% cc1 [kernel.vmlinux] [k] clear_page_erms
[root@quaco testsuite]#
And now only what happens in slices demarcated by those start/end SDT
events:
[root@quaco testsuite]# perf report --switch-on=sdt_stap:pass2__start --switch-off=sdt_stap:pass2__end | grep cycles:P -A100
# Samples: 240 of event 'cycles:P'
# Event count (approx.): 206491934
#
# Overhead Command Shared Object Symbol
# ........ ....... ................... ................................................
#
38.99% stap stap [.] systemtap_session::register_library_aliases
19.47% stap stap [.] match_key::operator<
15.01% stap libc-2.29.so [.] __memcmp_avx2_movbe
5.19% stap libc-2.29.so [.] _int_malloc
2.50% stap libstdc++.so.6.0.26 [.] std::_Rb_tree_insert_and_rebalance
2.30% stap stap [.] match_node::build_no_more
2.07% stap libc-2.29.so [.] malloc
1.66% stap stap [.] std::_Rb_tree<match_key, std::pair<match_key const, match_node*>, std::_Select1st<std::pair<match_key const, match_node*> >, std::less<match_key>, std::allocator<std::pair<match_key const, match_node*> > >::find
1.66% stap stap [.] match_node::bind
1.58% stap [kernel.vmlinux] [k] prepare_exit_to_usermode
1.17% stap [kernel.vmlinux] [k] native_irq_return_iret
0.87% stap stap [.] 0x0000000000032ec4
0.77% stap libstdc++.so.6.0.26 [.] std::_Rb_tree_increment
0.47% stap stap [.] std::vector<derived_probe_builder*, std::allocator<derived_probe_builder*> >::_M_realloc_insert<derived_probe_builder* const&>
0.47% stap [kernel.vmlinux] [k] get_page_from_freelist
0.47% stap [kernel.vmlinux] [k] swapgs_restore_regs_and_return_to_usermode
0.47% stap [kernel.vmlinux] [k] do_user_addr_fault
0.46% stap [kernel.vmlinux] [k] __pagevec_lru_add_fn
0.46% stap stap [.] std::_Rb_tree<match_key, std::pair<match_key const, match_node*>, std::_Select1st<std::pair<match_key const, match_node*> >, std::less<match_key>, std::allocator<std::pair<match_key const, match_node*> > >::_M_emplace_unique<std::pair<match_key, match_node*> >
0.42% stap libstdc++.so.6.0.26 [.] 0x00000000000c18fa
0.40% stap [kernel.vmlinux] [k] interrupt_entry
0.40% stap [kernel.vmlinux] [k] update_load_avg
0.40% stap [kernel.vmlinux] [k] __intel_pmu_disable_all
0.40% stap [kernel.vmlinux] [k] clear_page_erms
0.39% stap [kernel.vmlinux] [k] __mod_node_page_state
0.39% stap [kernel.vmlinux] [k] error_entry
0.39% stap [kernel.vmlinux] [k] sync_regs
0.38% stap [kernel.vmlinux] [k] __handle_mm_fault
0.38% stap stap [.] derive_probes
#
# (Tip: System-wide collection from all CPUs: perf record -a)
#
[root@quaco testsuite]#
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: William Cohen <wcohen@redhat.com>
Link: https://lkml.kernel.org/n/tip-408hvumcnyn93a0auihnawew@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-08-15 21:18:58 +00:00
|
|
|
ret = evswitch__init(&report.evswitch, session->evlist, stderr);
|
|
|
|
if (ret)
|
2021-07-15 16:07:14 +00:00
|
|
|
goto exit;
|
perf report: Add --switch-on/--switch-off events
2019-08-15 21:18:58 +00:00
|
|
|
|
perf report: Implement perf.data record decompression
zstd_init(, comp_level = 0) initializes only the decompression part of the
API, which now consists of the zstd_decompress_stream() function.
The perf.data PERF_RECORD_COMPRESSED records are decompressed using the
zstd_decompress_stream() function into a linked list of mmapped memory
regions of mmap_comp_len size (struct decomp).
After decompression of one COMPRESSED record, its content is iterated over
and fetched for the usual processing. The mmapped memory regions with
decompressed events are kept in the linked list until the tool process
terminates.
When dumping raw records (e.g., perf report -D --header), file offsets of
events from compressed records are printed as zero.
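As an aside (and not perf's internal zstd_data API, which is not shown here),
a minimal, self-contained sketch of the underlying libzstd streaming
decompression of a single compressed payload into a caller-provided buffer
could look like the following; the helper name decompress_payload() is
hypothetical and error handling is simplified:

#include <stddef.h>
#include <zstd.h>

/* Illustrative only: decompress src[0..src_size) into dst[0..dst_size)
 * with the libzstd streaming API and return the number of bytes produced,
 * or 0 on error. perf's zstd_decompress_stream() presumably wraps calls
 * like these and feeds the result into the struct decomp list described
 * above. */
static size_t decompress_payload(const void *src, size_t src_size,
                                 void *dst, size_t dst_size)
{
        ZSTD_DStream *ds = ZSTD_createDStream();
        ZSTD_inBuffer in = { .src = src, .size = src_size, .pos = 0 };
        ZSTD_outBuffer out = { .dst = dst, .size = dst_size, .pos = 0 };
        size_t ret;

        if (!ds)
                return 0;
        ZSTD_initDStream(ds);

        while (in.pos < in.size) {
                ret = ZSTD_decompressStream(ds, &out, &in);
                if (ZSTD_isError(ret)) {
                        out.pos = 0;            /* signal failure */
                        break;
                }
                if (out.pos == out.size)        /* output buffer full */
                        break;
        }

        ZSTD_freeDStream(ds);
        return out.pos;
}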
Committer notes:
Since we now have support for processing PERF_RECORD_COMPRESSED, we no
longer see them in raw form as we did in the committer notes of the
previous patch; they were decompressed into the usual
PERF_RECORD_{FORK,MMAP,COMM,etc} records, and we only see the stats for
those PERF_RECORD_COMPRESSED events. Since I used the file generated in
the committer notes for the previous patch, there they are, 2 compressed
records:
$ perf report --header-only | grep cmdline
# cmdline : /home/acme/bin/perf record -z2 sleep 1
$ perf report -D | grep COMPRESS
COMPRESSED events: 2
COMPRESSED events: 0
$ perf report --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 15 of event 'cycles:u'
# Event count (approx.): 962227
#
# Overhead Command Shared Object Symbol
# ........ ....... ................ ...........................
#
46.99% sleep libc-2.28.so [.] _dl_addr
29.24% sleep [unknown] [k] 0xffffffffaea00a67
16.45% sleep libc-2.28.so [.] __GI__IO_un_link.part.1
5.92% sleep ld-2.28.so [.] _dl_setup_hash
1.40% sleep libc-2.28.so [.] __nanosleep
0.00% sleep [unknown] [k] 0xffffffffaea00163
#
# (Tip: To see callchains in a more compact form: perf report -g folded)
#
$
Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/304b0a59-942c-3fe1-da02-aa749f87108b@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-03-18 17:45:11 +00:00
|
|
|
if (zstd_init(&(session->zstd_data), 0) < 0)
|
|
|
|
pr_warning("Decompression initialization failed. Reported data may be incomplete.\n");
|
|
|
|
|
2014-06-05 09:00:20 +00:00
|
|
|
if (report.queue_size) {
|
|
|
|
ordered_events__set_alloc_size(&session->ordered_events,
|
|
|
|
report.queue_size);
|
|
|
|
}
|
|
|
|
|
2015-04-24 19:29:45 +00:00
|
|
|
session->itrace_synth_opts = &itrace_synth_opts;
|
|
|
|
|
2012-03-08 22:47:47 +00:00
|
|
|
report.session = session;
|
|
|
|
|
|
|
|
has_br_stack = perf_header__has_feat(&session->header,
|
|
|
|
HEADER_BRANCH_STACK);
|
2020-06-17 12:24:21 +00:00
|
|
|
if (evlist__combined_sample_type(session->evlist) & PERF_SAMPLE_STACK_USER)
|
2019-08-09 15:31:28 +00:00
|
|
|
has_br_stack = false;
|
2011-12-07 09:02:54 +00:00
|
|
|
|
2018-03-14 09:22:05 +00:00
|
|
|
setup_forced_leader(&report, session->evlist);
|
2018-02-09 09:27:34 +00:00
|
|
|
|
2021-07-06 15:17:01 +00:00
|
|
|
if (symbol_conf.group_sort_idx && !session->evlist->core.nr_groups) {
|
perf report: Allow specifying event to be used as sort key in --group output
When performing "perf report --group", it shows the event group
information together. By default, the output is sorted by the first
event in group.
It would be nice for user to select any event for sorting. This patch
introduces a new option "--group-sort-idx" to sort the output by the
event at the index n in event group.
For example,
Before:
# perf report --group --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 12K of events 'cpu/instructions,period=2000003/, cpu/cpu-cycles,period=200003/, BR_MISP_RETIRED.ALL_BRANCHES:pp, cpu/event=0xc0,umask=1,cmask=1,
# Event count (approx.): 6451235635
#
# Overhead Command Shared Object Symbol
# ................................ ......... ....................... ...................................
#
92.19% 98.68% 0.00% 93.30% mgen mgen [.] LOOP1
3.12% 0.29% 0.00% 0.16% gsd-color libglib-2.0.so.0.5600.4 [.] 0x0000000000049515
1.56% 0.03% 0.00% 0.04% gsd-color libglib-2.0.so.0.5600.4 [.] 0x00000000000494b7
1.56% 0.01% 0.00% 0.00% gsd-color libglib-2.0.so.0.5600.4 [.] 0x00000000000494ce
1.56% 0.00% 0.00% 0.00% mgen [kernel.kallsyms] [k] task_tick_fair
0.00% 0.15% 0.00% 0.04% perf [kernel.kallsyms] [k] smp_call_function_single
0.00% 0.13% 0.00% 6.08% swapper [kernel.kallsyms] [k] intel_idle
0.00% 0.03% 0.00% 0.00% gsd-color libglib-2.0.so.0.5600.4 [.] g_main_context_check
0.00% 0.03% 0.00% 0.00% swapper [kernel.kallsyms] [k] apic_timer_interrupt
...
After:
# perf report --group --stdio --group-sort-idx 3
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 12K of events 'cpu/instructions,period=2000003/, cpu/cpu-cycles,period=200003/, BR_MISP_RETIRED.ALL_BRANCHES:pp, cpu/event=0xc0,umask=1,cmask=1,
# Event count (approx.): 6451235635
#
# Overhead Command Shared Object Symbol
# ................................ ......... ....................... ...................................
#
92.19% 98.68% 0.00% 93.30% mgen mgen [.] LOOP1
0.00% 0.13% 0.00% 6.08% swapper [kernel.kallsyms] [k] intel_idle
3.12% 0.29% 0.00% 0.16% gsd-color libglib-2.0.so.0.5600.4 [.] 0x0000000000049515
0.00% 0.00% 0.00% 0.06% swapper [kernel.kallsyms] [k] hrtimer_start_range_ns
1.56% 0.03% 0.00% 0.04% gsd-color libglib-2.0.so.0.5600.4 [.] 0x00000000000494b7
0.00% 0.15% 0.00% 0.04% perf [kernel.kallsyms] [k] smp_call_function_single
0.00% 0.00% 0.00% 0.02% mgen [kernel.kallsyms] [k] update_curr
0.00% 0.00% 0.00% 0.02% mgen [kernel.kallsyms] [k] apic_timer_interrupt
0.00% 0.00% 0.00% 0.02% mgen [kernel.kallsyms] [k] native_apic_msr_eoi_write
0.00% 0.00% 0.00% 0.02% mgen [kernel.kallsyms] [k] __update_load_avg_se
0.00% 0.00% 0.00% 0.02% mgen [kernel.kallsyms] [k] scheduler_tick
Now the output is sorted by the fourth event in group.
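(The index is zero-based, so --group-sort-idx 3 picks the fourth event of
the group, i.e. the fourth Overhead column in the output above.)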
v7:
---
Rebase to latest perf/core, no other change.
v4:
---
1. Update Documentation/perf-report.txt to mention that
'--group-sort-idx' supports multiple groups with different
numbers of events and that it should be used on grouped events.
2. Update __hpp__group_sort_idx() to just return when the
idx is out of range.
3. Return failure on symbol_conf.group_sort_idx && !session->evlist->nr_groups,
so it no longer needs to be used together with --group.
v3:
---
Refine the code in __hpp__group_sort_idx().
Before:
for (i = 1; i < nr_members; i++) {
        if (i == idx) {
                ret = field_cmp(fields_a[i], fields_b[i]);
                if (ret)
                        goto out;
        }
}
After:
if (idx >= 1 && idx < nr_members) {
        ret = field_cmp(fields_a[idx], fields_b[idx]);
        if (ret)
                goto out;
}
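The refined version replaces the per-member loop with a direct bounds check
on idx, so only the requested column is compared.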
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200220013616.19916-2-yao.jin@linux.intel.com
[ Renamed pair_fields_alloc() to hist_entry__new_pair() and combined decl + assignment of vars ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-02-20 01:36:14 +00:00
|
|
|
parse_options_usage(NULL, options, "group-sort-idx", 0);
|
|
|
|
ret = -EINVAL;
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
|
2020-04-29 15:07:46 +00:00
|
|
|
if (itrace_synth_opts.last_branch || itrace_synth_opts.add_last_branch)
|
2015-09-25 13:15:41 +00:00
|
|
|
has_br_stack = true;
|
|
|
|
|
2016-10-31 01:19:50 +00:00
|
|
|
if (has_br_stack && branch_call_mode)
|
|
|
|
symbol_conf.show_branchflag_count = true;
|
|
|
|
|
2017-07-18 12:13:14 +00:00
|
|
|
memset(&report.brtype_stat, 0, sizeof(struct branch_type_stat));
|
|
|
|
|
2014-11-13 02:05:22 +00:00
|
|
|
/*
|
|
|
|
* Branch mode is a tristate:
|
|
|
|
* -1 means default, so decide based on the file having branch data.
|
|
|
|
* 0/1 means the user chose a mode.
|
|
|
|
*/
|
|
|
|
if (((branch_mode == -1 && has_br_stack) || branch_mode == 1) &&
|
2015-02-15 02:33:37 +00:00
|
|
|
!branch_call_mode) {
|
2013-04-01 11:35:20 +00:00
|
|
|
sort__mode = SORT_MODE__BRANCH;
|
2013-10-30 08:05:55 +00:00
|
|
|
symbol_conf.cumulate_callchain = false;
|
|
|
|
}
|
2014-11-13 02:05:22 +00:00
|
|
|
if (branch_call_mode) {
|
2014-11-18 01:58:54 +00:00
|
|
|
callchain_param.key = CCKEY_ADDRESS;
|
2020-04-26 12:38:03 +00:00
|
|
|
callchain_param.branch_callstack = true;
|
2014-11-13 02:05:22 +00:00
|
|
|
symbol_conf.use_callchain = true;
|
|
|
|
callchain_register_param(&callchain_param);
|
|
|
|
if (sort_order == NULL)
|
|
|
|
sort_order = "srcline,symbol,dso";
|
|
|
|
}
|
2012-03-08 22:47:47 +00:00
|
|
|
|
2013-01-24 15:10:36 +00:00
|
|
|
if (report.mem_mode) {
|
2013-04-01 11:35:20 +00:00
|
|
|
if (sort__mode == SORT_MODE__BRANCH) {
|
2013-12-20 05:11:12 +00:00
|
|
|
pr_err("branch and mem mode incompatible\n");
|
2013-01-24 15:10:36 +00:00
|
|
|
goto error;
|
|
|
|
}
|
2013-04-03 12:26:11 +00:00
|
|
|
sort__mode = SORT_MODE__MEMORY;
|
2013-10-30 08:05:55 +00:00
|
|
|
symbol_conf.cumulate_callchain = false;
|
2013-01-24 15:10:36 +00:00
|
|
|
}
|
2012-03-08 22:47:48 +00:00
|
|
|
|
2016-02-24 15:13:48 +00:00
|
|
|
if (symbol_conf.report_hierarchy) {
|
|
|
|
/* disable incompatible options */
|
|
|
|
symbol_conf.cumulate_callchain = false;
|
|
|
|
|
|
|
|
if (field_order) {
|
|
|
|
pr_err("Error: --hierarchy and --fields options cannot be used together\n");
|
|
|
|
parse_options_usage(report_usage, options, "F", 1);
|
|
|
|
parse_options_usage(NULL, options, "hierarchy", 0);
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
|
2016-05-03 11:54:42 +00:00
|
|
|
perf_hpp_list.need_collapse = true;
|
2016-02-24 15:13:48 +00:00
|
|
|
}
|
|
|
|
|
2017-12-04 16:02:44 +00:00
|
|
|
if (report.use_stdio)
|
|
|
|
use_browser = 0;
|
2022-01-23 19:18:48 +00:00
|
|
|
#ifdef HAVE_SLANG_SUPPORT
|
2017-12-04 16:02:44 +00:00
|
|
|
else if (report.use_tui)
|
|
|
|
use_browser = 1;
|
2022-01-23 19:18:48 +00:00
|
|
|
#endif
|
2022-07-07 20:38:36 +00:00
|
|
|
#ifdef HAVE_GTK2_SUPPORT
|
2017-12-04 16:02:44 +00:00
|
|
|
else if (report.use_gtk)
|
|
|
|
use_browser = 2;
|
2022-07-07 20:38:36 +00:00
|
|
|
#endif
|
2017-12-04 16:02:44 +00:00
|
|
|
|
2015-05-09 15:19:43 +00:00
|
|
|
/* Force tty output for header output and per-thread stat. */
|
|
|
|
if (report.header || report.header_only || report.show_threads)
|
2013-12-09 10:02:49 +00:00
|
|
|
use_browser = 0;
|
2017-07-18 04:25:47 +00:00
|
|
|
if (report.header || report.header_only)
|
|
|
|
report.tool.show_feat_hdr = SHOW_FEAT_HEADER;
|
|
|
|
if (report.show_full_info)
|
|
|
|
report.tool.show_feat_hdr = SHOW_FEAT_HEADER_FULL_INFO;
|
2018-01-07 16:03:56 +00:00
|
|
|
if (report.stats_mode || report.tasks_mode)
|
2018-01-07 16:03:55 +00:00
|
|
|
use_browser = 0;
|
2018-01-07 16:03:56 +00:00
|
|
|
if (report.stats_mode && report.tasks_mode) {
|
2018-01-09 18:25:03 +00:00
|
|
|
pr_err("Error: --tasks and --mmaps can't be used together with --stats\n");
|
2018-01-07 16:03:56 +00:00
|
|
|
goto error;
|
|
|
|
}
|
2013-12-09 10:02:49 +00:00
|
|
|
|
perf report: Sort by sampled cycles percent per block for stdio
It would be useful to support sorting all blocks by the sampled cycles
percent per block, which makes it easier to concentrate on the globally
hottest blocks.
This patch implements a new option "--total-cycles" which sorts all
blocks by 'Sampled Cycles%'. The 'Sampled Cycles%' is the percent:
percent = block sampled cycles aggregation / total sampled cycles
Note that this patch only supports "--stdio" mode.
For example,
# perf record -b ./div
# perf report --total-cycles --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 2M of event 'cycles'
# Event count (approx.): 2753248
#
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
# ............... .............. ........... .......... ................................................ .................
#
26.04% 2.8M 0.40% 18 [div.c:42 -> div.c:39] div
15.17% 1.2M 0.16% 7 [random_r.c:357 -> random_r.c:380] libc-2.27.so
5.11% 402.0K 0.04% 2 [div.c:27 -> div.c:28] div
4.87% 381.6K 0.04% 2 [random.c:288 -> random.c:291] libc-2.27.so
4.53% 381.0K 0.04% 2 [div.c:40 -> div.c:40] div
3.85% 300.9K 0.02% 1 [div.c:22 -> div.c:25] div
3.08% 241.1K 0.02% 1 [rand.c:26 -> rand.c:27] libc-2.27.so
3.06% 240.0K 0.02% 1 [random.c:291 -> random.c:291] libc-2.27.so
2.78% 215.7K 0.02% 1 [random.c:298 -> random.c:298] libc-2.27.so
2.52% 198.3K 0.02% 1 [random.c:293 -> random.c:293] libc-2.27.so
2.36% 184.8K 0.02% 1 [rand.c:28 -> rand.c:28] libc-2.27.so
2.33% 180.5K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so
2.28% 176.7K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so
2.20% 168.8K 0.02% 1 [rand@plt+0 -> rand@plt+0] div
1.98% 158.2K 0.02% 1 [random_r.c:388 -> random_r.c:388] libc-2.27.so
1.57% 123.3K 0.02% 1 [div.c:42 -> div.c:44] div
1.44% 116.0K 0.42% 19 [random_r.c:357 -> random_r.c:394] libc-2.27.so
0.25% 182.5K 0.02% 1 [random_r.c:388 -> random_r.c:391] libc-2.27.so
0.00% 48 1.07% 48 [x86_pmu_enable+284 -> x86_pmu_enable+298] [kernel.kallsyms]
0.00% 74 1.64% 74 [vm_mmap_pgoff+0 -> vm_mmap_pgoff+92] [kernel.kallsyms]
0.00% 73 1.62% 73 [vm_mmap+0 -> vm_mmap+48] [kernel.kallsyms]
0.00% 63 0.69% 31 [up_write+0 -> up_write+34] [kernel.kallsyms]
0.00% 13 0.29% 13 [setup_arg_pages+396 -> setup_arg_pages+413] [kernel.kallsyms]
0.00% 3 0.07% 3 [setup_arg_pages+418 -> setup_arg_pages+450] [kernel.kallsyms]
0.00% 616 6.84% 308 [security_mmap_file+0 -> security_mmap_file+72] [kernel.kallsyms]
0.00% 23 0.51% 23 [security_mmap_file+77 -> security_mmap_file+87] [kernel.kallsyms]
0.00% 4 0.02% 1 [sched_clock+0 -> sched_clock+4] [kernel.kallsyms]
0.00% 4 0.02% 1 [sched_clock+9 -> sched_clock+12] [kernel.kallsyms]
0.00% 1 0.02% 1 [rcu_nmi_exit+0 -> rcu_nmi_exit+9] [kernel.kallsyms]
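As a quick sanity check of the formula above: the top div.c block shows
2.8M sampled cycles at 26.04%, which implies a total of roughly 10.8M
sampled cycles across all blocks (the total itself is not printed, so this
is only a back-of-the-envelope illustration).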
Committer testing:
This should provide material for hours of endless joy, both from looking
for suspicious things in the implementation of this patch, such as the
top one:
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux]
As well as from things that look legit:
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
0.16% 123.0K 0.60% 4.7K [nospec-branch.h:265 -> nospec-branch.h:278] [kernel.vmlinux]
:-)
Very short system wide taken branches session:
# perf record -h -b
Usage: perf record [<options>] [<command>]
or: perf record [<options>] -- <command> [<options>]
-b, --branch-any sample any taken branches
#
# perf record -b
^C[ perf record: Woken up 595 times to write data ]
[ perf record: Captured and wrote 156.672 MB perf.data (196873 samples) ]
#
# perf evlist -v
cycles: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|CPU|PERIOD|BRANCH_STACK, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1, branch_sample_type: ANY
#
# perf report --total-cycles --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 6M of event 'cycles'
# Event count (approx.): 6299936
#
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
# ............... .............. ........... .......... ...................................................................... ....................
#
2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux]
1.75% 1.3M 8.34% 65.5K [memset-vec-unaligned-erms.S:147 -> memset-vec-unaligned-erms.S:151] libc-2.29.so
0.72% 544.5K 0.03% 230 [entry_64.S:657 -> entry_64.S:662] [kernel.vmlinux]
0.56% 541.8K 0.09% 672 [compiler.h:199 -> common.c:300] [kernel.vmlinux]
0.39% 293.2K 0.01% 104 [list_debug.c:43 -> list_debug.c:61] [kernel.vmlinux]
0.36% 278.6K 0.03% 272 [entry_64.S:1289 -> entry_64.S:1308] [kernel.vmlinux]
0.30% 260.8K 0.07% 564 [clear_page_64.S:47 -> clear_page_64.S:50] [kernel.vmlinux]
0.28% 215.3K 0.05% 369 [traps.c:623 -> traps.c:628] [kernel.vmlinux]
0.23% 178.1K 0.04% 278 [entry_64.S:271 -> entry_64.S:275] [kernel.vmlinux]
0.20% 152.6K 0.09% 706 [paravirt.c:177 -> paravirt.c:179] [kernel.vmlinux]
0.20% 155.8K 0.05% 373 [entry_64.S:153 -> entry_64.S:175] [kernel.vmlinux]
0.18% 136.6K 0.03% 222 [msr.h:105 -> msr.h:166] [kernel.vmlinux]
0.16% 123.0K 0.60% 4.7K [nospec-branch.h:265 -> nospec-branch.h:278] [kernel.vmlinux]
0.16% 118.3K 0.01% 44 [entry_64.S:632 -> entry_64.S:657] [kernel.vmlinux]
0.14% 104.5K 0.00% 28 [rwsem.c:1541 -> rwsem.c:1544] [kernel.vmlinux]
0.13% 99.2K 0.01% 53 [spinlock.c:150 -> spinlock.c:152] [kernel.vmlinux]
0.13% 95.5K 0.00% 35 [swap.c:456 -> swap.c:471] [kernel.vmlinux]
0.12% 96.2K 0.05% 407 [copy_user_64.S:175 -> copy_user_64.S:209] [kernel.vmlinux]
0.11% 85.9K 0.00% 31 [swap.c:400 -> page-flags.h:188] [kernel.vmlinux]
0.10% 73.0K 0.01% 52 [paravirt.h:763 -> list.h:131] [kernel.vmlinux]
0.07% 56.2K 0.03% 214 [filemap.c:1524 -> filemap.c:1557] [kernel.vmlinux]
0.07% 54.2K 0.02% 145 [memory.c:1032 -> memory.c:1049] [kernel.vmlinux]
0.07% 50.3K 0.00% 39 [mmzone.c:49 -> mmzone.c:69] [kernel.vmlinux]
0.06% 48.3K 0.01% 40 [paravirt.h:768 -> page_alloc.c:3304] [kernel.vmlinux]
0.06% 46.7K 0.02% 155 [memory.c:1032 -> memory.c:1056] [kernel.vmlinux]
0.06% 46.9K 0.01% 103 [swap.c:867 -> swap.c:902] [kernel.vmlinux]
0.06% 47.8K 0.00% 34 [entry_64.S:1201 -> entry_64.S:1202] [kernel.vmlinux]
-----------------------------------------------------------
v7:
---
Use use_browser in report__browse_block_hists for supporting
stdio and potential tui mode.
v6:
---
Create report__browse_block_hists in block-info.c (code is
moved from builtin-report.c). It's called from
perf_evlist__tty_browse_hists.
v5:
---
1. Move all block functions to block-info.c
2. Move the code of setting ms in block hist_entry to
other patch.
v4:
---
1. Use new option '--total-cycles' to replace
'-s total_cycles' in v3.
2. Move block info collection out of block info
printing.
v3:
---
1. Use common function block_info__process_sym to
process the blocks per symbol.
2. Remove the nasty hack for skipping calculation
of column length
3. Some minor cleanup
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20191107074719.26139-6-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-07 07:47:17 +00:00
|
|
|
if (report.total_cycles_mode) {
|
|
|
|
if (sort__mode != SORT_MODE__BRANCH)
|
|
|
|
report.total_cycles_mode = false;
|
2019-11-07 07:47:19 +00:00
|
|
|
else
|
2019-11-18 14:08:49 +00:00
|
|
|
sort_order = NULL;
|
perf report: Sort by sampled cycles percent per block for stdio
2019-11-07 07:47:17 +00:00
|
|
|
}
|
|
|
|
|
2013-11-01 07:33:12 +00:00
|
|
|
if (strcmp(input_name, "-") != 0)
|
|
|
|
setup_browser(true);
|
2014-04-16 02:04:51 +00:00
|
|
|
else
|
2013-11-01 07:33:12 +00:00
|
|
|
use_browser = 0;
|
|
|
|
|
2018-11-30 13:54:56 +00:00
|
|
|
if (sort_order && strstr(sort_order, "ipc")) {
|
|
|
|
parse_options_usage(report_usage, options, "s", 1);
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (sort_order && strstr(sort_order, "symbol")) {
|
|
|
|
if (sort__mode == SORT_MODE__BRANCH) {
|
|
|
|
snprintf(sort_tmp, sizeof(sort_tmp), "%s,%s",
|
|
|
|
sort_order, "ipc_lbr");
|
|
|
|
report.symbol_ipc = true;
|
|
|
|
} else {
|
|
|
|
snprintf(sort_tmp, sizeof(sort_tmp), "%s,%s",
|
|
|
|
sort_order, "ipc_null");
|
|
|
|
}
|
|
|
|
|
|
|
|
sort_order = sort_tmp;
|
|
|
|
}
|
|
|
|
|
2020-02-20 01:36:15 +00:00
|
|
|
if ((last_key != K_SWITCH_INPUT_DATA && last_key != K_RELOAD) &&
|
2019-12-20 01:37:19 +00:00
|
|
|
(setup_sorting(session->evlist) < 0)) {
|
2016-01-18 09:24:06 +00:00
|
|
|
if (sort_order)
|
|
|
|
parse_options_usage(report_usage, options, "s", 1);
|
|
|
|
if (field_order)
|
|
|
|
parse_options_usage(sort_order ? NULL : report_usage,
|
|
|
|
options, "F", 1);
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
|
2017-02-17 08:17:39 +00:00
|
|
|
if ((report.header || report.header_only) && !quiet) {
|
2013-12-09 10:02:49 +00:00
|
|
|
perf_session__fprintf_info(session, stdout,
|
|
|
|
report.show_full_info);
|
2015-06-30 08:15:24 +00:00
|
|
|
if (report.header_only) {
|
perf report: Support --header-only for pipe mode
The --header-only option checks the file header and prints the feature
data. But as pipe mode doesn't have it in the header, it prints almost
nothing.
Change it to process the first few records until it finds HEADER_FEATURE.
Before:
$ perf record -o- true | perf report -i- --header-only
# ========
# captured on : Thu Dec 10 14:34:59 2020
# header version : 1
# data offset : 0
# data size : 0
# feat offset : 0
# ========
#
After:
$ perf record -o- true | perf report -i- --header-only
# ========
# captured on : Thu Dec 10 14:49:11 2020
# header version : 1
# data offset : 0
# data size : 0
# feat offset : 0
# ========
#
# hostname : balhae
# os release : 5.7.17-1xxx
# perf version : 5.10.rc6.gdb0ea13cc741
# arch : x86_64
# nrcpus online : 8
# nrcpus avail : 8
# cpudesc : Intel(R) Core(TM) i7-8665U CPU @ 1.90GHz
# cpuid : GenuineIntel,6,142,12
# total memory : 16158916 kB
# cmdline : perf record -o- true
# event : name = cycles, , id = { 81, 82, 83, 84, 85, 86, 87, 88 }, size = 120, ...
# CPU_TOPOLOGY info available, use -I to display
# NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: intel_pt = 9, intel_bts = 8, software = 1, power = 20, uprobe = 7, ...
# time of first sample : 0.000000
# time of last sample : 0.000000
# sample duration : 0.000 ms
# MEM_TOPOLOGY info available, use -I to display
# cpu pmu capabilities: branches=32, max_precise=3, pmu_name=skylake
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: https://lore.kernel.org/r/20201210061302.88213-1-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-12-10 06:13:01 +00:00
|
|
|
if (data.is_pipe) {
|
|
|
|
/*
|
|
|
|
* we need to process first few records
|
|
|
|
* which contains PERF_RECORD_HEADER_FEATURE.
|
|
|
|
*/
|
|
|
|
perf_session__process_events(session);
|
|
|
|
}
|
2015-06-30 08:15:24 +00:00
|
|
|
ret = 0;
|
|
|
|
goto error;
|
|
|
|
}
|
2018-01-07 16:03:56 +00:00
|
|
|
} else if (use_browser == 0 && !quiet &&
|
|
|
|
!report.stats_mode && !report.tasks_mode) {
|
2013-12-09 10:02:49 +00:00
|
|
|
fputs("# To display the perf.data header info, please use --header/--header-only options.\n#\n",
|
|
|
|
stdout);
|
|
|
|
}
|
|
|
|
|
2010-05-12 02:18:06 +00:00
|
|
|
/*
|
2013-03-28 14:34:10 +00:00
|
|
|
* Only in the TUI browser we are doing integrated annotation,
|
2010-05-12 02:18:06 +00:00
|
|
|
* so don't allocate extra space that won't be used in the stdio
|
|
|
|
* implementation.
|
|
|
|
*/
|
perf report: Sort by sampled cycles percent per block for stdio
It would be useful to support sorting all blocks by the sampled cycles
percent per block, which makes it easier to concentrate on the globally
hottest blocks.
This patch implements a new option "--total-cycles" which sorts all
blocks by 'Sampled Cycles%'. The 'Sampled Cycles%' is the percent:
percent = block sampled cycles aggregation / total sampled cycles
Note that this patch only supports "--stdio" mode.
For example,
# perf record -b ./div
# perf report --total-cycles --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 2M of event 'cycles'
# Event count (approx.): 2753248
#
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
# ............... .............. ........... .......... ................................................ .................
#
26.04% 2.8M 0.40% 18 [div.c:42 -> div.c:39] div
15.17% 1.2M 0.16% 7 [random_r.c:357 -> random_r.c:380] libc-2.27.so
5.11% 402.0K 0.04% 2 [div.c:27 -> div.c:28] div
4.87% 381.6K 0.04% 2 [random.c:288 -> random.c:291] libc-2.27.so
4.53% 381.0K 0.04% 2 [div.c:40 -> div.c:40] div
3.85% 300.9K 0.02% 1 [div.c:22 -> div.c:25] div
3.08% 241.1K 0.02% 1 [rand.c:26 -> rand.c:27] libc-2.27.so
3.06% 240.0K 0.02% 1 [random.c:291 -> random.c:291] libc-2.27.so
2.78% 215.7K 0.02% 1 [random.c:298 -> random.c:298] libc-2.27.so
2.52% 198.3K 0.02% 1 [random.c:293 -> random.c:293] libc-2.27.so
2.36% 184.8K 0.02% 1 [rand.c:28 -> rand.c:28] libc-2.27.so
2.33% 180.5K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so
2.28% 176.7K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so
2.20% 168.8K 0.02% 1 [rand@plt+0 -> rand@plt+0] div
1.98% 158.2K 0.02% 1 [random_r.c:388 -> random_r.c:388] libc-2.27.so
1.57% 123.3K 0.02% 1 [div.c:42 -> div.c:44] div
1.44% 116.0K 0.42% 19 [random_r.c:357 -> random_r.c:394] libc-2.27.so
0.25% 182.5K 0.02% 1 [random_r.c:388 -> random_r.c:391] libc-2.27.so
0.00% 48 1.07% 48 [x86_pmu_enable+284 -> x86_pmu_enable+298] [kernel.kallsyms]
0.00% 74 1.64% 74 [vm_mmap_pgoff+0 -> vm_mmap_pgoff+92] [kernel.kallsyms]
0.00% 73 1.62% 73 [vm_mmap+0 -> vm_mmap+48] [kernel.kallsyms]
0.00% 63 0.69% 31 [up_write+0 -> up_write+34] [kernel.kallsyms]
0.00% 13 0.29% 13 [setup_arg_pages+396 -> setup_arg_pages+413] [kernel.kallsyms]
0.00% 3 0.07% 3 [setup_arg_pages+418 -> setup_arg_pages+450] [kernel.kallsyms]
0.00% 616 6.84% 308 [security_mmap_file+0 -> security_mmap_file+72] [kernel.kallsyms]
0.00% 23 0.51% 23 [security_mmap_file+77 -> security_mmap_file+87] [kernel.kallsyms]
0.00% 4 0.02% 1 [sched_clock+0 -> sched_clock+4] [kernel.kallsyms]
0.00% 4 0.02% 1 [sched_clock+9 -> sched_clock+12] [kernel.kallsyms]
0.00% 1 0.02% 1 [rcu_nmi_exit+0 -> rcu_nmi_exit+9] [kernel.kallsyms]
Committer testing:
This should provide material for hours of endless joy, both from looking
for suspicious things in the implementation of this patch, such as the
top one:
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux]
As well as from things that look legit:
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
0.16% 123.0K 0.60% 4.7K [nospec-branch.h:265 -> nospec-branch.h:278] [kernel.vmlinux]
:-)
Very short system wide taken branches session:
# perf record -h -b
Usage: perf record [<options>] [<command>]
or: perf record [<options>] -- <command> [<options>]
-b, --branch-any sample any taken branches
#
# perf record -b
^C[ perf record: Woken up 595 times to write data ]
[ perf record: Captured and wrote 156.672 MB perf.data (196873 samples) ]
#
# perf evlist -v
cycles: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|CPU|PERIOD|BRANCH_STACK, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1, branch_sample_type: ANY
#
# perf report --total-cycles --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 6M of event 'cycles'
# Event count (approx.): 6299936
#
# Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object
# ............... .............. ........... .......... ...................................................................... ....................
#
2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux]
1.75% 1.3M 8.34% 65.5K [memset-vec-unaligned-erms.S:147 -> memset-vec-unaligned-erms.S:151] libc-2.29.so
0.72% 544.5K 0.03% 230 [entry_64.S:657 -> entry_64.S:662] [kernel.vmlinux]
0.56% 541.8K 0.09% 672 [compiler.h:199 -> common.c:300] [kernel.vmlinux]
0.39% 293.2K 0.01% 104 [list_debug.c:43 -> list_debug.c:61] [kernel.vmlinux]
0.36% 278.6K 0.03% 272 [entry_64.S:1289 -> entry_64.S:1308] [kernel.vmlinux]
0.30% 260.8K 0.07% 564 [clear_page_64.S:47 -> clear_page_64.S:50] [kernel.vmlinux]
0.28% 215.3K 0.05% 369 [traps.c:623 -> traps.c:628] [kernel.vmlinux]
0.23% 178.1K 0.04% 278 [entry_64.S:271 -> entry_64.S:275] [kernel.vmlinux]
0.20% 152.6K 0.09% 706 [paravirt.c:177 -> paravirt.c:179] [kernel.vmlinux]
0.20% 155.8K 0.05% 373 [entry_64.S:153 -> entry_64.S:175] [kernel.vmlinux]
0.18% 136.6K 0.03% 222 [msr.h:105 -> msr.h:166] [kernel.vmlinux]
0.16% 123.0K 0.60% 4.7K [nospec-branch.h:265 -> nospec-branch.h:278] [kernel.vmlinux]
0.16% 118.3K 0.01% 44 [entry_64.S:632 -> entry_64.S:657] [kernel.vmlinux]
0.14% 104.5K 0.00% 28 [rwsem.c:1541 -> rwsem.c:1544] [kernel.vmlinux]
0.13% 99.2K 0.01% 53 [spinlock.c:150 -> spinlock.c:152] [kernel.vmlinux]
0.13% 95.5K 0.00% 35 [swap.c:456 -> swap.c:471] [kernel.vmlinux]
0.12% 96.2K 0.05% 407 [copy_user_64.S:175 -> copy_user_64.S:209] [kernel.vmlinux]
0.11% 85.9K 0.00% 31 [swap.c:400 -> page-flags.h:188] [kernel.vmlinux]
0.10% 73.0K 0.01% 52 [paravirt.h:763 -> list.h:131] [kernel.vmlinux]
0.07% 56.2K 0.03% 214 [filemap.c:1524 -> filemap.c:1557] [kernel.vmlinux]
0.07% 54.2K 0.02% 145 [memory.c:1032 -> memory.c:1049] [kernel.vmlinux]
0.07% 50.3K 0.00% 39 [mmzone.c:49 -> mmzone.c:69] [kernel.vmlinux]
0.06% 48.3K 0.01% 40 [paravirt.h:768 -> page_alloc.c:3304] [kernel.vmlinux]
0.06% 46.7K 0.02% 155 [memory.c:1032 -> memory.c:1056] [kernel.vmlinux]
0.06% 46.9K 0.01% 103 [swap.c:867 -> swap.c:902] [kernel.vmlinux]
0.06% 47.8K 0.00% 34 [entry_64.S:1201 -> entry_64.S:1202] [kernel.vmlinux]
-----------------------------------------------------------
v7:
---
Use use_browser in report__browse_block_hists for supporting
stdio and potential tui mode.
v6:
---
Create report__browse_block_hists in block-info.c (the code is
moved from builtin-report.c). It's called from
perf_evlist__tty_browse_hists.
v5:
---
1. Move all block functions to block-info.c
2. Move the code of setting ms in block hist_entry to
other patch.
v4:
---
1. Use new option '--total-cycles' to replace
'-s total_cycles' in v3.
2. Move block info collection out of block info
printing.
v3:
---
1. Use common function block_info__process_sym to
process the blocks per symbol.
2. Remove the nasty hack for skipping calculation
of column length
3. Some minor cleanup
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20191107074719.26139-6-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-07 07:47:17 +00:00
if (ui__has_annotation() || report.symbol_ipc ||
report.total_cycles_mode) {
2016-08-25 19:09:21 +00:00
ret = symbol__annotation_init();
if (ret < 0)
goto error;
2010-08-05 22:28:27 +00:00
/*
* For searching by name on the "Browse map details".
* providing it only in verbose mode not to bloat too
* much struct symbol.
*/
2017-02-17 08:17:38 +00:00
if (verbose > 0) {
2010-08-05 22:28:27 +00:00
/*
* XXX: Need to provide a less kludgy way to ask for
* more space per symbol, the u32 is for the index on
* the ui browser.
* See symbol__browser_index.
*/
symbol_conf.priv_size += sizeof(u32);
symbol_conf.sort_by_name = true;
}
perf annotate: Make perf config effective
The perf default config set by the user in the [annotate] section is
totally ignored by the annotate code. Fix it.
Before:
$ ./perf config
annotate.hide_src_code=true
annotate.show_nr_jumps=true
annotate.show_nr_samples=true
$ ./perf annotate shash
│ unsigned h = 0;
│ movl $0x0,-0xc(%rbp)
│ while (*s)
│ ↓ jmp 44
│ h = 65599 * h + *s++;
11.33 │24: mov -0xc(%rbp),%eax
43.50 │ imul $0x1003f,%eax,%ecx
│ mov -0x18(%rbp),%rax
After:
│ movl $0x0,-0xc(%rbp)
│ ↓ jmp 44
1 │1 24: mov -0xc(%rbp),%eax
4 │ imul $0x1003f,%eax,%ecx
│ mov -0x18(%rbp),%rax
Note that we have removed show_nr_samples and show_total_period from
annotation_options because they are not used. Instead, we use
symbol_conf.show_nr_samples and symbol_conf.show_total_period.
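As a rough sketch of how an [annotate] key could be applied to the
in-memory options (illustrative only; the struct, callback and value
handling below are not the actual tools/perf implementation):
  #include <stdbool.h>
  #include <string.h>

  struct annotate_opts_sketch {
          bool hide_src_code;
          bool show_nr_jumps;
          bool show_nr_samples;
  };

  /* Hedged sketch: apply one "annotate.*" config key to the options. */
  static int annotate_config_cb(const char *var, const char *value,
                                struct annotate_opts_sketch *opts)
  {
          bool on = value && !strcmp(value, "true");

          if (!strcmp(var, "annotate.hide_src_code"))
                  opts->hide_src_code = on;
          else if (!strcmp(var, "annotate.show_nr_jumps"))
                  opts->show_nr_jumps = on;
          else if (!strcmp(var, "annotate.show_nr_samples"))
                  opts->show_nr_samples = on;
          return 0;
  }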
Committer testing:
Using 'perf annotate --stdio2' to use the TUI rendering but emitting the output to stdio:
# perf config
#
# perf config annotate.hide_src_code=true
# perf config
annotate.hide_src_code=true
#
# perf config annotate.show_nr_jumps=true
# perf config annotate.show_nr_samples=true
# perf config
annotate.hide_src_code=true
annotate.show_nr_jumps=true
annotate.show_nr_samples=true
#
#
Before:
# perf annotate --stdio2 ObjectInstance::weak_pointer_was_finalized
Samples: 1 of event 'cycles', 4000 Hz, Event count (approx.): 830873, [percent: local period]
ObjectInstance::weak_pointer_was_finalized() /usr/lib64/libgjs.so.0.0.0
Percent
00000000000609f0 <ObjectInstance::weak_pointer_was_finalized()@@Base>:
endbr64
cmpq $0x0,0x20(%rdi)
↓ je 10
xor %eax,%eax
← retq
xchg %ax,%ax
100.00 10: push %rbp
cmpq $0x0,0x18(%rdi)
mov %rdi,%rbp
↓ jne 20
1b: xor %eax,%eax
pop %rbp
← retq
nop
20: lea 0x18(%rdi),%rdi
→ callq JS_UpdateWeakPointerAfterGC(JS::Heap<JSObject*
cmpq $0x0,0x18(%rbp)
↑ jne 1b
mov %rbp,%rdi
→ callq ObjectBase::jsobj_addr() const@plt
mov $0x1,%eax
pop %rbp
← retq
#
After:
# perf annotate --stdio2 ObjectInstance::weak_pointer_was_finalized 2> /dev/null
Samples: 1 of event 'cycles', 4000 Hz, Event count (approx.): 830873, [percent: local period]
ObjectInstance::weak_pointer_was_finalized() /usr/lib64/libgjs.so.0.0.0
Samples endbr64
cmpq $0x0,0x20(%rdi)
↓ je 10
xor %eax,%eax
← retq
xchg %ax,%ax
1 1 10: push %rbp
cmpq $0x0,0x18(%rdi)
mov %rdi,%rbp
↓ jne 20
1 1b: xor %eax,%eax
pop %rbp
← retq
nop
1 20: lea 0x18(%rdi),%rdi
→ callq JS_UpdateWeakPointerAfterGC(JS::Heap<JSObject*
cmpq $0x0,0x18(%rbp)
↑ jne 1b
mov %rbp,%rdi
→ callq ObjectBase::jsobj_addr() const@plt
mov $0x1,%eax
pop %rbp
← retq
#
# perf config annotate.show_nr_jumps
annotate.show_nr_jumps=true
# perf config annotate.show_nr_jumps=false
# perf config annotate.show_nr_jumps
annotate.show_nr_jumps=false
#
# perf annotate --stdio2 ObjectInstance::weak_pointer_was_finalized 2> /dev/null
Samples: 1 of event 'cycles', 4000 Hz, Event count (approx.): 830873, [percent: local period]
ObjectInstance::weak_pointer_was_finalized() /usr/lib64/libgjs.so.0.0.0
Samples endbr64
cmpq $0x0,0x20(%rdi)
↓ je 10
xor %eax,%eax
← retq
xchg %ax,%ax
1 10: push %rbp
cmpq $0x0,0x18(%rdi)
mov %rdi,%rbp
↓ jne 20
1b: xor %eax,%eax
pop %rbp
← retq
nop
20: lea 0x18(%rdi),%rdi
→ callq JS_UpdateWeakPointerAfterGC(JS::Heap<JSObject*
cmpq $0x0,0x18(%rbp)
↑ jne 1b
mov %rbp,%rdi
→ callq ObjectBase::jsobj_addr() const@plt
mov $0x1,%eax
pop %rbp
← retq
#
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
Cc: Changbin Du <changbin.du@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Song Liu <songliubraving@fb.com>
Cc: Taeung Song <treeze.taeung@gmail.com>
Cc: Thomas Richter <tmricht@linux.ibm.com>
Cc: Yisheng Xie <xieyisheng1@huawei.com>
Link: http://lore.kernel.org/lkml/20200213064306.160480-6-ravi.bangoria@linux.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-02-13 06:43:03 +00:00
annotation_config__init(&report.annotation_opts);
2010-08-05 22:28:27 +00:00
}
2009-12-15 22:04:40 +00:00
2014-08-12 06:40:45 +00:00
if (symbol__init(&session->header.env) < 0)
2012-03-08 22:47:47 +00:00
goto error;
2009-05-26 07:17:18 +00:00
perf time-utils: Refactor time range parsing code
Jiri points out that we don't need any time checking and time string
parsing if the --time option is not set. That makes sense.
This patch refactors the time range parsing code: it moves the
duplicated code from perf report and perf script to time-utils and
checks whether the --time option is set before parsing the time string.
No logic change is expected, so the usage of --time is the same as
before.
For example:
Select the first and second 10% time slices:
perf report --time 10%/1,10%/2
perf script --time 10%/1,10%/2
Select the slices from 0% to 10% and from 30% to 40%:
perf report --time 0%-10%,30%-40%
perf script --time 0%-10%,30%-40%
Select the time slices from timestamp 3971 to 3973
perf report --time 3971,3973
perf script --time 3971,3973
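As a rough illustration of what a slice such as 10%/2 means (a sketch
only, not the actual time-utils code; all names are illustrative):
  /* Boundaries of the idx-th slice of size 'percent'% of a session that
   * runs from first_time to last_time; e.g. 10%/2 -> percent=10, idx=2. */
  static void percent_slice(double first_time, double last_time,
                            int percent, int idx,
                            double *start, double *end)
  {
          double len = (last_time - first_time) * percent / 100.0;

          *start = first_time + (idx - 1) * len;
          *end = *start + len;
  }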
Committer testing:
Using the above examples, check before and after to see if it remains
the same:
$ perf record -F 10000 -- find . -name "*.[ch]" -exec cat {} + > /dev/null
[ perf record: Woken up 3 times to write data ]
[ perf record: Captured and wrote 1.626 MB perf.data (42392 samples) ]
$
$ perf report --time 10%/1,10%/2 > /tmp/report.before.1
$ perf script --time 10%/1,10%/2 > /tmp/script.before.1
$ perf report --time 0%-10%,30%-40% > /tmp/report.before.2
$ perf script --time 0%-10%,30%-40% > /tmp/script.before.2
$ perf report --time 180457.375844,180457.377717 > /tmp/report.before.3
$ perf script --time 180457.375844,180457.377717 > /tmp/script.before.3
For example, the 3rd test produces this slice:
$ cat /tmp/script.before.3
cat 3147 180457.375844: 2143 cycles:uppp: 7f79362590d9 cfree@GLIBC_2.2.5+0x9 (/usr/lib64/libc-2.28.so)
cat 3147 180457.375986: 2245 cycles:uppp: 558b70f3d86e [unknown] (/usr/bin/cat)
cat 3147 180457.376012: 2164 cycles:uppp: 7f7936257430 _int_malloc+0x8c0 (/usr/lib64/libc-2.28.so)
cat 3147 180457.376140: 2921 cycles:uppp: 558b70f3a554 [unknown] (/usr/bin/cat)
cat 3147 180457.376296: 2844 cycles:uppp: 7f7936258abe malloc+0x4e (/usr/lib64/libc-2.28.so)
cat 3147 180457.376431: 2717 cycles:uppp: 558b70f3b0ca [unknown] (/usr/bin/cat)
cat 3147 180457.376667: 2630 cycles:uppp: 558b70f3d86e [unknown] (/usr/bin/cat)
cat 3147 180457.376795: 2442 cycles:uppp: 7f79362bff55 read+0x15 (/usr/lib64/libc-2.28.so)
cat 3147 180457.376927: 2376 cycles:uppp: ffffffff9aa00163 [unknown] ([unknown])
cat 3147 180457.376954: 2307 cycles:uppp: 7f7936257438 _int_malloc+0x8c8 (/usr/lib64/libc-2.28.so)
cat 3147 180457.377116: 3091 cycles:uppp: 7f7936258a70 malloc+0x0 (/usr/lib64/libc-2.28.so)
cat 3147 180457.377362: 2945 cycles:uppp: 558b70f3a3b0 [unknown] (/usr/bin/cat)
cat 3147 180457.377517: 2727 cycles:uppp: 558b70f3a9aa [unknown] (/usr/bin/cat)
$
Install 'coreutils-debuginfo' to see cat's guts (symbols), but then, the
above chunk translates into this 'perf report' output:
$ cat /tmp/report.before.3
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 13 of event 'cycles:uppp' (time slices: 180457.375844,180457.377717)
# Event count (approx.): 33552
#
# Overhead Command Shared Object Symbol
# ........ ....... ................ ......................
#
17.69% cat libc-2.28.so [.] malloc
14.53% cat cat [.] 0x000000000000586e
13.33% cat libc-2.28.so [.] _int_malloc
8.78% cat cat [.] 0x00000000000023b0
8.71% cat cat [.] 0x0000000000002554
8.13% cat cat [.] 0x00000000000029aa
8.10% cat cat [.] 0x00000000000030ca
7.28% cat libc-2.28.so [.] read
7.08% cat [unknown] [k] 0xffffffff9aa00163
6.39% cat libc-2.28.so [.] cfree@GLIBC_2.2.5
#
# (Tip: Order by the overhead of source file name and line number: perf report -s srcline)
#
$
Now let's see after applying this patch; nothing should change:
$ perf report --time 10%/1,10%/2 > /tmp/report.after.1
$ perf script --time 10%/1,10%/2 > /tmp/script.after.1
$ perf report --time 0%-10%,30%-40% > /tmp/report.after.2
$ perf script --time 0%-10%,30%-40% > /tmp/script.after.2
$ perf report --time 180457.375844,180457.377717 > /tmp/report.after.3
$ perf script --time 180457.375844,180457.377717 > /tmp/script.after.3
$ diff -u /tmp/report.before.1 /tmp/report.after.1
$ diff -u /tmp/script.before.1 /tmp/script.after.1
$ diff -u /tmp/report.before.2 /tmp/report.after.2
--- /tmp/report.before.2 2019-03-01 11:01:53.526094883 -0300
+++ /tmp/report.after.2 2019-03-01 11:09:18.231770467 -0300
@@ -352,5 +352,5 @@
#
-# (Tip: Generate a script for your data: perf script -g <lang>)
+# (Tip: Treat branches as callchains: perf report --branch-history)
#
$ diff -u /tmp/script.before.2 /tmp/script.after.2
$ diff -u /tmp/report.before.3 /tmp/report.after.3
--- /tmp/report.before.3 2019-03-01 11:03:08.890045588 -0300
+++ /tmp/report.after.3 2019-03-01 11:09:40.660224002 -0300
@@ -22,5 +22,5 @@
#
-# (Tip: Order by the overhead of source file name and line number: perf report -s srcline)
+# (Tip: List events using substring match: perf list <keyword>)
#
$ diff -u /tmp/script.before.3 /tmp/script.after.3
$
Cool, just the 'perf report' tips changed, QED.
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1551435186-6008-1-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-03-01 10:13:06 +00:00
if (report.time_str) {
ret = perf_time__parse_for_ranges(report.time_str, session,
&report.ptime_range,
&report.range_size,
&report.range_num);
if (ret < 0)
perf report: Remove the time slices number limitation
Previously it was only allowed to use at most 10 time slices in 'perf
report --time'.
This patch removes this limitation.
For example, the following command line is OK (12 time slices):
perf report --stdio --time 1%/1,1%/2,1%/3,1%/4,1%/5,1%/6,1%/7,1%/8,1%/9,1%/10,1%/11,1%/12
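A minimal sketch of sizing the range array from the option string
instead of using a fixed limit (names are illustrative, not the actual
time-utils code):
  #include <stdlib.h>

  struct ptime_range_sketch { double start, end; };

  /* Hedged sketch: allocate one range entry per comma-separated slice. */
  static struct ptime_range_sketch *alloc_ptime_ranges(const char *ostr, int *num)
  {
          int n = 1;
          const char *p;

          for (p = ostr; *p; p++)
                  if (*p == ',')
                          n++;
          *num = n;
          return calloc(n, sizeof(struct ptime_range_sketch));
  }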
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Suggested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1515596433-24653-8-git-send-email-yao.jin@linux.intel.com
[ No need to check for NULL to call free, use zfree ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-01-10 15:00:32 +00:00
goto error;
2019-06-04 13:00:01 +00:00
itrace_synth_opts__set_time_range(&itrace_synth_opts,
report.ptime_range,
report.range_num);
perf report: Add option to specify time window of interest
Add an option to allow the user to control the analysis window, e.g.,
collect data for a time window and analyze a segment of interest within
that window.
Committer notes:
Testing it:
Using the perf.data file captured via 'perf kmem record':
# perf report --header-only
# ========
# captured on: Tue Nov 29 16:01:53 2016
# hostname : jouet
# os release : 4.8.8-300.fc25.x86_64
# perf version : 4.9.rc6.g5a6aca
# arch : x86_64
# nrcpus online : 4
# nrcpus avail : 4
# cpudesc : Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz
# cpuid : GenuineIntel,6,61,4
# total memory : 20254660 kB
# cmdline : /home/acme/bin/perf kmem record usleep 1
# event : name = kmem:kmalloc, , id = { 931980, 931981, 931982, 931983 }, type = 2, size = 112, config = 0x1b9, { sample_period, sample_freq } = 1, sample_typ
# event : name = kmem:kmalloc_node, , id = { 931984, 931985, 931986, 931987 }, type = 2, size = 112, config = 0x1b7, { sample_period, sample_freq } = 1, sampl
# event : name = kmem:kfree, , id = { 931988, 931989, 931990, 931991 }, type = 2, size = 112, config = 0x1b5, { sample_period, sample_freq } = 1, sample_type
# event : name = kmem:kmem_cache_alloc, , id = { 931992, 931993, 931994, 931995 }, type = 2, size = 112, config = 0x1b8, { sample_period, sample_freq } = 1, s
# event : name = kmem:kmem_cache_alloc_node, , id = { 931996, 931997, 931998, 931999 }, type = 2, size = 112, config = 0x1b6, { sample_period, sample_freq } =
# event : name = kmem:kmem_cache_free, , id = { 932000, 932001, 932002, 932003 }, type = 2, size = 112, config = 0x1b4, { sample_period, sample_freq } = 1, sa
# HEADER_CPU_TOPOLOGY info available, use -I to display
# HEADER_NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: cpu = 4, intel_pt = 7, intel_bts = 6, uncore_arb = 13, cstate_pkg = 15, breakpoint = 5, uncore_cbox_1 = 12, power = 9, software = 1, uncore_im
# HEADER_CACHE info available, use -I to display
# missing features: HEADER_BRANCH_STACK HEADER_GROUP_DESC HEADER_AUXTRACE HEADER_STAT
# ========
#
# # Looking at just the histogram entries for the first event:
#
# perf report | head -33
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 40 of event 'kmem:kmalloc'
# Event count (approx.): 40
#
# Overhead Trace output
# ........ ...............................................................................................................
#
37.50% call_site=ffffffffb91ad3c7 ptr=0xffff88895fc05000 bytes_req=4096 bytes_alloc=4096 gfp_flags=GFP_KERNEL
10.00% call_site=ffffffffb9258416 ptr=0xffff888a1dc61f00 bytes_req=240 bytes_alloc=256 gfp_flags=GFP_KERNEL|__GFP_ZERO
7.50% call_site=ffffffffb9258416 ptr=0xffff888a2640ac00 bytes_req=240 bytes_alloc=256 gfp_flags=GFP_KERNEL|__GFP_ZERO
2.50% call_site=ffffffffb92759ba ptr=0xffff888a26776000 bytes_req=4096 bytes_alloc=4096 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb9276864 ptr=0xffff8886f6b82600 bytes_req=136 bytes_alloc=192 gfp_flags=GFP_KERNEL|__GFP_ZERO
2.50% call_site=ffffffffb9276903 ptr=0xffff888aefcf0460 bytes_req=32 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad0ce ptr=0xffff888756c98a00 bytes_req=392 bytes_alloc=512 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad0ce ptr=0xffff888756c9ba00 bytes_req=504 bytes_alloc=512 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad301 ptr=0xffff888a31747600 bytes_req=128 bytes_alloc=128 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb92ad511 ptr=0xffff888a9d26a2a0 bytes_req=28 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c11a0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c12c0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c1540 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c15a0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c15e0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c16e0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff88873e8c1c20 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb936a7fb ptr=0xffff888a9d26a2a0 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL
2.50% call_site=ffffffffb9373e66 ptr=0xffff8889f1931240 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_ATOMIC|__GFP_ZERO
2.50% call_site=ffffffffb9373e66 ptr=0xffff8889f1931980 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_ATOMIC|__GFP_ZERO
2.50% call_site=ffffffffb9373e66 ptr=0xffff8889f1931a00 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_ATOMIC|__GFP_ZERO
#
# # And then limiting using the example for 'perf kmem stat --time' used
# # in the previous changeset committer note we see that there were no
# # kmem:kmalloc in that last part of the file, but there were some
# # kmem:kmem_cache_alloc ones:
#
# perf report --time 20119.782088, --stdio
#
# Total Lost Samples: 0
#
# Samples: 0 of event 'kmem:kmalloc'
# Event count (approx.): 0
#
# Overhead Trace output
# ........ ............
#
# Samples: 0 of event 'kmem:kmalloc_node'
# Event count (approx.): 0
#
# Overhead Trace output
# ........ ............
#
# Samples: 0 of event 'kmem:kfree'
# Event count (approx.): 0
#
# Overhead Trace output
# ........ ............
#
# Samples: 8 of event 'kmem:kmem_cache_alloc'
# Event count (approx.): 8
#
# Overhead Trace output
# ........ ..................................................................................................................
#
75.00% call_site=ffffffffb9333b42 ptr=0xffff888bdf1a39c0 bytes_req=48 bytes_alloc=48 gfp_flags=GFP_NOFS|__GFP_ZERO
12.50% call_site=ffffffffb90ad33a ptr=0xffff8889f071f6e0 bytes_req=160 bytes_alloc=160 gfp_flags=GFP_ATOMIC|__GFP_NOTRACK
12.50% call_site=ffffffffb9287cc1 ptr=0xffff8889b12722d8 bytes_req=104 bytes_alloc=104 gfp_flags=GFP_NOFS|__GFP_ZERO
#
Signed-off-by: David Ahern <dsahern@gmail.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1480439746-42695-7-git-send-email-dsahern@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-29 17:15:46 +00:00
}
perf build: Use libtraceevent from the system
Remove the LIBTRACEEVENT_DYNAMIC and LIBTRACEFS_DYNAMIC make command
line variables.
If libtraceevent isn't installed or NO_LIBTRACEEVENT=1 is passed to the
build, don't compile in libtraceevent and libtracefs support.
This also disables CONFIG_TRACE that controls "perf trace".
CONFIG_LIBTRACEEVENT is used to control enablement in Build/Makefiles,
HAVE_LIBTRACEEVENT is used in C code.
Without HAVE_LIBTRACEEVENT, tracepoints are disabled, and as such the
commands kmem, kwork, lock, sched and timechart are removed. The
majority of commands continue to work, including "perf test".
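A minimal sketch of the C-side split (illustrative only; the helper
below is hypothetical, while the Build-side CONFIG_LIBTRACEEVENT usage
is shown in the quoted hunks further down):
  #include <stdbool.h>

  #ifdef HAVE_LIBTRACEEVENT
  #include <traceevent/event-parse.h>

  static bool tracepoints_supported(void)
  {
          return true;    /* tracepoint parsing is compiled in */
  }
  #else
  static bool tracepoints_supported(void)
  {
          return false;   /* commands that need tracepoints are removed */
  }
  #endif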
Committer notes:
Fixed up a tools/perf/util/Build reject and added:
#include <traceevent/event-parse.h>
to tools/perf/util/scripting-engines/trace-event-perl.c.
Committer testing:
$ rpm -qi libtraceevent-devel
Name : libtraceevent-devel
Version : 1.5.3
Release : 2.fc36
Architecture: x86_64
Install Date: Mon 25 Jul 2022 03:20:19 PM -03
Group : Unspecified
Size : 27728
License : LGPLv2+ and GPLv2+
Signature : RSA/SHA256, Fri 15 Apr 2022 02:11:58 PM -03, Key ID 999f7cbf38ab71f4
Source RPM : libtraceevent-1.5.3-2.fc36.src.rpm
Build Date : Fri 15 Apr 2022 10:57:01 AM -03
Build Host : buildvm-x86-05.iad2.fedoraproject.org
Packager : Fedora Project
Vendor : Fedora Project
URL : https://git.kernel.org/pub/scm/libs/libtrace/libtraceevent.git/
Bug URL : https://bugz.fedoraproject.org/libtraceevent
Summary : Development headers of libtraceevent
Description :
Development headers of libtraceevent-libs
$
Default build:
$ ldd ~/bin/perf | grep tracee
libtraceevent.so.1 => /lib64/libtraceevent.so.1 (0x00007f1dcaf8f000)
$
# perf trace -e sched:* --max-events 10
0.000 migration/0/17 sched:sched_migrate_task(comm: "", pid: 1603763 (perf), prio: 120, dest_cpu: 1)
0.005 migration/0/17 sched:sched_wake_idle_without_ipi(cpu: 1)
0.011 migration/0/17 sched:sched_switch(prev_comm: "", prev_pid: 17 (migration/0), prev_state: 1, next_comm: "", next_prio: 120)
1.173 :0/0 sched:sched_wakeup(comm: "", pid: 3138 (gnome-terminal-), prio: 120)
1.180 :0/0 sched:sched_switch(prev_comm: "", prev_prio: 120, next_comm: "", next_pid: 3138 (gnome-terminal-), next_prio: 120)
0.156 migration/1/21 sched:sched_migrate_task(comm: "", pid: 1603763 (perf), prio: 120, orig_cpu: 1, dest_cpu: 2)
0.160 migration/1/21 sched:sched_wake_idle_without_ipi(cpu: 2)
0.166 migration/1/21 sched:sched_switch(prev_comm: "", prev_pid: 21 (migration/1), prev_state: 1, next_comm: "", next_prio: 120)
1.183 :0/0 sched:sched_wakeup(comm: "", pid: 1602985 (kworker/u16:0-f), prio: 120, target_cpu: 1)
1.186 :0/0 sched:sched_switch(prev_comm: "", prev_prio: 120, next_comm: "", next_pid: 1602985 (kworker/u16:0-f), next_prio: 120)
#
Had to tweak tools/perf/util/setup.py to make sure the python binding
shared object links with libtraceevent if -DHAVE_LIBTRACEEVENT is
present in CFLAGS.
Building with NO_LIBTRACEEVENT=1 uncovered some more build failures:
- Make building of data-convert-bt.c conditional on CONFIG_LIBTRACEEVENT=y
- perf-$(CONFIG_LIBTRACEEVENT) += scripts/
- bpf_kwork.o also needs to be dependent on CONFIG_LIBTRACEEVENT=y
- The python binding needed some fixups and util/trace-event.c can't be
built and linked with the python binding shared object, so remove it
in tools/perf/util/setup.py and exclude it from the list of
dependencies in the python/perf.so Makefile.perf target.
Building without libtraceevent-devel installed uncovered more build
failures:
- The python binding tools/perf/util/python.c was assuming that
traceevent/parse-events.h was always available, which was the case
when we defaulted to using the in-kernel tools/lib/traceevent/ files,
now we need to enclose it under ifdef HAVE_LIBTRACEEVENT, just like
the other parts of it that deal with tracepoints.
- We have to ifdef the rules in the Build files with
CONFIG_LIBTRACEEVENT=y to build builtin-trace.c and
tools/perf/trace/beauty/, as we were only guarding CONFIG_TRACE=y
against NO_LIBTRACEEVENT=1 on the make command line, not against
libtraceevent-devel being absent from the system. Simplifying here
avoids having two ways of disabling builtin-trace.c; not setting
CONFIG_TRACE=y when libtraceevent-devel isn't installed is the clean
way.
From Athira:
<quote>
tools/perf/arch/powerpc/util/Build
-perf-y += kvm-stat.o
+perf-$(CONFIG_LIBTRACEEVENT) += kvm-stat.o
</quote>
Then, ditto for arm64 and s390, detected by container cross build tests.
- s/390 uses test__checkevent_tracepoint(), which is now only available if
HAVE_LIBTRACEEVENT is defined, so enclose the call site with ifdef HAVE_LIBTRACEEVENT.
Also from Athira:
<quote>
With this change, I could successfully compile in these environments:
- Without libtraceevent-devel installed
- With libtraceevent-devel installed
- With “make NO_LIBTRACEEVENT=1”
</quote>
Then, finally rename CONFIG_TRACEEVENT to CONFIG_LIBTRACEEVENT for
consistency with other libraries detected in tools/perf/.
Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: bpf@vger.kernel.org
Link: http://lore.kernel.org/lkml/20221205225940.3079667-3-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-12-05 22:59:39 +00:00
#ifdef HAVE_LIBTRACEEVENT
2018-03-08 03:28:50 +00:00
if (session->tevent.pevent &&
2018-08-08 18:02:55 +00:00
tep_set_function_resolver(session->tevent.pevent,
machine__resolve_kernel_addr,
&session->machines.host) < 0) {
2018-03-08 03:28:50 +00:00
pr_err("%s: failed to set libtraceevent function resolver\n",
__func__);
return -1;
}
perf build: Use libtraceevent from the system
2022-12-05 22:59:39 +00:00
#endif
2013-04-03 12:26:19 +00:00
sort__setup_elide(stdout);
2009-06-30 22:01:20 +00:00
2012-03-08 22:47:47 +00:00
ret = __cmd_report(&report);
2020-02-20 01:36:15 +00:00
if (ret == K_SWITCH_INPUT_DATA || ret == K_RELOAD) {
2013-02-03 06:38:21 +00:00
perf_session__delete(session);
2019-12-20 01:37:19 +00:00
last_key = K_SWITCH_INPUT_DATA;
2013-02-03 06:38:21 +00:00
goto repeat;
} else
ret = 0;
2012-03-08 22:47:47 +00:00
error:
2019-06-04 13:00:01 +00:00
if (report.ptime_range) {
itrace_synth_opts__clear_time_range(&itrace_synth_opts);
perf time-utils: Refactor time range parsing code
2019-03-01 10:13:06 +00:00
zfree(&report.ptime_range);
2019-06-04 13:00:01 +00:00
}
perf report: Sort by sampled cycles percent per block for stdio
2019-11-07 07:47:17 +00:00
2020-02-02 14:16:54 +00:00
if (report.block_reports) {
block_info__free_report(report.block_reports,
report.nr_block_reports);
report.block_reports = NULL;
}
perf report: Sort by sampled cycles percent per block for stdio
2019-11-07 07:47:17 +00:00
perf report: Implement perf.data record decompression
zstd_init(, comp_level = 0) initializes the decompression part of the
API only, which now consists of the zstd_decompress_stream() function.
The perf.data PERF_RECORD_COMPRESSED records are decompressed using
zstd_decompress_stream() function into a linked list of mmaped memory
regions of mmap_comp_len size (struct decomp).
After decompression of one COMPRESSED record its content is iterated and
fetched for usual processing. The mmaped memory regions with
decompressed events are kept in the linked list till the tool process
termination.
When dumping raw records (e.g., perf report -D --header) file offsets of
events from compressed records are printed as zero.
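A rough sketch of walking such a list (field and function names are
illustrative, not the exact struct decomp layout):
  #include <stddef.h>

  struct decomp_region {
          struct decomp_region *next;
          size_t size;    /* bytes of decompressed event data */
          char data[];    /* decompressed PERF_RECORD_* stream */
  };

  /* Hedged sketch: feed every decompressed region to normal processing. */
  static void for_each_decomp_region(struct decomp_region *head,
                                     void (*process)(void *buf, size_t len))
  {
          for (struct decomp_region *d = head; d; d = d->next)
                  process(d->data, d->size);
  }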
Committer notes:
Since we now have support for processing PERF_RECORD_COMPRESSED, we see
none in raw form, unlike in the previous patch's committer notes; they
were decompressed into the usual PERF_RECORD_{FORK,MMAP,COMM,etc}
records, and we only see the stats for those PERF_RECORD_COMPRESSED
events. Since I used the file generated in the committer notes for the
previous patch, there they are, 2 compressed records:
$ perf report --header-only | grep cmdline
# cmdline : /home/acme/bin/perf record -z2 sleep 1
$ perf report -D | grep COMPRESS
COMPRESSED events: 2
COMPRESSED events: 0
$ perf report --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 15 of event 'cycles:u'
# Event count (approx.): 962227
#
# Overhead Command Shared Object Symbol
# ........ ....... ................ ...........................
#
46.99% sleep libc-2.28.so [.] _dl_addr
29.24% sleep [unknown] [k] 0xffffffffaea00a67
16.45% sleep libc-2.28.so [.] __GI__IO_un_link.part.1
5.92% sleep ld-2.28.so [.] _dl_setup_hash
1.40% sleep libc-2.28.so [.] __nanosleep
0.00% sleep [unknown] [k] 0xffffffffaea00163
#
# (Tip: To see callchains in a more compact form: perf report -g folded)
#
$
Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/304b0a59-942c-3fe1-da02-aa749f87108b@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-03-18 17:45:11 +00:00
zstd_fini(&(session->zstd_data));
2012-03-08 22:47:47 +00:00
perf_session__delete(session);
2021-07-15 16:07:14 +00:00
exit:
free(sort_order_help);
free(field_order_help);
2012-03-08 22:47:47 +00:00
return ret;
2009-05-26 07:17:18 +00:00
}