linux/tools/perf/builtin-stat.c

/*
* builtin-stat.c
*
* Builtin stat command: Give a precise performance counters summary
* overview about any workload, CPU or specific PID.
*
* Sample output:
$ perf stat ./hackbench 10
Time: 0.118
Performance counter stats for './hackbench 10':
       1708.761321 task-clock                #   11.037 CPUs utilized
            41,190 context-switches          #    0.024 M/sec
             6,735 CPU-migrations            #    0.004 M/sec
            17,318 page-faults               #    0.010 M/sec
     5,205,202,243 cycles                    #    3.046 GHz
     3,856,436,920 stalled-cycles-frontend   #   74.09% frontend cycles idle
     1,600,790,871 stalled-cycles-backend    #   30.75% backend  cycles idle
     2,603,501,247 instructions              #    0.50  insns per cycle
                                             #    1.48  stalled cycles per insn
       484,357,498 branches                  #  283.455 M/sec
         6,388,934 branch-misses             #    1.32% of all branches

        0.154822978 seconds time elapsed
*
* Copyright (C) 2008-2011, Red Hat Inc, Ingo Molnar <mingo@redhat.com>
*
* Improvements and fixes by:
*
* Arjan van de Ven <arjan@linux.intel.com>
* Yanmin Zhang <yanmin.zhang@intel.com>
* Wu Fengguang <fengguang.wu@intel.com>
* Mike Galbraith <efault@gmx.de>
* Paul Mackerras <paulus@samba.org>
* Jaswinder Singh Rajput <jaswinder@kernel.org>
*
* Released under the GPL v2. (and only v2, not any later version)
*/
#include "perf.h"
#include "builtin.h"
#include "util/cgroup.h"
#include "util/util.h"
#include <subcmd/parse-options.h>
#include "util/parse-events.h"
#include "util/pmu.h"
#include "util/event.h"
#include "util/evlist.h"
#include "util/evsel.h"
#include "util/debug.h"
#include "util/drv_configs.h"
#include "util/color.h"
#include "util/stat.h"
#include "util/header.h"
#include "util/cpumap.h"
#include "util/thread.h"
#include "util/thread_map.h"
#include "util/counts.h"
#include "util/group.h"
#include "util/session.h"
#include "util/tool.h"
#include "util/string2.h"
#include "util/metricgroup.h"
#include "asm/bug.h"
#include <linux/time64.h>
#include <api/fs/fs.h>
#include <errno.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/prctl.h>
#include <inttypes.h>
#include <locale.h>
#include <math.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>
#include "sane_ctype.h"
#define DEFAULT_SEPARATOR " "
#define CNTR_NOT_SUPPORTED "<not supported>"
#define CNTR_NOT_COUNTED "<not counted>"
#define FREEZE_ON_SMI_PATH "devices/cpu/freeze_on_smi"
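/* Written relative to the sysfs mount, i.e. /sys/devices/cpu/freeze_on_smi. */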
static void print_counters(struct timespec *ts, int argc, const char **argv);
/* Default events used for perf stat -T */
static const char *transaction_attrs = {
"task-clock,"
"{"
"instructions,"
"cycles,"
"cpu/cycles-t/,"
"cpu/tx-start/,"
"cpu/el-start/,"
"cpu/cycles-ct/"
"}"
};
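
/*
 * A usage sketch for -T (assumes a TSX-capable CPU exposing all of the
 * events above):
 *
 *   $ perf stat -T ./workload
 *
 * cpu/cycles-t/ counts cycles spent in transactions, cpu/cycles-ct/
 * cycles in committed transactions; cpu/tx-start/ and cpu/el-start/
 * count transaction and lock-elision starts.
 */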
/* More limited version when the CPU does not have all events. */
static const char *transaction_limited_attrs = {
	"task-clock,"
	"{"
	"instructions,"
	"cycles,"
	"cpu/cycles-t/,"
	"cpu/tx-start/"
	"}"
};
static const char *topdown_attrs[] = {
	"topdown-total-slots",
	"topdown-slots-retired",
	"topdown-recovery-bubbles",
	"topdown-fetch-bubbles",
	"topdown-slots-issued",
	NULL,
};
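
/*
 * A usage sketch for --topdown (level 1 only; --metric-only is implied
 * unless overridden, see force_metric_only below):
 *
 *   $ perf stat --topdown -a -- sleep 1
 *
 * The five raw events above feed the four level-1 metrics:
 * FrontendBound, BackendBound, BadSpeculation and Retiring.
 */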
static const char *smi_cost_attrs = {
"{"
"msr/aperf/,"
"msr/smi/,"
"cycles"
"}"
};
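
/*
 * A sketch of --smi-cost: with freeze_on_smi set (FREEZE_ON_SMI_PATH
 * above), the cycles counter stops across SMIs while msr/aperf/ keeps
 * counting, so aperf - cycles approximates the cycles spent in SMI
 * handlers:
 *
 *   $ sudo perf stat -a --smi-cost -- sleep 1
 */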
static struct perf_evlist *evsel_list;
static struct rblist metric_events;
static struct target target = {
	.uid	= UINT_MAX,
};
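
/* .uid == UINT_MAX means no uid filtering (no --uid given). */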
typedef int (*aggr_get_id_t)(struct cpu_map *m, int cpu);
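
/*
 * Aggregation modes such as --per-socket and --per-core install an
 * aggr_get_id_t callback that maps a CPU to the id its counts are
 * summed under.
 */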
static int run_count = 1;
static bool no_inherit = false;
static volatile pid_t child_pid = -1;
static bool null_run = false;
static int detailed_run = 0;
static bool transaction_run;
static bool topdown_run = false;
static bool smi_cost = false;
static bool smi_reset = false;
static bool big_num = true;
static int big_num_opt = -1;
static const char *csv_sep = NULL;
static bool csv_output = false;
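/*
 * -x/--field-separator switches to CSV-style output; it cannot be
 * combined with -B/--big-num, so big_num is disabled when -x is given
 * unless the user set -B explicitly (tracked via big_num_opt above).
 */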
static bool group = false;
static const char *pre_cmd = NULL;
static const char *post_cmd = NULL;
static bool sync_run = false;
static unsigned int initial_delay = 0;
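/* --initial-delay/-D: milliseconds to wait before starting to count. */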
static unsigned int unit_width = 4; /* strlen("unit") */
static bool forever = false;
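/* --metric-only: print just the computed metrics, one compact line per sample */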
static bool metric_only = false;
static bool force_metric_only = false;
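/* --no-merge: do not sum up counts of identically named events (uncore aliases) */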
static bool no_merge = false;
static struct timespec ref_time;
static struct cpu_map *aggr_map;
static aggr_get_id_t aggr_get_id;
static bool append_file;
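/* --interval-count: with -I, stop after a fixed number of intervals */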
static bool interval_count;
static const char *output_name;
static int output_fd;
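/* set when counters could not be scheduled, to hint at disabling the NMI watchdog */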
static int print_free_counters_hint;
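/*
* State for 'perf stat record': the perf.data output, the session it
* belongs to, and the cpu/thread maps the counters were created with.
*/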
struct perf_stat {
bool record;
struct perf_data data;
struct perf_session *session;
u64 bytes_written;
struct perf_tool tool;
bool maps_allocated;
struct cpu_map *cpus;
struct thread_map *threads;
enum aggr_mode aggr_mode;
};
static struct perf_stat perf_stat;
#define STAT_RECORD perf_stat.record
static volatile int done = 0;
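/* defaults: aggregate counts across all CPUs and scale for time-multiplexed counters */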
static struct perf_stat_config stat_config = {
.aggr_mode = AGGR_GLOBAL,
.scale = true,
};
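/* "duration_time" is a tool-internal event, matched by name so callers can special-case it */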
static bool is_duration_time(struct perf_evsel *evsel)
{
return !strcmp(evsel->name, "duration_time");
}
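/* r = a - b, normalizing tv_nsec into [0, NSEC_PER_SEC); assumes a >= b */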
static inline void diff_timespec(struct timespec *r, struct timespec *a,
struct timespec *b)
{
r->tv_sec = a->tv_sec - b->tv_sec;
if (a->tv_nsec < b->tv_nsec) {
r->tv_nsec = a->tv_nsec + NSEC_PER_SEC - b->tv_nsec;
r->tv_sec--;
} else {
r->tv_nsec = a->tv_nsec - b->tv_nsec;
}
}
static void perf_stat__reset_stats(void)
{
int i;
perf_evlist__reset_stats(evsel_list);
perf_stat__reset_shadow_stats();
for (i = 0; i < stat_config.stats_num; i++)
perf_stat__reset_shadow_per_stat(&stat_config.stats[i]);
}
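/*
* Configure the attr for one counter (read format, inheritance,
* enable-on-exec) and open it per cpu or per thread, depending on
* the target.
*/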
static int create_perf_stat_counter(struct perf_evsel *evsel)
{
struct perf_event_attr *attr = &evsel->attr;
struct perf_evsel *leader = evsel->leader;
if (stat_config.scale) {
attr->read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
PERF_FORMAT_TOTAL_TIME_RUNNING;
}
/*
* The event is part of a non-trivial group. Enable group reads
* for the leader and ID retrieval for all members.
*/
if (leader->nr_members > 1)
attr->read_format |= PERF_FORMAT_ID|PERF_FORMAT_GROUP;
attr->inherit = !no_inherit;
/*
* Some events, tracepoints for example, get initialized with
* sample_period/sample_type set. Clear them for counting.
*/
attr->sample_period = 0;
/*
* But set sample_type to PERF_SAMPLE_IDENTIFIER, which should be
* harmless and keeps older tools from printing confusing messages.
*
* However, for pipe sessions we need to keep it zero, because
* perf script's perf_evsel__check_attr() triggers on
* attr->sample_type != 0 and we can't run it on stat sessions.
*/
if (!(STAT_RECORD && perf_stat.data.is_pipe))
attr->sample_type = PERF_SAMPLE_IDENTIFIER;
/*
* Disable all counters initially; they will be enabled either
* manually by us or by the kernel via enable_on_exec, set later.
*/
if (perf_evsel__is_group_leader(evsel)) {
attr->disabled = 1;
/*
* In case of initial_delay we enable the tracee's
* events manually.
*/
if (target__none(&target) && !initial_delay)
attr->enable_on_exec = 1;
}
if (target__has_cpu(&target) && !target__has_per_thread(&target))
return perf_evsel__open_per_cpu(evsel, perf_evsel__cpus(evsel));
return perf_evsel__open_per_thread(evsel, evsel_list->threads);
}
/*
* Does the counter have nsecs as a unit?
*/
static inline int nsec_counter(struct perf_evsel *evsel)
{
if (perf_evsel__match(evsel, SOFTWARE, SW_CPU_CLOCK) ||
perf_evsel__match(evsel, SOFTWARE, SW_TASK_CLOCK))
return 1;
return 0;
}
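/*
* perf_tool callback for 'perf stat record': append a synthesized
* event to the perf.data file and account for the bytes written.
*/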
static int process_synthesized_event(struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
{
if (perf_data__write(&perf_stat.data, event, event->header.size) < 0) {
pr_err("failed to write perf data, error: %m\n");
return -1;
}
perf_stat.bytes_written += event->header.size;
return 0;
}
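/*
* Emit a PERF_RECORD_STAT_ROUND event marking the end of one
* measurement round.
*/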
static int write_stat_round_event(u64 tm, u64 type)
{
return perf_event__synthesize_stat_round(NULL, tm, type,
process_synthesized_event,
NULL);
}
#define WRITE_STAT_ROUND_EVENT(time, interval) \
write_stat_round_event(time, PERF_STAT_ROUND_TYPE__ ## interval)
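/*
* For example, WRITE_STAT_ROUND_EVENT(walltime, FINAL) expands to
* write_stat_round_event(walltime, PERF_STAT_ROUND_TYPE__FINAL).
*/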
#define SID(e, x, y) xyarray__entry(e->sample_id, x, y)
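/*
* Synthesize one PERF_RECORD_STAT event with the counter value for a
* given (cpu, thread), tagged with that instance's sample id.
*/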
static int
perf_evsel__write_stat_event(struct perf_evsel *counter, u32 cpu, u32 thread,
struct perf_counts_values *count)
{
struct perf_sample_id *sid = SID(counter, cpu, thread);
return perf_event__synthesize_stat(NULL, cpu, thread, sid->id, count,
process_synthesized_event, NULL);
}
/*
* Read out the results of a single counter:
* do not aggregate counts across CPUs in system-wide mode
*/
static int read_counter(struct perf_evsel *counter)
{
int nthreads = thread_map__nr(evsel_list->threads);
int ncpus, cpu, thread;
if (target__has_cpu(&target) && !target__has_per_thread(&target))
ncpus = perf_evsel__nr_cpus(counter);
else
ncpus = 1;
if (!counter->supported)
return -ENOENT;
if (counter->system_wide)
nthreads = 1;
for (thread = 0; thread < nthreads; thread++) {
for (cpu = 0; cpu < ncpus; cpu++) {
struct perf_counts_values *count;
count = perf_counts(counter->counts, cpu, thread);
/*
* The leader's group read loads data into its group members
* (via perf_evsel__read_counter) and sets their count->loaded.
*/
if (!count->loaded &&
perf_evsel__read_counter(counter, cpu, thread)) {
counter->counts->scaled = -1;
perf_counts(counter->counts, cpu, thread)->ena = 0;
perf_counts(counter->counts, cpu, thread)->run = 0;
return -1;
}
count->loaded = false;
if (STAT_RECORD) {
if (perf_evsel__write_stat_event(counter, cpu, thread, count)) {
pr_err("failed to write stat event\n");
return -1;
}
}
if (verbose > 1) {
fprintf(stat_config.output,
"%s: %d: %" PRIu64 " %" PRIu64 " %" PRIu64 "\n",
perf_evsel__name(counter),
cpu,
count->val, count->ena, count->run);
}
}
}
return 0;
}
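/*
 * Read the current value of every counter in the evlist and feed each
 * one to perf_stat_process_counter(). Failures are only logged, so the
 * remaining counters still get read and processed.
 */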
static void read_counters(void)
{
struct perf_evsel *counter;
int ret;
evlist__for_each_entry(evsel_list, counter) {
ret = read_counter(counter);
if (ret)
pr_debug("failed to read counter %s\n", counter->name);
if (ret == 0 && perf_stat_process_counter(&stat_config, counter))
pr_warning("failed to process counter %s\n", counter->name);
}
}
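/*
 * Interval mode (-I): read all counters, emit a stat round event when
 * recording, and print one batch of results relative to ref_time.
 */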
static void process_interval(void)
{
struct timespec ts, rs;
read_counters();
clock_gettime(CLOCK_MONOTONIC, &ts);
diff_timespec(&rs, &ts, &ref_time);
if (STAT_RECORD) {
if (WRITE_STAT_ROUND_EVENT(rs.tv_sec * NSEC_PER_SEC + rs.tv_nsec, INTERVAL))
pr_err("failed to write stat round event\n");
}
init_stats(&walltime_nsecs_stats);
update_stats(&walltime_nsecs_stats, stat_config.interval * NSEC_PER_MSEC);
print_counters(&rs, 0, NULL);
}
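/*
 * initial_delay (--initial-delay/-D, in milliseconds) lets the workload
 * get past its noisy startup phase before counting begins, e.g.:
 *
 *   $ perf stat -D 20 -e page-faults bash -c 'for i in $(seq 100000); do true; done'
 */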
static void enable_counters(void)
{
if (initial_delay)
usleep(initial_delay * USEC_PER_MSEC);
/*
* We need to enable counters only if:
* - we don't have a tracee (i.e. we're attaching to a task or cpu), or
* - we have an initial delay configured.
*/
if (!target__none(&target) || initial_delay)
perf_evlist__enable(evsel_list);
}
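/*
 * Reading counters one by one while they are still running skews the
 * ratios within a group; the effect is visible when counting a group
 * of identical events, e.g.:
 *
 *   # perf stat -a -C0 -e '{cycles,cycles}' sleep 1
 */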
static void disable_counters(void)
{
/*
* If we don't have tracee (attaching to task or cpu), counters may
* still be running. To get accurate group ratios, we must stop groups
* from counting before reading their constituent counters.
*/
if (!target__none(&target))
perf_evlist__disable(evsel_list);
}
static volatile int workload_exec_errno;
/*
* perf_evlist__prepare_workload will send a SIGUSR1
* if the fork fails, since we asked for it by setting
* its want_signal to true.
*/
static void workload_exec_failed_signal(int signo __maybe_unused, siginfo_t *info,
void *ucontext __maybe_unused)
{
workload_exec_errno = info->si_value.sival_int;
}
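/*
 * For 'perf stat record', synthesize the metadata events (attrs, thread
 * map, cpu map and stat config) that 'perf stat report' needs later to
 * make sense of the recorded PERF_RECORD_STAT data.
 */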
static int perf_stat_synthesize_config(bool is_pipe)
{
int err;
if (is_pipe) {
err = perf_event__synthesize_attrs(NULL, perf_stat.session,
process_synthesized_event);
if (err < 0) {
pr_err("Couldn't synthesize attrs.\n");
return err;
}
}
err = perf_event__synthesize_extra_attr(NULL,
evsel_list,
process_synthesized_event,
is_pipe);
if (err < 0) {
pr_err("Couldn't synthesize extra attrs.\n");
return err;
}
err = perf_event__synthesize_thread_map2(NULL, evsel_list->threads,
process_synthesized_event,
NULL);
if (err < 0) {
pr_err("Couldn't synthesize thread map.\n");
return err;
}
err = perf_event__synthesize_cpu_map(NULL, evsel_list->cpus,
process_synthesized_event, NULL);
if (err < 0) {
pr_err("Couldn't synthesize thread map.\n");
return err;
}
err = perf_event__synthesize_stat_config(NULL, &stat_config,
process_synthesized_event, NULL);
if (err < 0) {
pr_err("Couldn't synthesize config.\n");
return err;
}
return 0;
}
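/*
 * Record the kernel-assigned event IDs for every (cpu, thread) fd of a
 * counter, so that stat records can later be matched back to their
 * evsel.
 */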
#define FD(e, x, y) (*(int *)xyarray__entry(e->fd, x, y))
static int __store_counter_ids(struct perf_evsel *counter)
{
int cpu, thread;
for (cpu = 0; cpu < xyarray__max_x(counter->fd); cpu++) {
for (thread = 0; thread < xyarray__max_y(counter->fd);
thread++) {
int fd = FD(counter, cpu, thread);
if (perf_evlist__id_add_fd(evsel_list, counter,
cpu, thread, fd) < 0)
return -1;
}
}
return 0;
}
static int store_counter_ids(struct perf_evsel *counter)
{
struct cpu_map *cpus = counter->cpus;
struct thread_map *threads = counter->threads;
if (perf_evsel__alloc_id(counter, cpus->nr, threads->nr))
return -ENOMEM;
return __store_counter_ids(counter);
}
static bool perf_evsel__should_store_id(struct perf_evsel *counter)
{
return STAT_RECORD || (counter->attr.read_format & PERF_FORMAT_ID);
}
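/*
 * A weak group (the ':W' event modifier) that cannot be scheduled as a
 * whole falls back to independent events instead of silently counting
 * nothing, e.g.:
 *
 *   $ perf stat -e '{branches,branch-misses,l1d.replacement}:W' -a sleep 1
 */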
static struct perf_evsel *perf_evsel__reset_weak_group(struct perf_evsel *evsel)
{
struct perf_evsel *c2, *leader;
bool is_open = true;
leader = evsel->leader;
pr_debug("Weak group for %s/%d failed\n",
leader->name, leader->nr_members);
/*
* for_each_group_member doesn't work here because it doesn't
* include the first entry.
*/
evlist__for_each_entry(evsel_list, c2) {
if (c2 == evsel)
is_open = false;
if (c2->leader == leader) {
if (is_open)
perf_evsel__close(c2);
c2->leader = c2;
c2->nr_members = 0;
}
}
return leader;
}
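/*
 * Open all counters, start (or attach to) the target and drive one
 * complete measurement run, honoring the interval and timeout settings.
 */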
static int __run_perf_stat(int argc, const char **argv)
{
int interval = stat_config.interval;
int times = stat_config.times;
int timeout = stat_config.timeout;
char msg[BUFSIZ];
unsigned long long t0, t1;
struct perf_evsel *counter;
2013-01-29 11:47:44 +00:00
struct timespec ts;
size_t l;
int status = 0;
const bool forks = (argc > 0);
bool is_pipe = STAT_RECORD ? perf_stat.data.is_pipe : false;
struct perf_evsel_config_term *err_term;
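/* interval and timeout are both given in milliseconds. */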
if (interval) {
ts.tv_sec = interval / MSEC_PER_SEC;
ts.tv_nsec = (interval % MSEC_PER_SEC) * NSEC_PER_MSEC;
} else if (timeout) {
ts.tv_sec = timeout / MSEC_PER_SEC;
ts.tv_nsec = (timeout % MSEC_PER_SEC) * NSEC_PER_MSEC;
} else {
ts.tv_sec = 1;
ts.tv_nsec = 0;
}
if (forks) {
if (perf_evlist__prepare_workload(evsel_list, &target, argv, is_pipe,
workload_exec_failed_signal) < 0) {
perror("failed to prepare workload");
return -1;
}
child_pid = evsel_list->workload.pid;
}
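/* With --group/-g, all specified events form one group led by the first event. */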
if (group)
perf_evlist__set_leader(evsel_list);
evlist__for_each_entry(evsel_list, counter) {
try_again:
if (create_perf_stat_counter(counter) < 0) {
/* Weak group failed. Reset the group. */
if ((errno == EINVAL || errno == EBADF) &&
counter->leader != counter &&
counter->weak_group) {
counter = perf_evsel__reset_weak_group(counter);
goto try_again;
}
/*
* PPC returns ENXIO for HW counters until 2.6.37
* (behavior changed with commit b0a873e).
*/
if (errno == EINVAL || errno == ENOSYS ||
errno == ENOENT || errno == EOPNOTSUPP ||
errno == ENXIO) {
if (verbose > 0)
ui__warning("%s event is not supported by the kernel.\n",
perf_evsel__name(counter));
counter->supported = false;
if ((counter->leader != counter) ||
!(counter->leader->nr_members > 1))
continue;
} else if (perf_evsel__fallback(counter, errno, msg, sizeof(msg))) {
if (verbose > 0)
ui__warning("%s\n", msg);
goto try_again;
}
perf_evsel__open_strerror(counter, &target,
errno, msg, sizeof(msg));
ui__error("%s\n", msg);
if (child_pid != -1)
kill(child_pid, SIGTERM);
return -1;
}
counter->supported = true;
tools/perf/stat: Add event unit and scale support

This patch adds perf stat support for handling event units and scales
as exported by the kernel.

The kernel can export a PMU event's actual unit and scaling factor via
sysfs:

  $ ls -1 /sys/devices/power/events/energy-*
  /sys/devices/power/events/energy-cores
  /sys/devices/power/events/energy-cores.scale
  /sys/devices/power/events/energy-cores.unit
  /sys/devices/power/events/energy-pkg
  /sys/devices/power/events/energy-pkg.scale
  /sys/devices/power/events/energy-pkg.unit
  $ cat /sys/devices/power/events/energy-cores.scale
  2.3283064365386962890625e-10
  $ cat /sys/devices/power/events/energy-cores.unit
  Joules

This patch modifies the pmu event alias code to check for the presence
of the .unit and .scale files to load the corresponding values. They
are then used by perf stat transparently:

  # perf stat -a -e power/energy-pkg/,power/energy-cores/,cycles -I 1000 sleep 1000
  #           time             counts unit events
     1.000214717               3.07 Joules power/energy-pkg/     [100.00%]
     1.000214717               0.53 Joules power/energy-cores/
     1.000214717           12965028        cycles                [100.00%]
     2.000749289               3.01 Joules power/energy-pkg/
     2.000749289               0.52 Joules power/energy-cores/
     2.000749289           15817043        cycles

When the event does not have an explicit unit exported by the kernel,
nothing is printed. In csv output mode, there will be an empty field.

Special thanks to Jiri for providing the supporting code in the parser
to trigger reading of the scale and unit files.

Signed-off-by: Stephane Eranian <eranian@google.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: zheng.z.yan@intel.com
Cc: bp@alien8.de
Cc: maria.n.dimakopoulou@gmail.com
Cc: acme@redhat.com
Link: http://lkml.kernel.org/r/1384275531-10892-3-git-send-email-eranian@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-11-12 16:58:49 +00:00
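/*
 * Track the widest unit string seen (e.g. "Joules" from a RAPL event's
 * sysfs .unit file) so the unit column stays vertically aligned.
 */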
l = strlen(counter->unit);
if (l > unit_width)
unit_width = l;
if (perf_evsel__should_store_id(counter) &&
store_counter_ids(counter))
return -1;
}
perf stat: Add feature to run and measure a command multiple times

Add the --repeat <n> feature to perf stat, which repeats a given
command up to 100 times, collects the stats and calculates an average
and a stddev.

For example, the following oneliner 'perf stat' command runs hackbench
5 times and prints a tabulated result of all metrics, with averages and
noise levels (in percentage) printed:

 aldebaran:~/linux/linux/tools/perf> ./perf stat --repeat 5 ~/hackbench 10
 Time: 0.117
 Time: 0.108
 Time: 0.089
 Time: 0.088
 Time: 0.100

 Performance counter stats for '/home/mingo/hackbench 10' (5 runs):

    1243.989586  task-clock-msecs     #     10.460 CPUs    ( +-   4.720% )
          47706  context-switches     #      0.038 M/sec   ( +-  19.706% )
            387  CPU-migrations       #      0.000 M/sec   ( +-   3.608% )
          17793  page-faults          #      0.014 M/sec   ( +-   0.354% )
     3770941606  cycles               #   3031.329 M/sec   ( +-   4.621% )
     1566372416  instructions         #      0.415 IPC     ( +-   2.703% )
       16783421  cache-references     #     13.492 M/sec   ( +-   5.202% )
        7128590  cache-misses         #      5.730 M/sec   ( +-   7.420% )

    0.118924455  seconds time elapsed.

The goal of this feature is to allow relying on these accurate
statistics, and to know how many times a command has to be repeated
for the noise to go down to an acceptable level.

(The -v option can be used to see a line printed out as each run
progresses.)

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-06-13 12:57:28 +00:00
if (perf_evlist__apply_filters(evsel_list, &counter)) {
pr_err("failed to set filter \"%s\" on event %s with %d (%s)\n",
counter->filter, perf_evsel__name(counter), errno,
str_error_r(errno, msg, sizeof(msg)));
return -1;
}
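/*
 * Likewise apply any per-event driver config terms; on failure,
 * err_term points at the term that was rejected.
 */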
if (perf_evlist__apply_drv_configs(evsel_list, &counter, &err_term)) {
pr_err("failed to set config \"%s\" on event %s with %d (%s)\n",
err_term->val.drv_cfg, perf_evsel__name(counter), errno,
str_error_r(errno, msg, sizeof(msg)));
return -1;
}
perf stat record: Add record command

Add 'perf stat record' command support. It creates a simple (header
only) perf.data file ATM.

The record command can be specified anywhere among the stat options.
All stat command options are valid for the stat record command, with
the '-o' option being the exception: if specified for the record
command, it denotes the perf.data file name.

Committer note:

Set sample_type to PERF_SAMPLE_IDENTIFIER, which should be harmless
while avoiding that older tools show confusing messages, for instance,
with sample_type = 0, we get:

  $ perf stat record usleep 1

   Performance counter stats for 'usleep 1':

         0.630237      task-clock (msec)         #    0.528 CPUs utilized
                1      context-switches          #    0.002 M/sec
                0      cpu-migrations            #    0.000 K/sec
               52      page-faults               #    0.083 M/sec
          978,312      cycles                    #    1.552 GHz
          671,931      stalled-cycles-frontend   #   68.68% frontend cycles idle
  <not supported>      stalled-cycles-backend
          646,379      instructions              #    0.66  insns per cycle
                                                 #    1.04  stalled cycles per insn
          131,046      branches                  #  207.931 M/sec
            7,073      branch-misses             #    5.40% of all branches

      0.001193240 seconds time elapsed

  $ oldperf evlist
  WARNING: The perf.data file's data size field is 0 which is unexpected.
  Was the 'perf record' command properly terminated?
  non matching sample_type
  $

While with sample_type set to PERF_SAMPLE_IDENTIFIER, after we re-run
'perf stat record usleep' we get:

  $ oldperf evlist
  WARNING: The perf.data file's data size field is 0 which is unexpected.
  Was the 'perf record' command properly terminated?
  task-clock
  context-switches
  cpu-migrations
  page-faults
  cycles
  stalled-cycles-frontend
  stalled-cycles-backend
  instructions
  branches
  branch-misses
  $

Which at least shows the names of the events in the perf.data file.

Additionally, such files, when passed to 'perf report' will produce:

  $ oldperf report --stdio
  WARNING: The perf.data file's data size field is 0 which is unexpected.
  Was the 'perf record' command properly terminated?
  Warning:
  Kernel address maps (/proc/{kallsyms,modules}) were restricted.
  Check /proc/sys/kernel/kptr_restrict before running 'perf record'.
  As no suitable kallsyms nor vmlinux was found, kernel samples can't be resolved.
  Samples in kernel modules can't be resolved as well.
  Error:
  The perf.data file has no samples!
  # To display the perf.data header info, please use --header/--header-only options.
  #
  $

Which is confusing and can be solved by just adding the kernel mmap
record, which will also remove that warning about the data size field
being equal to zero, after generating the mmap record:

  $ perf stat record usleep 1

   Performance counter stats for 'usleep 1':

         0.600796      task-clock (msec)         #    0.478 CPUs utilized
                1      context-switches          #    0.002 M/sec
                0      cpu-migrations            #    0.000 K/sec
               54      page-faults               #    0.090 M/sec
          886,844      cycles                    #    1.476 GHz
          582,169      stalled-cycles-frontend   #   65.65% frontend cycles idle
  <not supported>      stalled-cycles-backend
          638,344      instructions              #    0.72  insns per cycle
                                                 #    0.91  stalled cycles per insn
          130,204      branches                  #  216.719 M/sec
            7,500      branch-misses             #    5.76% of all branches

      0.001255897 seconds time elapsed

  $ oldperf evlist
  task-clock
  context-switches
  cpu-migrations
  page-faults
  cycles
  stalled-cycles-frontend
  stalled-cycles-backend
  instructions
  branches
  branch-misses
  $ oldperf report --stdio
  Error:
  The perf.data file has no samples!
  # To display the perf.data header info, please use --header/--header-only options.
  #
  [acme@zoo linux]$

No warnings, sensible output about what the events in the perf.data
file are, and also a "file has no samples" message, which indeed it
doesn't.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Kan Liang <kan.liang@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1446734469-11352-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-11-05 14:40:46 +00:00
if (STAT_RECORD) {
int err, fd = perf_data__fd(&perf_stat.data);
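/*
 * In 'perf stat record' mode a perf.data header is written up front:
 * the pipe variant when writing to stdout, the session header when
 * writing to a regular file.
 */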
if (is_pipe) {
err = perf_header__write_pipe(perf_data__fd(&perf_stat.data));
} else {
err = perf_session__write_header(perf_stat.session, evsel_list,
fd, false);
}
if (err < 0)
return err;
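/*
 * Synthesize the side-band records (event attributes, thread/cpu maps,
 * stat config) that consumers of this perf.data stream will need.
 */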
err = perf_stat_synthesize_config(is_pipe);
if (err < 0)
return err;
}
/*
* Enable counters and exec the command:
*/
t0 = rdclock();
2013-01-29 11:47:44 +00:00
clock_gettime(CLOCK_MONOTONIC, &ref_time);
if (forks) {
perf_evlist__start_workload(evsel_list);
enable_counters();
perf stat: Add support to print counts after a period of time

Introduce a new option to print counts after N milliseconds and update
'perf stat' documentation accordingly.

Shown below is the output of the new option for perf stat:

  $ perf stat --time 2000 -e cycles -a

   Performance counter stats for 'system wide':

       157,260,423      cycles

       2.003060766 seconds time elapsed

We can print the count deltas after N milliseconds with this newly
introduced option. This option is not supported with the "-I" option.

In addition, according to Kangliang's patch (19afd10410957), the
monitoring overhead for a system-wide core event could be very high if
the interval-print parameter was below 100ms, and the limitation value
is 10ms. So the same warning will be displayed when the time is set
between 10ms and 100ms, and the minimal time is limited to 10ms. Users
can make a decision according to their specific cases.

Committer notes:

This actually stops the workload after the specified time, then prints
the counts. So I renamed the option to --timeout and updated the
documentation to state that it will not just print the counts after
the specified time, but will really stop the 'perf stat' session and
print the counts.

The rename from 'time' to 'timeout' also fixes the build in systems
where 'time' is used by glibc and can't be used as a name of a
variable, such as centos:5 and centos:6.

Changes since v3:
- none.

Changes since v2:
- modify the time check in __run_perf_stat func to keep some
  consistency with the workload case.
- add the warning when the time is set between 10ms and 100ms.
- add the pr_err when the time is set below 10ms.

Changes since v1:
- none.

Signed-off-by: yuzhoujian <yuzhoujian@didichuxing.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1517217923-8302-3-git-send-email-ufo19890607@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-01-29 09:25:23 +00:00
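/*
 * With -I/--timeout we must not block in waitpid(): poll the forked
 * workload instead, stopping at the timeout or after --interval-count
 * interval printouts.
 */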
if (interval || timeout) {
while (!waitpid(child_pid, &status, WNOHANG)) {
nanosleep(&ts, NULL);
if (timeout)
break;
process_interval();
perf stat: Add support to print counts for fixed times

Introduce a new option to print counts for a fixed number of times and
update 'perf stat' documentation accordingly.

Shown below is the output of the new option for perf stat:

  $ perf stat -I 1000 --interval-count 2 -e cycles -a
  #           time             counts unit events
       1.002827089         93,884,870      cycles
       2.004231506         56,573,446      cycles

We can just print the counts a given number of times with this newly
introduced option. Its usage is a little like 'vmstat', and it should
be used together with the "-I" option:

  $ vmstat -n 1 2
  procs ---------memory-------------- --swap- ----io-- -system-- ------cpu---
   r  b swpd      free   buff    cache si so  bi  bo   in   cs us sy  id wa st
   0  0    0  78270544 547484 51732076  0  0   0  20    1    1  1  0  99  0  0
   0  0    0  78270512 547484 51732080  0  0   0  16  477 1555  0  0 100  0  0

Changes since v3:
- merge interval_count check and times check to one line.
- fix the wrong indent in stat.h
- use stat_config.times instead of 'times' in cmd_stat function.

Changes since v2:
- none.

Changes since v1:
- change the name of the new option "times-print" to "interval-count".
- keep the new option interval specifically.

Signed-off-by: yuzhoujian <yuzhoujian@didichuxing.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1517217923-8302-2-git-send-email-ufo19890607@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-01-29 09:25:22 +00:00
if (interval_count && !(--times))
break;
}
}
perf stat: Wait for the correct child

When packaging the perf userland application into an AppImage, the
wait() call in perf stat returned too early. It turned out that some
other child process exited, but not the one perf stat launched:

  $ sudo strace -e fork,execve,clone,wait4 -f ./perf-x86_64.AppImage stat sleep 1
  execve("./perf-git.3a73b7f9-x86_64.AppImage", ["./perf-git.3a73b7f9-x86_64.AppIm"..., "stat", "sleep", "1"], 0x7ffec1bbf050 /* 18 vars */) = 0
  clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f6a6e7efe50) = 3912
  strace: Process 3912 attached
  [pid 3912] clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f6a6e7efe50) = 3914
  strace: Process 3914 attached
  [pid 3912] +++ exited with 0 +++
  [pid 3911] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=3912, si_uid=0, si_status=0, si_utime=0, si_stime=0} ---
  [pid 3914] clone(strace: Process 3915 attached
  child_stack=0x7f6a6d9fefb0, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7f6a6d9ff9d0, tls=0x7f6a6d9ff700, child_tidptr=0x7f6a6d9ff9d0) = 3915
  [pid 3911] execve("/tmp/.mount_perf-g6VYMpl/AppRun", ["./perf-git.3a73b7f9-x86_64.AppIm"..., "stat", "sleep", "1"], 0x14aab70 /* 21 vars */) = 0
  [pid 3911] clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f4ae113c4d0) = 3916
  strace: Process 3916 attached
  [pid 3911] wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 3912
  [pid 3916] execve("/usr/libexec/perf-core/sleep", ["sleep", "1"], 0x27d3650 /* 22 vars */) = -1 ENOENT (No such file or directory)
  [pid 3916] execve("/tmp/./sleep", ["sleep", "1"], 0x27d3650 /* 22 vars */) = -1 ENOENT (No such file or directory)
  [pid 3916] execve("/home/milian/.bin/sleep", ["sleep", "1"], 0x27d3650 /* 22 vars */) = -1 ENOENT (No such file or directory)
  [pid 3916] execve("/usr/lib/icecream/libexec/icecc/bin/sleep", ["sleep", "1"], 0x27d3650 /* 22 vars */) = -1 ENOENT (No such file or directory)
  [pid 3916] execve("/ssd2/milian/projects/compiled/other/bin/sleep", ["sleep", "1"], 0x27d3650 /* 22 vars */) = -1 ENOENT (No such file or directory)
  [pid 3916] execve("/home/milian/.bin/kf5/sleep", ["sleep", "1"], 0x27d3650 /* 22 vars */) = -1 ENOENT (No such file or directory)
  [pid 3916] execve("/ssd2/milian/projects/compiled/kf5/bin/sleep", ["sleep", "1"], 0x27d3650 /* 22 vars */) = -1 ENOENT (No such file or directory)
  [pid 3916] execve("/home/milian/projects/compiled/other/bin/sleep", ["sleep", "1"], 0x27d3650 /* 22 vars */) = -1 ENOENT (No such file or directory)
  [pid 3916] execve("/home/milian/projects/compiled/kf5/bin/sleep", ["sleep", "1"], 0x27d3650 /* 22 vars */) = -1 ENOENT (No such file or directory)
  [pid 3916] execve("/usr/local/sbin/sleep", ["sleep", "1"], 0x27d3650 /* 22 vars */) = -1 ENOENT (No such file or directory)
  [pid 3916] execve("/usr/local/bin/sleep", ["sleep", "1"], 0x27d3650 /* 22 vars */) = -1 ENOENT (No such file or directory)
  [pid 3916] execve("/usr/bin/sleep", ["sleep", "1"], 0x27d3650 /* 22 vars */

   Performance counter stats for 'sleep 1':

     <not counted>      task-clock
     <not counted>      context-switches
     <not counted>      cpu-migrations
     <not counted>      page-faults
     <not counted>      cycles
     <not counted>      instructions
     <not counted>      branches
     <not counted>      branch-misses

       0.000047194 seconds time elapsed

  [pid 3916] --- SIGTERM {si_signo=SIGTERM, si_code=SI_USER, si_pid=3911, si_uid=0} ---
  [pid 3916] +++ killed by SIGTERM +++
  [pid 3911] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=3916, si_uid=0, si_status=SIGTERM, si_utime=0, si_stime=0} ---
  [pid 3915] --- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=3914, si_uid=0} ---
  [pid 3911] +++ exited with 0 +++
  [pid 3915] --- SIGHUP {si_signo=SIGHUP, si_code=SI_USER, si_pid=3914, si_uid=0} ---
  [pid 3915] +++ exited with 0 +++
  +++ exited with 0 +++

This patch uses waitpid instead to ensure the call waits for the
debuggee application launched by 'perf stat'. This fixes 'perf stat'
when launched from an AppImage:

  $ ./perf-x86_64.AppImage stat sleep 1

   Performance counter stats for 'sleep 1':

          0.357235      task-clock (msec)         #    0.000 CPUs utilized
                 1      context-switches          #    0.003 M/sec
                 0      cpu-migrations            #    0.000 K/sec
                50      page-faults               #    0.140 M/sec
           1269602      cycles                    #    3.554 GHz
            654278      instructions              #    0.52  insn per cycle
            129963      branches                  #  363.803 M/sec
              7082      branch-misses             #    5.45% of all branches

       1.000633420 seconds time elapsed

Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20170912152523.4497-1-milian.wolff@kdab.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-09-12 15:25:23 +00:00
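/*
 * Use waitpid() rather than wait(): unrelated children (e.g. helper
 * processes of an AppImage runtime) may exit first and must not be
 * mistaken for the measured workload.
 */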
waitpid(child_pid, &status, 0);
if (workload_exec_errno) {
const char *emsg = str_error_r(workload_exec_errno, msg, sizeof(msg));
pr_err("Workload failed: %s\n", emsg);
return -1;
}
if (WIFSIGNALED(status))
psignal(WTERMSIG(status), argv[0]);
} else {
enable_counters();
while (!done) {
nanosleep(&ts, NULL);
if (timeout)
break;
if (interval) {
process_interval();
if (interval_count && !(--times))
break;
}
}
}
perf stat: Avoid skew when reading events

When we don't have a tracee (i.e. we're attaching to a task or CPU),
counters can still be running after our workload finishes, and can
still be running as we read their values. As we read events one-by-one,
there can be arbitrary skew between values of events, even within a
group. This means that ratios within an event group are not reliable.

This skew can be seen if measuring a group of identical events, e.g:

  # perf stat -a -C0 -e '{cycles,cycles}' sleep 1

To avoid this, we must stop groups from counting before we read the
values of any constituent events. This patch adds and makes use of a
new disable_counters() helper, which disables group leaders (and thus
each group as a whole). This mirrors the use of enable_counters() for
starting event groups in the absence of a tracee.

Closing a group leader splits the group, and without a disabled group
leader the newly split events will begin counting. Thus to ensure
counts are reliable we must defer closing group leaders until all
counts have been read. To do so this patch removes the event closing
logic from the read_counters() helper, explicitly closes the events
using perf_evlist__close(), which also aids legibility.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1470747869-3567-1-git-send-email-mark.rutland@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-08-09 13:04:29 +00:00
disable_counters();
t1 = rdclock();
update_stats(&walltime_nsecs_stats, t1 - t0);
/*
* Closing a group leader splits the group, and as we only disable
* group leaders, results in remaining events becoming enabled. To
* avoid arbitrary skew, we must read all counters before closing any
* group leaders.
*/
read_counters();
perf_evlist__close(evsel_list);
return WEXITSTATUS(status);
}
static int run_perf_stat(int argc, const char **argv)
{
int ret;
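/*
 * pre_cmd/post_cmd carry the --pre/--post shell commands, typically
 * used with --repeat to reset state around each measured run (e.g.
 * cleaning a build tree; an illustrative use, not the only one).
 */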
if (pre_cmd) {
ret = system(pre_cmd);
if (ret)
return ret;
}
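/*
 * --sync: flush dirty buffers first so the measured run is not charged
 * for I/O queued up by earlier work.
 */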
if (sync_run)
sync();
ret = __run_perf_stat(argc, argv);
if (ret)
return ret;
if (post_cmd) {
ret = system(post_cmd);
if (ret)
return ret;
}
return ret;
}
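/*
 * run/ena are the kernel's "time running" vs "time enabled" totals for
 * the event. When events are multiplexed run < ena, and the percentage
 * printed here is the fraction of time the event actually counted.
 */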
static void print_running(u64 run, u64 ena)
{
if (csv_output) {
fprintf(stat_config.output, "%s%" PRIu64 "%s%.2f",
csv_sep,
run,
csv_sep,
ena ? 100.0 * run / ena : 100.0);
} else if (run != ena) {
fprintf(stat_config.output, " (%.2f%%)", 100.0 * run / ena);
}
}
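/*
 * Across --repeat runs, print a counter's noise as its relative
 * standard deviation, formatted as " ( +- N.NN% )" next to the average.
 */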
static void print_noise_pct(double total, double avg)
{
double pct = rel_stddev_stats(total, avg);
if (csv_output)
fprintf(stat_config.output, "%s%.2f%%", csv_sep, pct);
else if (pct)
fprintf(stat_config.output, " ( +-%6.2f%% )", pct);
}
static void print_noise(struct perf_evsel *evsel, double avg)
{
struct perf_stat_evsel *ps;
if (run_count == 1)
return;
ps = evsel->stats;
print_noise_pct(stddev_stats(&ps->res_stats[0]), avg);
}
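/*
 * Print the line prefix for the current aggregation mode: socket+core
 * ("S0-C1..."), socket ("S0..."), a single CPU ("CPU0..."), or, with
 * --per-thread, the "comm-pid" of the monitored task.
 */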
static void aggr_printout(struct perf_evsel *evsel, int id, int nr)
{
switch (stat_config.aggr_mode) {
case AGGR_CORE:
fprintf(stat_config.output, "S%d-C%*d%s%*d%s",
cpu_map__id_to_socket(id),
csv_output ? 0 : -8,
cpu_map__id_to_cpu(id),
csv_sep,
csv_output ? 0 : 4,
nr,
csv_sep);
break;
case AGGR_SOCKET:
fprintf(stat_config.output, "S%*d%s%*d%s",
csv_output ? 0 : -5,
id,
csv_sep,
csv_output ? 0 : 4,
nr,
csv_sep);
break;
case AGGR_NONE:
fprintf(stat_config.output, "CPU%*d%s",
perf stat: Add csv-style output

This patch adds an option (-x/--field-separator) to print counts using
a CSV-style output. The user can pass a custom separator. This makes it
very easy to import counts directly into your favorite spreadsheet
without having to write scripts.

Example:

  $ perf stat --field-separator=, -a -- sleep 1
  4009.961740,task-clock-msecs
  13,context-switches
  2,CPU-migrations
  189,page-faults
  9596385684,cycles
  3493659441,instructions
  872897069,branches
  41562,branch-misses
  22424,cache-references
  1289,cache-misses

Works also in non-aggregated mode:

  $ perf stat -x , -a -A -- sleep 1
  CPU0,1002.526168,task-clock-msecs
  CPU1,1002.528365,task-clock-msecs
  CPU2,1002.523360,task-clock-msecs
  CPU3,1002.519878,task-clock-msecs
  CPU0,1,context-switches
  CPU1,5,context-switches
  CPU2,5,context-switches
  CPU3,6,context-switches
  CPU0,0,CPU-migrations
  CPU1,1,CPU-migrations
  CPU2,0,CPU-migrations
  CPU3,1,CPU-migrations
  CPU0,2,page-faults
  CPU1,6,page-faults
  CPU2,9,page-faults
  CPU3,174,page-faults
  CPU0,2399439771,cycles
  CPU1,2380369063,cycles
  CPU2,2399142710,cycles
  CPU3,2373161192,cycles
  CPU0,872900618,instructions
  CPU1,873030960,instructions
  CPU2,872714525,instructions
  CPU3,874460580,instructions
  CPU0,221556839,branches
  CPU1,218134342,branches
  CPU2,218161730,branches
  CPU3,218284093,branches
  CPU0,18556,branch-misses
  CPU1,1449,branch-misses
  CPU2,3447,branch-misses
  CPU3,12714,branch-misses
  CPU0,8330,cache-references
  CPU1,313844,cache-references
  CPU2,47993728,cache-references
  CPU3,826481,cache-references
  CPU0,272,cache-misses
  CPU1,5360,cache-misses
  CPU2,1342193,cache-misses
  CPU3,13992,cache-misses

This second version adds the ability to name a separator and uses
field-separator as the long option to be consistent with perf report.

Committer note:

Since we enabled --big-num by default in 201e0b0 and -x can't be used
with it, we need to notice if the user explicitly enabled or disabled
-B, and add code to disable big_num if the user didn't explicitly set
--big_num when -x is used.

Cc: David S. Miller <davem@davemloft.net>
Cc: Frederik Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: paulus@samba.org
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <robert.richter@amd.com>
LKML-Reference: <4cf68aa7.0fedd80a.5294.1203@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2010-12-01 16:49:05 +00:00
csv_output ? 0 : -4,
perf_evsel__cpus(evsel)->map[id], csv_sep);
break;
case AGGR_THREAD:
fprintf(stat_config.output, "%*s-%*d%s",
csv_output ? 0 : 16,
thread_map__comm(evsel->threads, id),
csv_output ? 0 : -8,
thread_map__pid(evsel->threads, id),
csv_sep);
break;
case AGGR_GLOBAL:
case AGGR_UNSET:
default:
break;
}
}
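
/*
 * Output state shared by the print_metric/new_line callbacks while
 * a single counter line is being formatted.
 */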
struct outstate {
FILE *fh;
bool newline;
const char *prefix;
int nfields;
int id, nr;
struct perf_evsel *evsel;
};
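
/* Width of the "# value unit" metric column in standard output */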
#define METRIC_LEN 35
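
/*
 * Standard output: just note that a line break is pending; it is
 * emitted lazily by print_metric_std() when the next metric arrives.
 */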
static void new_line_std(void *ctx)
{
struct outstate *os = ctx;
os->newline = true;
}
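
/* Break the line, then re-print prefix and aggregation columns for alignment */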
static void do_new_line_std(struct outstate *os)
{
fputc('\n', os->fh);
fputs(os->prefix, os->fh);
aggr_printout(os->evsel, os->id, os->nr);
if (stat_config.aggr_mode == AGGR_NONE)
fprintf(os->fh, " ");
fprintf(os->fh, " ");
}
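
/* Print one "# value unit" metric, padded to METRIC_LEN columns */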
static void print_metric_std(void *ctx, const char *color, const char *fmt,
const char *unit, double val)
{
struct outstate *os = ctx;
FILE *out = os->fh;
int n;
bool newline = os->newline;
os->newline = false;
if (unit == NULL || fmt == NULL) {
fprintf(out, "%-*s", METRIC_LEN, "");
return;
}
if (newline)
do_new_line_std(os);
n = fprintf(out, " # ");
if (color)
n += color_fprintf(out, color, fmt, val);
else
n += fprintf(out, fmt, val);
fprintf(out, " %-*s", METRIC_LEN - n - 1, unit);
}
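
/*
 * CSV output: start a new line and emit enough empty fields that the
 * metric columns stay aligned with the counter line above.
 */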
static void new_line_csv(void *ctx)
{
struct outstate *os = ctx;
int i;
fputc('\n', os->fh);
if (os->prefix)
fprintf(os->fh, "%s%s", os->prefix, csv_sep);
aggr_printout(os->evsel, os->id, os->nr);
for (i = 0; i < os->nfields; i++)
fputs(csv_sep, os->fh);
}
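
/* Emit a metric as two CSV fields: the numeric value and its unit */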
static void print_metric_csv(void *ctx,
const char *color __maybe_unused,
const char *fmt, const char *unit, double val)
{
struct outstate *os = ctx;
FILE *out = os->fh;
char buf[64], *vals, *ends;
if (unit == NULL || fmt == NULL) {
fprintf(out, "%s%s%s%s", csv_sep, csv_sep, csv_sep, csv_sep);
return;
}
snprintf(buf, sizeof(buf), fmt, val);
ends = vals = ltrim(buf);
while (isdigit(*ends) || *ends == '.')
ends++;
*ends = 0;
while (isspace(*unit))
unit++;
fprintf(out, "%s%s%s%s", csv_sep, vals, csv_sep, unit);
}
#define METRIC_ONLY_LEN 20
/* Filter out some columns that don't work well in metrics only mode */
static bool valid_only_metric(const char *unit)
{
if (!unit)
return false;
if (strstr(unit, "/sec") ||
strstr(unit, "hz") ||
strstr(unit, "Hz") ||
strstr(unit, "CPUs utilized"))
return false;
return true;
}
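
/*
 * Units such as "of all branches" are ambiguous on their own; prefix
 * them with the event name. buf must hold at least 1024 bytes.
 */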
static const char *fixunit(char *buf, struct perf_evsel *evsel,
const char *unit)
{
if (!strncmp(unit, "of all", 6)) {
snprintf(buf, 1024, "%s %s", perf_evsel__name(evsel),
unit);
return buf;
}
return unit;
}
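
/* Metric-only mode: print just the value, padded to the column width */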
static void print_metric_only(void *ctx, const char *color, const char *fmt,
const char *unit, double val)
{
struct outstate *os = ctx;
FILE *out = os->fh;
int n;
char buf[1024];
unsigned mlen = METRIC_ONLY_LEN;
if (!valid_only_metric(unit))
return;
unit = fixunit(buf, os->evsel, unit);
if (color)
n = color_fprintf(out, color, fmt, val);
else
n = fprintf(out, fmt, val);
if (n > METRIC_ONLY_LEN)
n = METRIC_ONLY_LEN;
if (mlen < strlen(unit))
mlen = strlen(unit) + 1;
fprintf(out, "%*s", mlen - n, "");
}
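
/* Metric-only CSV mode: print the bare value as a single field */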
static void print_metric_only_csv(void *ctx, const char *color __maybe_unused,
const char *fmt,
const char *unit, double val)
{
struct outstate *os = ctx;
FILE *out = os->fh;
char buf[64], *vals, *ends;
char tbuf[1024];
if (!valid_only_metric(unit))
return;
unit = fixunit(tbuf, os->evsel, unit);
snprintf(buf, sizeof buf, fmt, val);
ends = vals = ltrim(buf);
while (isdigit(*ends) || *ends == '.')
ends++;
*ends = 0;
fprintf(out, "%s%s", vals, csv_sep);
}
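
/* In metric-only mode all metrics share one line, so nothing to do */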
static void new_line_metric(void *ctx __maybe_unused)
{
}
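
/* First pass of --metric-only: print column headers instead of values */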
static void print_metric_header(void *ctx, const char *color __maybe_unused,
const char *fmt __maybe_unused,
const char *unit, double val __maybe_unused)
{
struct outstate *os = ctx;
char tbuf[1024];
if (!valid_only_metric(unit))
return;
unit = fixunit(tbuf, os->evsel, unit);
if (csv_output)
fprintf(os->fh, "%s%s", unit, csv_sep);
else
fprintf(os->fh, "%-*s ", METRIC_ONLY_LEN, unit);
}
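
/* Print a time-based counter such as task-clock, converted to msecs */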
static void nsec_printout(int id, int nr, struct perf_evsel *evsel, double avg)
{
FILE *output = stat_config.output;
double msecs = avg / NSEC_PER_MSEC;
const char *fmt_v, *fmt_n;
char name[25];
fmt_v = csv_output ? "%.6f%s" : "%18.6f%s";
fmt_n = csv_output ? "%s" : "%-25s";
aggr_printout(evsel, id, nr);
scnprintf(name, sizeof(name), "%s%s",
perf_evsel__name(evsel), csv_output ? "" : " (msec)");
fprintf(output, fmt_v, msecs, csv_sep);
if (csv_output)
fprintf(output, "%s%s", evsel->unit, csv_sep);
else
fprintf(output, "%-*s%s", unit_width, evsel->unit, csv_sep);
fprintf(output, fmt_n, name);
if (evsel->cgrp)
fprintf(output, "%s%s", csv_sep, evsel->cgrp->name);
}
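
/*
 * Shadow stats (IPC, GHz, ...) are saved per CPU. In aggregated modes,
 * use the first CPU of the aggregation unit for both update and
 * lookup so the two sides stay consistent.
 */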
static int first_shadow_cpu(struct perf_evsel *evsel, int id)
{
int i;
if (!aggr_get_id)
return 0;
if (stat_config.aggr_mode == AGGR_NONE)
return id;
if (stat_config.aggr_mode == AGGR_GLOBAL)
return 0;
for (i = 0; i < perf_evsel__nr_cpus(evsel); i++) {
int cpu2 = perf_evsel__cpus(evsel)->map[i];
if (aggr_get_id(evsel_list->cpus, cpu2) == id)
return cpu2;
}
return 0;
}
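
/* Print a plain event count; use thousands grouping unless CSV or --no-big-num */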
static void abs_printout(int id, int nr, struct perf_evsel *evsel, double avg)
{
FILE *output = stat_config.output;
double sc = evsel->scale;
const char *fmt;
if (csv_output) {
fmt = floor(sc) != sc ? "%.2f%s" : "%.0f%s";
} else {
if (big_num)
fmt = floor(sc) != sc ? "%'18.2f%s" : "%'18.0f%s";
else
fmt = floor(sc) != sc ? "%18.2f%s" : "%18.0f%s";
}
aggr_printout(evsel, id, nr);
fprintf(output, fmt, avg, csv_sep);
if (evsel->unit)
fprintf(output, "%-*s%s",
csv_output ? 0 : unit_width,
evsel->unit, csv_sep);
fprintf(output, "%-*s", csv_output ? 0 : 25, perf_evsel__name(evsel));
if (evsel->cgrp)
fprintf(output, "%s%s", csv_sep, evsel->cgrp->name);
}
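
/*
 * Print one counter line: select the std/CSV/metric-only callbacks,
 * handle not counted/not supported counters, then print the value
 * followed by its shadow metrics.
 */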
static void printout(int id, int nr, struct perf_evsel *counter, double uval,
char *prefix, u64 run, u64 ena, double noise,
struct runtime_stat *st)
{
struct perf_stat_output_ctx out;
struct outstate os = {
.fh = stat_config.output,
.prefix = prefix ? prefix : "",
.id = id,
.nr = nr,
.evsel = counter,
};
print_metric_t pm = print_metric_std;
void (*nl)(void *);
if (metric_only) {
nl = new_line_metric;
if (csv_output)
pm = print_metric_only_csv;
else
pm = print_metric_only;
} else
nl = new_line_std;
if (csv_output && !metric_only) {
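		/*
		 * Extra leading CSV fields printed per aggregation
		 * mode; new_line_csv() skips them on metric lines.
		 */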
static int aggr_fields[] = {
[AGGR_GLOBAL] = 0,
[AGGR_THREAD] = 1,
[AGGR_NONE] = 1,
[AGGR_SOCKET] = 2,
[AGGR_CORE] = 2,
};
pm = print_metric_csv;
nl = new_line_csv;
os.nfields = 3;
os.nfields += aggr_fields[stat_config.aggr_mode];
if (counter->cgrp)
os.nfields++;
}
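
	/* Counter did not run or could not be scaled: print placeholders */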
if (run == 0 || ena == 0 || counter->counts->scaled == -1) {
if (metric_only) {
pm(&os, NULL, "", "", 0);
return;
}
aggr_printout(counter, id, nr);
fprintf(stat_config.output, "%*s%s",
csv_output ? 0 : 18,
counter->supported ? CNTR_NOT_COUNTED : CNTR_NOT_SUPPORTED,
csv_sep);
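
		/*
		 * The event is supported but never got scheduled on a
		 * counter; hint that freeing one up (e.g. disabling the
		 * NMI watchdog) may help.
		 */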
if (counter->supported)
print_free_counters_hint = 1;
fprintf(stat_config.output, "%-*s%s",
csv_output ? 0 : unit_width,
counter->unit, csv_sep);
fprintf(stat_config.output, "%*s",
csv_output ? 0 : -25,
perf_evsel__name(counter));
if (counter->cgrp)
fprintf(stat_config.output, "%s%s",
csv_sep, counter->cgrp->name);
if (!csv_output)
pm(&os, NULL, NULL, "", 0);
print_noise(counter, noise);
print_running(run, ena);
		if (csv_output)
			pm(&os, NULL, NULL, "", 0);
		return;
	}
	if (metric_only)
		/* nothing */;
	else if (nsec_counter(counter))
		nsec_printout(id, nr, counter, uval);
	else
		abs_printout(id, nr, counter, uval);
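
	/*
	 * Wire up the output callbacks that perf_stat__print_shadow_stats()
	 * uses to emit each metric and start new lines.
	 */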
	out.print_metric = pm;
	out.new_line = nl;
	out.ctx = &os;
	out.force_header = false;

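	/*
	 * In CSV mode the noise/running columns precede the metric fields;
	 * in plain text mode they are printed after the metric, below.
	 */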
	if (csv_output && !metric_only) {
		print_noise(counter, noise);
		print_running(run, ena);
	}

	perf_stat__print_shadow_stats(counter, uval,
				first_shadow_cpu(counter, id),
				&out, &metric_events, st);
	if (!csv_output && !metric_only) {
		print_noise(counter, noise);
		print_running(run, ena);
	}
}
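
/*
 * Recompute shadow stats per aggregation unit (socket/core): sum each
 * event's counts over all CPUs belonging to the aggregation id and save the
 * total at the aggregation's first CPU, where the metric code will look the
 * shadow value up again later.
 */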
static void aggr_update_shadow(void)
{
	int cpu, s2, id, s;
	u64 val;
	struct perf_evsel *counter;

	for (s = 0; s < aggr_map->nr; s++) {
		id = aggr_map->map[s];
		evlist__for_each_entry(evsel_list, counter) {
			val = 0;
			for (cpu = 0; cpu < perf_evsel__nr_cpus(counter); cpu++) {
				s2 = aggr_get_id(evsel_list->cpus, cpu);
				if (s2 != id)
					continue;
				val += perf_counts(counter->counts, cpu, 0)->val;
			}
			perf_stat__update_shadow_stats(counter, val,
						first_shadow_cpu(counter, id),
						&rt_stat);
		}
	}
}
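
/*
 * Call @cb for every event that follows @counter in the evsel list and is an
 * alias of it (same name, scale, cgroup, unit and nsec class).  This folds
 * the duplicated instances an uncore alias expands to into one line; each
 * folded event is flagged merged_stat so it is not printed separately.
 */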
static void collect_all_aliases(struct perf_evsel *counter,
			    void (*cb)(struct perf_evsel *counter, void *data,
				       bool first),
			    void *data)
{
	struct perf_evsel *alias;

	alias = list_prepare_entry(counter, &(evsel_list->entries), node);
	list_for_each_entry_continue(alias, &evsel_list->entries, node) {
		if (strcmp(perf_evsel__name(alias), perf_evsel__name(counter)) ||
		    alias->scale != counter->scale ||
		    alias->cgrp != counter->cgrp ||
		    strcmp(alias->unit, counter->unit) ||
		    nsec_counter(alias) != nsec_counter(counter))
			break;
		alias->merged_stat = true;
		cb(alias, data, false);
	}
}
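
/*
 * Collect a counter's data through @cb: skip counters already folded into an
 * earlier alias, run @cb once for the counter itself (first == true) and,
 * unless --no-merge was given, once per auto-merged alias.  Returns false
 * when the counter should not be printed at all.
 */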
static bool collect_data(struct perf_evsel *counter,
			    void (*cb)(struct perf_evsel *counter, void *data,
				       bool first),
			    void *data)
{
	if (counter->merged_stat)
		return false;
	cb(counter, data, true);
	if (!no_merge && counter->auto_merge_stats)
		collect_all_aliases(counter, cb, data);
	return true;
}
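
/*
 * Accumulator passed through collect_data()/aggr_cb(): the summed value and
 * enabled/running times for one aggregation unit, its id and the number of
 * CPUs that contributed to it.
 */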
struct aggr_data {
	u64 ena, run, val;
	int id;
	int nr;
	int cpu;
};
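
/*
 * collect_data() callback for aggregated mode: fold each CPU that maps to
 * ad->id into the accumulator.  @first is set only for the leading event of
 * a merge group, so merged aliases do not inflate the CPU count.
 */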
static void aggr_cb(struct perf_evsel *counter, void *data, bool first)
{
	struct aggr_data *ad = data;
	int cpu, s2;

	for (cpu = 0; cpu < perf_evsel__nr_cpus(counter); cpu++) {
		struct perf_counts_values *counts;

		s2 = aggr_get_id(perf_evsel__cpus(counter), cpu);
		if (s2 != ad->id)
			continue;
		if (first)
			ad->nr++;
		counts = perf_counts(counter->counts, cpu, 0);
		/*
		 * When any result is bad, make them all bad to give
		 * consistent output in interval mode.
		 */
		if (counts->ena == 0 || counts->run == 0 ||
		    counter->counts->scaled == -1) {
			ad->ena = 0;
			ad->run = 0;
			break;
		}
		ad->val += counts->val;
		ad->ena += counts->ena;
		ad->run += counts->run;
	}
}
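
/*
 * Print counts aggregated per socket or core (--per-socket / --per-core).
 * Each line carries the aggregation id and the number of CPUs it covers,
 * e.g. (illustrative):
 *
 *	S0-C0	2	4,599,061,773	cycles	# 1.551 GHz
 */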
static void print_aggr(char *prefix)
{
	FILE *output = stat_config.output;
	struct perf_evsel *counter;
	int s, id, nr;
	double uval;
	u64 ena, run, val;
	bool first;

	if (!(aggr_map || aggr_get_id))
		return;

	aggr_update_shadow();

	/*
	 * With metric_only everything is on a single line.
	 * Without it, each counter gets its own line.
	 */
	for (s = 0; s < aggr_map->nr; s++) {
		struct aggr_data ad;
		if (prefix && metric_only)
			fprintf(output, "%s", prefix);

		ad.id = id = aggr_map->map[s];
		first = true;
		evlist__for_each_entry(evsel_list, counter) {
			if (is_duration_time(counter))
				continue;
			ad.val = ad.ena = ad.run = 0;
			ad.nr = 0;
			if (!collect_data(counter, aggr_cb, &ad))
				continue;
			nr = ad.nr;
			ena = ad.ena;
			run = ad.run;
			val = ad.val;
			if (first && metric_only) {
				first = false;
				aggr_printout(counter, id, nr);
			}
			if (prefix && !metric_only)
				fprintf(output, "%s", prefix);

			uval = val * counter->scale;
			printout(id, nr, counter, uval, prefix, run, ena, 1.0,
				 &rt_stat);
			if (!metric_only)
				fputc('\n', output);
		}
		if (metric_only)
			fputc('\n', output);
	}
}
perf stat: Resort '--per-thread' result There are many threads reported if we enable '--per-thread' globally. 1. Most of the threads are not counted or counting value 0. This patch removes these threads. 2. We also resort the threads in display according to the counting value. It's useful for user to see the hottest threads easily. For example, the new results would be: root@skl:/tmp# perf stat --per-thread ^C Performance counter stats for 'system wide': perf-24165 4.302433 cpu-clock (msec) # 0.001 CPUs utilized vmstat-23127 1.562215 cpu-clock (msec) # 0.000 CPUs utilized irqbalance-2780 0.827851 cpu-clock (msec) # 0.000 CPUs utilized sshd-23111 0.278308 cpu-clock (msec) # 0.000 CPUs utilized thermald-2841 0.230880 cpu-clock (msec) # 0.000 CPUs utilized sshd-23058 0.207306 cpu-clock (msec) # 0.000 CPUs utilized kworker/0:2-19991 0.133983 cpu-clock (msec) # 0.000 CPUs utilized kworker/u16:1-18249 0.125636 cpu-clock (msec) # 0.000 CPUs utilized rcu_sched-8 0.085533 cpu-clock (msec) # 0.000 CPUs utilized kworker/u16:2-23146 0.077139 cpu-clock (msec) # 0.000 CPUs utilized gmain-2700 0.041789 cpu-clock (msec) # 0.000 CPUs utilized kworker/4:1-15354 0.028370 cpu-clock (msec) # 0.000 CPUs utilized kworker/6:0-17528 0.023895 cpu-clock (msec) # 0.000 CPUs utilized kworker/4:1H-1887 0.013209 cpu-clock (msec) # 0.000 CPUs utilized kworker/5:2-31362 0.011627 cpu-clock (msec) # 0.000 CPUs utilized watchdog/0-11 0.010892 cpu-clock (msec) # 0.000 CPUs utilized kworker/3:2-12870 0.010220 cpu-clock (msec) # 0.000 CPUs utilized ksoftirqd/0-7 0.008869 cpu-clock (msec) # 0.000 CPUs utilized watchdog/1-14 0.008476 cpu-clock (msec) # 0.000 CPUs utilized watchdog/7-50 0.002944 cpu-clock (msec) # 0.000 CPUs utilized watchdog/3-26 0.002893 cpu-clock (msec) # 0.000 CPUs utilized watchdog/4-32 0.002759 cpu-clock (msec) # 0.000 CPUs utilized watchdog/2-20 0.002429 cpu-clock (msec) # 0.000 CPUs utilized watchdog/6-44 0.001491 cpu-clock (msec) # 0.000 CPUs utilized watchdog/5-38 0.001477 cpu-clock (msec) # 0.000 CPUs utilized rcu_sched-8 10 context-switches # 0.117 M/sec kworker/u16:1-18249 7 context-switches # 0.056 M/sec sshd-23111 4 context-switches # 0.014 M/sec vmstat-23127 4 context-switches # 0.003 M/sec perf-24165 4 context-switches # 0.930 K/sec kworker/0:2-19991 3 context-switches # 0.022 M/sec kworker/u16:2-23146 3 context-switches # 0.039 M/sec kworker/4:1-15354 2 context-switches # 0.070 M/sec kworker/6:0-17528 2 context-switches # 0.084 M/sec sshd-23058 2 context-switches # 0.010 M/sec ksoftirqd/0-7 1 context-switches # 0.113 M/sec watchdog/0-11 1 context-switches # 0.092 M/sec watchdog/1-14 1 context-switches # 0.118 M/sec watchdog/2-20 1 context-switches # 0.412 M/sec watchdog/3-26 1 context-switches # 0.346 M/sec watchdog/4-32 1 context-switches # 0.362 M/sec watchdog/5-38 1 context-switches # 0.677 M/sec watchdog/6-44 1 context-switches # 0.671 M/sec watchdog/7-50 1 context-switches # 0.340 M/sec kworker/4:1H-1887 1 context-switches # 0.076 M/sec thermald-2841 1 context-switches # 0.004 M/sec gmain-2700 1 context-switches # 0.024 M/sec irqbalance-2780 1 context-switches # 0.001 M/sec kworker/3:2-12870 1 context-switches # 0.098 M/sec kworker/5:2-31362 1 context-switches # 0.086 M/sec kworker/u16:1-18249 2 cpu-migrations # 0.016 M/sec kworker/u16:2-23146 2 cpu-migrations # 0.026 M/sec rcu_sched-8 1 cpu-migrations # 0.012 M/sec sshd-23058 1 cpu-migrations # 0.005 M/sec perf-24165 8,833,385 cycles # 2.053 GHz vmstat-23127 1,702,699 cycles # 1.090 GHz irqbalance-2780 739,847 cycles # 0.894 GHz sshd-23111 
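/*
 * qsort() comparator: sort perf_aggr_thread_value entries by raw counter
 * value, highest first, so the hottest threads are printed at the top.
 */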
static int cmp_val(const void *a, const void *b)
{
	return ((struct perf_aggr_thread_value *)b)->val -
		((struct perf_aggr_thread_value *)a)->val;
}
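
/*
 * Build one entry per thread with the thread's counts summed over all
 * CPUs, drop zero-valued threads when --per-thread is enabled globally,
 * and return the remaining entries sorted by value, descending. *ret
 * receives the number of entries kept; the caller frees the buffer.
 */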
static struct perf_aggr_thread_value *sort_aggr_thread(
					struct perf_evsel *counter,
					int nthreads, int ncpus,
					int *ret)
{
	int cpu, thread, i = 0;
	double uval;
	struct perf_aggr_thread_value *buf;

	buf = calloc(nthreads, sizeof(struct perf_aggr_thread_value));
	if (!buf)
		return NULL;

	for (thread = 0; thread < nthreads; thread++) {
		u64 ena = 0, run = 0, val = 0;
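		/* Aggregate this thread's count and timing data over all CPUs. */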
		for (cpu = 0; cpu < ncpus; cpu++) {
			val += perf_counts(counter->counts, cpu, thread)->val;
			ena += perf_counts(counter->counts, cpu, thread)->ena;
			run += perf_counts(counter->counts, cpu, thread)->run;
		}

		uval = val * counter->scale;
		/*
		 * Skip threads whose scaled count is zero when --per-thread
		 * is enabled globally, otherwise the output is flooded with
		 * zero lines.
		 */
		if (uval == 0.0 && target__has_per_thread(&target))
			continue;

		buf[i].counter = counter;
		buf[i].id = thread;
		buf[i].uval = uval;
		buf[i].val = val;
		buf[i].run = run;
		buf[i].ena = ena;
		i++;
	}
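
	/* Sort the i surviving entries, hottest first. */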
	qsort(buf, i, sizeof(struct perf_aggr_thread_value), cmp_val);

	if (ret)
		*ret = i;

	return buf;
}
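
/*
 * Output one line per thread for @counter, hottest thread first, as done
 * for 'perf stat --per-thread'. A non-NULL @prefix is printed at the start
 * of every line (e.g. the timestamp in interval mode).
 */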
static void print_aggr_thread(struct perf_evsel *counter, char *prefix)
{
	FILE *output = stat_config.output;
	int nthreads = thread_map__nr(counter->threads);
	int ncpus = cpu_map__nr(counter->cpus);
	int thread, sorted_threads, id;
	struct perf_aggr_thread_value *buf;

	buf = sort_aggr_thread(counter, nthreads, ncpus, &sorted_threads);
	if (!buf) {
		perror("cannot sort aggr thread");
		return;
	}

	for (thread = 0; thread < sorted_threads; thread++) {
		if (prefix)
			fprintf(output, "%s", prefix);
2017-12-05 14:03:11 +00:00
id = buf[thread].id;
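/*
 * Route the line to this thread's dedicated runtime-stats slot when
 * per-thread shadow stats were allocated, otherwise fall back to the
 * global rt_stat.
 */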
if (stat_config.stats)
printout(id, 0, buf[thread].counter, buf[thread].uval,
prefix, buf[thread].run, buf[thread].ena, 1.0,
&stat_config.stats[id]);
else
printout(id, 0, buf[thread].counter, buf[thread].uval,
prefix, buf[thread].run, buf[thread].ena, 1.0,
&rt_stat);
fputc('\n', output);
}
free(buf);
}
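/*
 * Totals accumulated while folding one counter's per-run averages
 * (value, enabled time, running time) into a single aggregated line.
 */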
struct caggr_data {
double avg, avg_enabled, avg_running;
};
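/* collect_data() callback: add one evsel's averaged stats to the totals. */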
static void counter_aggr_cb(struct perf_evsel *counter, void *data,
bool first __maybe_unused)
{
struct caggr_data *cd = data;
struct perf_stat_evsel *ps = counter->stats;
cd->avg += avg_stats(&ps->res_stats[0]);
cd->avg_enabled += avg_stats(&ps->res_stats[1]);
cd->avg_running += avg_stats(&ps->res_stats[2]);
}
/*
* Print out the results of a single counter:
* aggregated counts in system-wide mode
*/
static void print_counter_aggr(struct perf_evsel *counter, char *prefix)
{
FILE *output = stat_config.output;
double uval;
struct caggr_data cd = { .avg = 0.0 };
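/*
 * Gather the averaged stats; collect_data() also folds in any
 * identically named aliases this counter was merged with.
 */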
if (!collect_data(counter, counter_aggr_cb, &cd))
return;
if (prefix && !metric_only)
fprintf(output, "%s", prefix);
uval = cd.avg * counter->scale;
printout(-1, 0, counter, uval, prefix, cd.avg_running, cd.avg_enabled,
cd.avg, &rt_stat);
if (!metric_only)
fprintf(output, "\n");
}
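
/*
 * collect_data() callback for -A mode: accumulate the raw count and
 * the enabled/running times of one counter for the CPU selected in
 * struct aggr_data.
 */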
static void counter_cb(struct perf_evsel *counter, void *data,
bool first __maybe_unused)
{
struct aggr_data *ad = data;
ad->val += perf_counts(counter->counts, ad->cpu, 0)->val;
ad->ena += perf_counts(counter->counts, ad->cpu, 0)->ena;
ad->run += perf_counts(counter->counts, ad->cpu, 0)->run;
}
/*
 * Print out the results of a single counter:
 * does not use aggregated counts in system-wide mode
 */
static void print_counter(struct perf_evsel *counter, char *prefix)
{
FILE *output = stat_config.output;
u64 ena, run, val;
double uval;
int cpu;
for (cpu = 0; cpu < perf_evsel__nr_cpus(counter); cpu++) {
struct aggr_data ad = { .cpu = cpu };
if (!collect_data(counter, counter_cb, &ad))
return;
val = ad.val;
ena = ad.ena;
run = ad.run;
if (prefix)
fprintf(output, "%s", prefix);
uval = val * counter->scale;
printout(cpu, 0, counter, uval, prefix, run, ena, 1.0,
&rt_stat);
fputc('\n', output);
}
}
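
/*
 * Print one row per CPU in combined -A --metric-only mode: the first
 * event of the row prints the CPU label, then all metrics for that
 * CPU share the line.
 */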
static void print_no_aggr_metric(char *prefix)
{
int cpu;
int nrcpus = 0;
struct perf_evsel *counter;
u64 ena, run, val;
double uval;
nrcpus = evsel_list->cpus->nr;
for (cpu = 0; cpu < nrcpus; cpu++) {
bool first = true;
if (prefix)
fputs(prefix, stat_config.output);
evlist__for_each_entry(evsel_list, counter) {
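/* duration_time is only consumed by metrics, never printed itself */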
if (is_duration_time(counter))
continue;
if (first) {
aggr_printout(counter, cpu, 0);
first = false;
}
val = perf_counts(counter->counts, cpu, 0)->val;
ena = perf_counts(counter->counts, cpu, 0)->ena;
run = perf_counts(counter->counts, cpu, 0)->run;
uval = val * counter->scale;
printout(cpu, 0, counter, uval, prefix, run, ena, 1.0,
&rt_stat);
}
fputc('\n', stat_config.output);
}
}
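
/*
 * Width of the aggregation-id column printed by aggr_printout() for
 * each mode; used to indent the metric-only headers so they line up.
 */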
static int aggr_header_lens[] = {
[AGGR_CORE] = 18,
[AGGR_SOCKET] = 12,
[AGGR_NONE] = 6,
[AGGR_THREAD] = 24,
[AGGR_GLOBAL] = 0,
};
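
/*
 * CSV header prefixes matching the id fields emitted for each mode
 * (AGGR_GLOBAL prints no id column, hence the empty string).
 */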
static const char *aggr_header_csv[] = {
[AGGR_CORE] = "core,cpus,",
[AGGR_SOCKET] = "socket,cpus,",
[AGGR_NONE] = "cpu,",
[AGGR_THREAD] = "comm-pid,",
[AGGR_GLOBAL] = ""
};
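
/*
 * Print the --metric-only column headers. The metric names are not
 * known up front, so a dummy shadow-stat pass is run and the
 * print_metric_header callback emits only the headers.
 */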
static void print_metric_headers(const char *prefix, bool no_indent)
{
struct perf_stat_output_ctx out;
struct perf_evsel *counter;
struct outstate os = {
.fh = stat_config.output
};
if (prefix)
fprintf(stat_config.output, "%s", prefix);
if (!csv_output && !no_indent)
fprintf(stat_config.output, "%*s",
aggr_header_lens[stat_config.aggr_mode], "");
if (csv_output) {
if (stat_config.interval)
fputs("time,", stat_config.output);
fputs(aggr_header_csv[stat_config.aggr_mode],
stat_config.output);
}
/* Print metrics headers only */
evlist__for_each_entry(evsel_list, counter) {
if (is_duration_time(counter))
continue;
os.evsel = counter;
out.ctx = &os;
out.print_metric = print_metric_header;
out.new_line = new_line_metric;
out.force_header = true;
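/* zero-value pass: only the headers come out of the callback */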
perf_stat__print_shadow_stats(counter, 0,
0,
&out,
&metric_events,
&rt_stat);
}
fputc('\n', stat_config.output);
}
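
/*
 * Print the per-interval (-I) timestamp prefix and, when the header is
 * due, a column banner such as:
 *
 *   #           time             counts unit events
 *
 * whose id column varies with the aggregation mode (socket, core, CPU
 * or comm-pid).
 */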
static void print_interval(char *prefix, struct timespec *ts)
{
FILE *output = stat_config.output;
static int num_print_interval;
sprintf(prefix, "%6lu.%09lu%s", ts->tv_sec, ts->tv_nsec, csv_sep);
if (num_print_interval == 0 && !csv_output) {
switch (stat_config.aggr_mode) {
case AGGR_SOCKET:
fprintf(output, "# time socket cpus");
if (!metric_only)
fprintf(output, " counts %*s events\n", unit_width, "unit");
break;
case AGGR_CORE:
fprintf(output, "# time core cpus");
if (!metric_only)
fprintf(output, " counts %*s events\n", unit_width, "unit");
break;
case AGGR_NONE:
fprintf(output, "# time CPU");
if (!metric_only)
fprintf(output, " counts %*s events\n", unit_width, "unit");
break;
case AGGR_THREAD:
fprintf(output, "# time comm-pid");
if (!metric_only)
fprintf(output, " counts %*s events\n", unit_width, "unit");
break;
case AGGR_GLOBAL:
default:
fprintf(output, "# time");
if (!metric_only)
fprintf(output, " counts %*s events\n", unit_width, "unit");
break;
case AGGR_UNSET:
break;
}
}
if (num_print_interval == 0 && metric_only)
print_metric_headers(" ", true);
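/* re-print the banner every 25 intervals so it stays in view */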
if (++num_print_interval == 25)
num_print_interval = 0;
}
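
/*
 * Print the "Performance counter stats for ..." banner naming the
 * target: the command line, a CPU list, a process/thread id, or
 * "system wide".
 */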
static void print_header(int argc, const char **argv)
{
FILE *output = stat_config.output;
int i;
fflush(stdout);
if (!csv_output) {
fprintf(output, "\n");
fprintf(output, " Performance counter stats for ");
if (target.system_wide)
fprintf(output, "\'system wide");
else if (target.cpu_list)
fprintf(output, "\'CPU(s) %s", target.cpu_list);
else if (!target__has_task(&target)) {
fprintf(output, "\'%s", argv ? argv[0] : "pipe");
for (i = 1; argv && (i < argc); i++)
fprintf(output, " %s", argv[i]);
} else if (target.pid)
fprintf(output, "process id \'%s", target.pid);
else
fprintf(output, "thread id \'%s", target.tid);
fprintf(output, "\'");
if (run_count > 1)
fprintf(output, " (%d runs)", run_count);
fprintf(output, ":\n\n");
}
}
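
/*
 * Print the closing "seconds time elapsed" line, the +- noise
 * percentage when -r repeated the run, and, if counters could not be
 * scheduled, a hint to disable the NMI watchdog (it occupies one PMU
 * counter while active).
 */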
static void print_footer(void)
{
FILE *output = stat_config.output;
int n;
if (!null_run)
fprintf(output, "\n");
fprintf(output, " %17.9f seconds time elapsed",
avg_stats(&walltime_nsecs_stats) / NSEC_PER_SEC);
if (run_count > 1) {
fprintf(output, " ");
print_noise_pct(stddev_stats(&walltime_nsecs_stats),
avg_stats(&walltime_nsecs_stats));
}
fprintf(output, "\n\n");
if (print_free_counters_hint &&
sysctl__read_int("kernel/nmi_watchdog", &n) >= 0 &&
n > 0)
fprintf(output,
"Some events weren't counted. Try disabling the NMI watchdog:\n"
" echo 0 > /proc/sys/kernel/nmi_watchdog\n"
" perf stat ...\n"
" echo 1 > /proc/sys/kernel/nmi_watchdog\n");
}
static void print_counters(struct timespec *ts, int argc, const char **argv)
{
int interval = stat_config.interval;
struct perf_evsel *counter;
char buf[64], *prefix = NULL;
/* Do not print anything if we record to the pipe. */
if (STAT_RECORD && perf_stat.data.is_pipe)
return;
if (interval)
print_interval(prefix = buf, ts);
else
print_header(argc, argv);
if (metric_only) {
static int num_print_iv;
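		/*
		 * Print the metric headers up front and reprint them
		 * every 25 printouts so they stay visible; interval
		 * mode prints its headers elsewhere.
		 */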
if (num_print_iv == 0 && !interval)
print_metric_headers(prefix, false);
if (num_print_iv++ == 25)
num_print_iv = 0;
if (stat_config.aggr_mode == AGGR_GLOBAL && prefix)
fprintf(stat_config.output, "%s", prefix);
}
switch (stat_config.aggr_mode) {
case AGGR_CORE:
case AGGR_SOCKET:
print_aggr(prefix);
break;
case AGGR_THREAD:
evlist__for_each_entry(evsel_list, counter) {
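			/*
			 * duration_time only feeds metric computation;
			 * it is not printed as a raw counter here.
			 */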
if (is_duration_time(counter))
continue;
print_aggr_thread(counter, prefix);
}
break;
case AGGR_GLOBAL:
evlist__for_each_entry(evsel_list, counter) {
if (is_duration_time(counter))
continue;
print_counter_aggr(counter, prefix);
}
if (metric_only)
fputc('\n', stat_config.output);
break;
case AGGR_NONE:
if (metric_only)
print_no_aggr_metric(prefix);
else {
evlist__for_each_entry(evsel_list, counter) {
if (is_duration_time(counter))
continue;
print_counter(counter, prefix);
}
}
break;
case AGGR_UNSET:
default:
break;
}
if (!interval && !csv_output)
print_footer();
fflush(stat_config.output);
}
static volatile int signr = -1;
static void skip_signal(int signo)
{
if ((child_pid == -1) || stat_config.interval)
done = 1;
signr = signo;
	/*
	 * Render child_pid harmless: ensure that, even after a race
	 * with fast PID recycling, we cannot send SIGTERM to an
	 * unrelated process.
	 */
child_pid = -1;
}
static void sig_atexit(void)
{
sigset_t set, oset;
	/*
	 * Avoid racing with the SIGCHLD handler in skip_signal(),
	 * which modifies child_pid; the goal is to avoid sending
	 * SIGTERM to an unrelated process.
	 */
sigemptyset(&set);
sigaddset(&set, SIGCHLD);
sigprocmask(SIG_BLOCK, &set, &oset);
if (child_pid != -1)
kill(child_pid, SIGTERM);
sigprocmask(SIG_SETMASK, &oset, NULL);
if (signr == -1)
return;
signal(signr, SIG_DFL);
kill(getpid(), signr);
}
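/*
 * big_num_opt tracks whether the user explicitly enabled or disabled
 * -B/--big-num; it keeps its "untouched" default otherwise, so that
 * -x/--field-separator can turn big numbers off only when the user
 * did not explicitly ask for them.
 */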
static int stat__set_big_num(const struct option *opt __maybe_unused,
const char *s __maybe_unused, int unset)
{
big_num_opt = unset ? 0 : 1;
return 0;
}
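/*
 * Record that --metric-only was set explicitly on the command line,
 * so that modes which enable it implicitly (such as --topdown) do not
 * override the user's choice.
 */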
static int enable_metric_only(const struct option *opt __maybe_unused,
const char *s __maybe_unused, int unset)
{
force_metric_only = true;
metric_only = !unset;
return 0;
}
static int parse_metric_groups(const struct option *opt,
const char *str,
int unset __maybe_unused)
{
return metricgroup__parse_groups(opt, str, &metric_events);
}
static const struct option stat_options[] = {
OPT_BOOLEAN('T', "transaction", &transaction_run,
"hardware transaction statistics"),
OPT_CALLBACK('e', "event", &evsel_list, "event",
"event selector. use 'perf list' to list available events",
parse_events_option),
OPT_CALLBACK(0, "filter", &evsel_list, "filter",
"event filter", parse_filter),
OPT_BOOLEAN('i', "no-inherit", &no_inherit,
"child tasks do not inherit counters"),
OPT_STRING('p', "pid", &target.pid, "pid",
"stat events on existing process id"),
OPT_STRING('t', "tid", &target.tid, "tid",
"stat events on existing thread id"),
OPT_BOOLEAN('a', "all-cpus", &target.system_wide,
"system-wide collection from all CPUs"),
OPT_BOOLEAN('g', "group", &group,
"put the counters into a counter group"),
OPT_BOOLEAN('c', "scale", &stat_config.scale, "scale/normalize counters"),
OPT_INCR('v', "verbose", &verbose,
"be more verbose (show counter open errors, etc)"),
OPT_INTEGER('r', "repeat", &run_count,
"repeat command and print average + stddev (max: 100, forever: 0)"),
OPT_BOOLEAN('n', "null", &null_run,
"null run - dont start any counters"),
OPT_INCR('d', "detailed", &detailed_run,
"detailed run - start a lot of events"),
OPT_BOOLEAN('S', "sync", &sync_run,
"call sync() before starting a run"),
OPT_CALLBACK_NOOPT('B', "big-num", NULL, NULL,
"print large numbers with thousands\' separators",
stat__set_big_num),
OPT_STRING('C', "cpu", &target.cpu_list, "cpu",
"list of cpus to monitor in system-wide"),
OPT_SET_UINT('A', "no-aggr", &stat_config.aggr_mode,
"disable CPU count aggregation", AGGR_NONE),
OPT_BOOLEAN(0, "no-merge", &no_merge, "Do not merge identical named events"),
OPT_STRING('x', "field-separator", &csv_sep, "separator",
"print counts with custom separator"),
OPT_CALLBACK('G', "cgroup", &evsel_list, "name",
"monitor event in cgroup name only", parse_cgroups),
OPT_STRING('o', "output", &output_name, "file", "output file name"),
OPT_BOOLEAN(0, "append", &append_file, "append to the output file"),
OPT_INTEGER(0, "log-fd", &output_fd,
"log output to fd, instead of stderr"),
OPT_STRING(0, "pre", &pre_cmd, "command",
"command to run prior to the measured command"),
OPT_STRING(0, "post", &post_cmd, "command",
"command to run after to the measured command"),
OPT_UINTEGER('I', "interval-print", &stat_config.interval,
"print counts at regular interval in ms (>= 10)"),
OPT_INTEGER(0, "interval-count", &stat_config.times,
"print counts for fixed number of times"),
OPT_UINTEGER(0, "timeout", &stat_config.timeout,
"stop workload and print counts after a timeout period in ms (>= 10ms)"),
OPT_SET_UINT(0, "per-socket", &stat_config.aggr_mode,
"aggregate counts per processor socket", AGGR_SOCKET),
OPT_SET_UINT(0, "per-core", &stat_config.aggr_mode,
"aggregate counts per physical processor core", AGGR_CORE),
OPT_SET_UINT(0, "per-thread", &stat_config.aggr_mode,
"aggregate counts per thread", AGGR_THREAD),
OPT_UINTEGER('D', "delay", &initial_delay,
"ms to wait before starting measurement after program start"),
OPT_CALLBACK_NOOPT(0, "metric-only", &metric_only, NULL,
"Only print computed metrics. No raw values", enable_metric_only),
OPT_BOOLEAN(0, "topdown", &topdown_run,
"measure topdown level 1 statistics"),
OPT_BOOLEAN(0, "smi-cost", &smi_cost,
"measure SMI cost"),
OPT_CALLBACK('M', "metrics", &evsel_list, "metric/metric group list",
"monitor specified metrics or metric groups (separated by ,)",
parse_metric_groups),
OPT_END()
};
static int perf_stat__get_socket(struct cpu_map *map, int cpu)
{
return cpu_map__get_socket(map, cpu, NULL);
}
static int perf_stat__get_core(struct cpu_map *map, int cpu)
{
return cpu_map__get_core(map, cpu, NULL);
}
static int cpu_map__get_max(struct cpu_map *map)
{
int i, max = -1;
for (i = 0; i < map->nr; i++) {
if (map->map[i] > max)
max = map->map[i];
}
return max;
}
static struct cpu_map *cpus_aggr_map;
static int perf_stat__get_aggr(aggr_get_id_t get_id, struct cpu_map *map, int idx)
{
int cpu;
if (idx >= map->nr)
return -1;
cpu = map->map[idx];
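	/* Compute the aggregate id lazily, once per CPU, and cache it. */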
if (cpus_aggr_map->map[cpu] == -1)
cpus_aggr_map->map[cpu] = get_id(map, idx);
return cpus_aggr_map->map[cpu];
}
static int perf_stat__get_socket_cached(struct cpu_map *map, int idx)
{
return perf_stat__get_aggr(perf_stat__get_socket, map, idx);
}
static int perf_stat__get_core_cached(struct cpu_map *map, int idx)
{
return perf_stat__get_aggr(perf_stat__get_core, map, idx);
}
static int perf_stat_init_aggr_mode(void)
{
int nr;
switch (stat_config.aggr_mode) {
case AGGR_SOCKET:
if (cpu_map__build_socket_map(evsel_list->cpus, &aggr_map)) {
perror("cannot build socket map");
return -1;
}
aggr_get_id = perf_stat__get_socket_cached;
break;
case AGGR_CORE:
if (cpu_map__build_core_map(evsel_list->cpus, &aggr_map)) {
perror("cannot build core map");
return -1;
}
aggr_get_id = perf_stat__get_core_cached;
break;
case AGGR_NONE:
case AGGR_GLOBAL:
case AGGR_THREAD:
case AGGR_UNSET:
default:
break;
}
	/*
	 * evsel_list->cpus is the base we operate on; the highest
	 * CPU number in it determines the size of the aggregation
	 * translation cpumap.
	 */
nr = cpu_map__get_max(evsel_list->cpus);
cpus_aggr_map = cpu_map__empty_new(nr + 1);
return cpus_aggr_map ? 0 : -ENOMEM;
}
static void perf_stat__exit_aggr_mode(void)
{
cpu_map__put(aggr_map);
cpu_map__put(cpus_aggr_map);
aggr_map = NULL;
cpus_aggr_map = NULL;
}
static inline int perf_env__get_cpu(struct perf_env *env, struct cpu_map *map, int idx)
{
int cpu;
	if (idx >= map->nr)
return -1;
cpu = map->map[idx];
if (cpu >= env->nr_cpus_avail)
return -1;
return cpu;
}
static int perf_env__get_socket(struct cpu_map *map, int idx, void *data)
{
struct perf_env *env = data;
int cpu = perf_env__get_cpu(env, map, idx);
return cpu == -1 ? -1 : env->cpu[cpu].socket_id;
}
static int perf_env__get_core(struct cpu_map *map, int idx, void *data)
{
struct perf_env *env = data;
int core = -1, cpu = perf_env__get_cpu(env, map, idx);
if (cpu != -1) {
int socket_id = env->cpu[cpu].socket_id;
		/*
		 * Encode the socket in the upper 16 bits: core_id is
		 * only unique within a socket, and we need a global
		 * id, so we combine socket and core id.
		 */
core = (socket_id << 16) | (env->cpu[cpu].core_id & 0xffff);
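		/* e.g. socket 1, core 2 -> (1 << 16) | 2 = 0x10002 */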
}
return core;
}
static int perf_env__build_socket_map(struct perf_env *env, struct cpu_map *cpus,
struct cpu_map **sockp)
{
return cpu_map__build_map(cpus, sockp, perf_env__get_socket, env);
}
static int perf_env__build_core_map(struct perf_env *env, struct cpu_map *cpus,
struct cpu_map **corep)
{
return cpu_map__build_map(cpus, corep, perf_env__get_core, env);
}
static int perf_stat__get_socket_file(struct cpu_map *map, int idx)
{
return perf_env__get_socket(map, idx, &perf_stat.session->header.env);
}
static int perf_stat__get_core_file(struct cpu_map *map, int idx)
{
return perf_env__get_core(map, idx, &perf_stat.session->header.env);
}
static int perf_stat_init_aggr_mode_file(struct perf_stat *st)
{
struct perf_env *env = &st->session->header.env;
switch (stat_config.aggr_mode) {
case AGGR_SOCKET:
if (perf_env__build_socket_map(env, evsel_list->cpus, &aggr_map)) {
perror("cannot build socket map");
return -1;
}
aggr_get_id = perf_stat__get_socket_file;
break;
case AGGR_CORE:
if (perf_env__build_core_map(env, evsel_list->cpus, &aggr_map)) {
perror("cannot build core map");
return -1;
}
aggr_get_id = perf_stat__get_core_file;
break;
case AGGR_NONE:
case AGGR_GLOBAL:
case AGGR_THREAD:
case AGGR_UNSET:
default:
break;
}
return 0;
}
static int topdown_filter_events(const char **attr, char **str, bool use_group)
{
int off = 0;
int i;
int len = 0;
char *s;
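	/*
	 * Keep only the topdown events this CPU's PMU actually exposes,
	 * compacting attr[] in place and summing up the string length
	 * needed for the event list.
	 */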
for (i = 0; attr[i]; i++) {
if (pmu_have_event("cpu", attr[i])) {
len += strlen(attr[i]) + 1;
attr[i - off] = attr[i];
} else
off++;
}
attr[i - off] = NULL;
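	/* Room for the event names, the optional '{' '}' pair and the NUL. */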
*str = malloc(len + 1 + 2);
if (!*str)
return -1;
s = *str;
if (i - off == 0) {
*s = 0;
return 0;
}
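	/* Build "ev1,ev2,..." or, when grouping, "{ev1,ev2,...}". */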
if (use_group)
*s++ = '{';
for (i = 0; attr[i]; i++) {
strcpy(s, attr[i]);
s += strlen(s);
*s++ = ',';
}
if (use_group) {
s[-1] = '}';
*s = 0;
} else
s[-1] = 0;
return 0;
}
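/*
 * Weak defaults, meant to be overridden by arch code that can detect
 * whether the topdown events can be scheduled as a single group and
 * warn if they cannot.
 */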
__weak bool arch_topdown_check_group(bool *warn)
{
*warn = false;
return false;
}
__weak void arch_topdown_group_warn(void)
{
}
/*
* Add default attributes, if there were no attributes specified or
* if -d/--detailed, -d -d or -d -d -d is used:
*/
static int add_default_attributes(void)
{
	int err;
	struct perf_event_attr default_attrs0[] = {
		{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_TASK_CLOCK },
		{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_CONTEXT_SWITCHES },
		{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_CPU_MIGRATIONS },
		{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_PAGE_FAULTS },
		{ .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_CPU_CYCLES },
	};
	struct perf_event_attr frontend_attrs[] = {
		{ .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_STALLED_CYCLES_FRONTEND },
	};
	struct perf_event_attr backend_attrs[] = {
		{ .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
	};
	struct perf_event_attr default_attrs1[] = {
		{ .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_INSTRUCTIONS },
		{ .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
		{ .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_BRANCH_MISSES },
	};
	/*
	 * Detailed stats (-d), covering the L1 and last level data caches:
	 */
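	/*
	 * PERF_TYPE_HW_CACHE events pack (cache, op, result) into
	 * attr.config as: cache_id | (op_id << 8) | (result_id << 16),
	 * which is the encoding spelled out in the initializers below.
	 */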
	struct perf_event_attr detailed_attrs[] = {
		{ .type = PERF_TYPE_HW_CACHE,
		  .config =
			 PERF_COUNT_HW_CACHE_L1D << 0 |
			(PERF_COUNT_HW_CACHE_OP_READ << 8) |
			(PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },

		{ .type = PERF_TYPE_HW_CACHE,
		  .config =
			 PERF_COUNT_HW_CACHE_L1D << 0 |
			(PERF_COUNT_HW_CACHE_OP_READ << 8) |
			(PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },

		{ .type = PERF_TYPE_HW_CACHE,
		  .config =
			 PERF_COUNT_HW_CACHE_LL << 0 |
			(PERF_COUNT_HW_CACHE_OP_READ << 8) |
			(PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },

		{ .type = PERF_TYPE_HW_CACHE,
		  .config =
			 PERF_COUNT_HW_CACHE_LL << 0 |
			(PERF_COUNT_HW_CACHE_OP_READ << 8) |
			(PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },
	};
	/*
	 * Very detailed stats (-d -d), covering the instruction cache and the TLB caches:
	 */
	struct perf_event_attr very_detailed_attrs[] = {
		{ .type = PERF_TYPE_HW_CACHE,
		  .config =
			 PERF_COUNT_HW_CACHE_L1I << 0 |
			(PERF_COUNT_HW_CACHE_OP_READ << 8) |
			(PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },

		{ .type = PERF_TYPE_HW_CACHE,
		  .config =
			 PERF_COUNT_HW_CACHE_L1I << 0 |
			(PERF_COUNT_HW_CACHE_OP_READ << 8) |
			(PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },

		{ .type = PERF_TYPE_HW_CACHE,
		  .config =
			 PERF_COUNT_HW_CACHE_DTLB << 0 |
			(PERF_COUNT_HW_CACHE_OP_READ << 8) |
			(PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },

		{ .type = PERF_TYPE_HW_CACHE,
		  .config =
			 PERF_COUNT_HW_CACHE_DTLB << 0 |
			(PERF_COUNT_HW_CACHE_OP_READ << 8) |
			(PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },

		{ .type = PERF_TYPE_HW_CACHE,
		  .config =
			 PERF_COUNT_HW_CACHE_ITLB << 0 |
			(PERF_COUNT_HW_CACHE_OP_READ << 8) |
			(PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },

		{ .type = PERF_TYPE_HW_CACHE,
		  .config =
			 PERF_COUNT_HW_CACHE_ITLB << 0 |
			(PERF_COUNT_HW_CACHE_OP_READ << 8) |
			(PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },
	};
	/*
	 * Very, very detailed stats (-d -d -d), adding prefetch events:
	 */
	struct perf_event_attr very_very_detailed_attrs[] = {
		{ .type = PERF_TYPE_HW_CACHE,
		  .config =
			 PERF_COUNT_HW_CACHE_L1D << 0 |
			(PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
			(PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },

		{ .type = PERF_TYPE_HW_CACHE,
		  .config =
			 PERF_COUNT_HW_CACHE_L1D << 0 |
			(PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
			(PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },
	};
	/* Set attrs if no event is selected and !null_run: */
	if (null_run)
		return 0;
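
	/*
	 * -T/--transaction: prefer the full transaction event set when the
	 * PMU exposes cycles-ct and el-start (e.g. Intel TSX elision),
	 * otherwise fall back to the limited set.
	 */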
	if (transaction_run) {
		if (pmu_have_event("cpu", "cycles-ct") &&
		    pmu_have_event("cpu", "el-start"))
			err = parse_events(evsel_list, transaction_attrs, NULL);
		else
			err = parse_events(evsel_list, transaction_limited_attrs, NULL);
		if (err) {
			fprintf(stderr, "Cannot set up transaction events\n");
			return -1;
		}
		return 0;
	}
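
	/*
	 * --smi-cost: count SMI cost through the msr PMU. Enable the
	 * freeze_on_smi knob if it is not set already and note via
	 * smi_reset that it has to be restored afterwards.
	 */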
	if (smi_cost) {
		int smi;

		if (sysfs__read_int(FREEZE_ON_SMI_PATH, &smi) < 0) {
			fprintf(stderr, "freeze_on_smi is not supported.\n");
			return -1;
		}

		if (!smi) {
			if (sysfs__write_int(FREEZE_ON_SMI_PATH, 1) < 0) {
				fprintf(stderr, "Failed to set freeze_on_smi.\n");
				return -1;
			}
			smi_reset = true;
		}

		if (pmu_have_event("msr", "aperf") &&
		    pmu_have_event("msr", "smi")) {
			if (!force_metric_only)
				metric_only = true;
			err = parse_events(evsel_list, smi_cost_attrs, NULL);
		} else {
			fprintf(stderr, "To measure SMI cost, it needs "
				"msr/aperf/, msr/smi/ and cpu/cycles/ support\n");
			return -1;
		}
		if (err) {
			fprintf(stderr, "Cannot set up SMI cost events\n");
			return -1;
		}
		return 0;
	}
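
	/*
	 * --topdown: level-1 top-down analysis. The topdown-* events
	 * (total-slots, slots-issued, slots-retired, fetch-bubbles,
	 * recovery-bubbles) feed the frontend/backend bound, bad
	 * speculation and retiring metrics, and need system-wide
	 * counting with per-core aggregation.
	 */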
	if (topdown_run) {
		char *str = NULL;
		bool warn = false;

		if (stat_config.aggr_mode != AGGR_GLOBAL &&
		    stat_config.aggr_mode != AGGR_CORE) {
			pr_err("top down event configuration requires --per-core mode\n");
			return -1;
		}
		stat_config.aggr_mode = AGGR_CORE;
		if (nr_cgroups || !target__has_cpu(&target)) {
			pr_err("top down event configuration requires system-wide mode (-a)\n");
			return -1;
		}

		if (!force_metric_only)
			metric_only = true;
		if (topdown_filter_events(topdown_attrs, &str,
					  arch_topdown_check_group(&warn)) < 0) {
			pr_err("Out of memory\n");
			return -1;
		}
		if (topdown_attrs[0] && str) {
			if (warn)
				arch_topdown_group_warn();
			err = parse_events(evsel_list, str, NULL);
			if (err) {
				fprintf(stderr,
					"Cannot set up top down events %s: %d\n",
					str, err);
				free(str);
				return -1;
			}
		} else {
			fprintf(stderr, "System does not support topdown\n");
			return -1;
		}
		free(str);
	}

	if (!evsel_list->nr_entries) {
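		/*
		 * Count cpu-clock instead of task-clock for CPU-wide
		 * targets: a mostly idle workload (e.g. 'sleep N') uses
		 * almost no task time, so cpu-clock reports utilization
		 * more meaningfully there.
		 */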
		if (target__has_cpu(&target))
			default_attrs0[0].config = PERF_COUNT_SW_CPU_CLOCK;

		if (perf_evlist__add_default_attrs(evsel_list, default_attrs0) < 0)
			return -1;
		if (pmu_have_event("cpu", "stalled-cycles-frontend")) {
			if (perf_evlist__add_default_attrs(evsel_list,
							   frontend_attrs) < 0)
				return -1;
		}
		if (pmu_have_event("cpu", "stalled-cycles-backend")) {
			if (perf_evlist__add_default_attrs(evsel_list,
							   backend_attrs) < 0)
				return -1;
		}
		if (perf_evlist__add_default_attrs(evsel_list, default_attrs1) < 0)
			return -1;
	}

	/* Detailed events get appended to the event list: */
	if (detailed_run < 1)
		return 0;

	/* Append detailed run extra attributes: */
	if (perf_evlist__add_default_attrs(evsel_list, detailed_attrs) < 0)
		return -1;

	if (detailed_run < 2)
		return 0;

	/* Append very detailed run extra attributes: */
	if (perf_evlist__add_default_attrs(evsel_list, very_detailed_attrs) < 0)
		return -1;

	if (detailed_run < 3)
		return 0;

	/* Append very, very detailed run extra attributes: */
	return perf_evlist__add_default_attrs(evsel_list, very_very_detailed_attrs);
}

static const char * const stat_record_usage[] = {
	"perf stat record [<options>]",
	NULL,
};
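
/*
 * 'perf stat record' writes a header-only perf.data with counts rather
 * than samples, so clear the header features that only describe sampled
 * or traced data:
 */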
static void init_features(struct perf_session *session)
{
	int feat;

	for (feat = HEADER_FIRST_FEATURE; feat < HEADER_LAST_FEATURE; feat++)
		perf_header__set_feat(&session->header, feat);

	perf_header__clear_feat(&session->header, HEADER_BUILD_ID);
	perf_header__clear_feat(&session->header, HEADER_TRACING_DATA);
	perf_header__clear_feat(&session->header, HEADER_BRANCH_STACK);
	perf_header__clear_feat(&session->header, HEADER_AUXTRACE);
}

static int __cmd_record(int argc, const char **argv)
{
	struct perf_session *session;
	struct perf_data *data = &perf_stat.data;

	argc = parse_options(argc, argv, stat_options, stat_record_usage,
			     PARSE_OPT_STOP_AT_NON_OPTION);
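
	/* For the record subcommand, -o names the output perf.data file: */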
	if (output_name)
		data->file.path = output_name;
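
	/* Recording covers exactly one run; -r/--repeat and --forever are rejected: */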
	if (run_count != 1 || forever) {
		pr_err("Cannot use -r option with perf stat record.\n");
		return -1;
	}

	session = perf_session__new(data, false, NULL);
if (session == NULL) {
pr_err("Perf session creation failed.\n");
return -1;
}
init_features(session);
session->evlist = evsel_list;
perf_stat.session = session;
perf_stat.record = true;
return argc;
}
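/*
 * A stat round closes one measurement period: fold the stat events
 * gathered so far into each evsel and print the result. The FINAL
 * round carries the total wall time; an interval round carries its
 * timestamp so that the printout can be labeled with it.
 */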
static int process_stat_round_event(struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct perf_session *session)
{
struct stat_round_event *stat_round = &event->stat_round;
struct perf_evsel *counter;
struct timespec tsh, *ts = NULL;
const char **argv = session->header.env.cmdline_argv;
int argc = session->header.env.nr_cmdline;
evlist__for_each_entry(evsel_list, counter)
perf_stat_process_counter(&stat_config, counter);
if (stat_round->type == PERF_STAT_ROUND_TYPE__FINAL)
update_stats(&walltime_nsecs_stats, stat_round->time);
if (stat_config.interval && stat_round->time) {
tsh.tv_sec = stat_round->time / NSEC_PER_SEC;
tsh.tv_nsec = stat_round->time % NSEC_PER_SEC;
ts = &tsh;
}
print_counters(ts, argc, argv);
return 0;
}
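/*
 * Import the stat config recorded in the data file. An empty cpu map
 * means the file holds per-task data, where no aggregation mode
 * applies; otherwise the recorded aggregation mode may be overridden
 * from the report command line.
 */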
static
int process_stat_config_event(struct perf_tool *tool,
union perf_event *event,
struct perf_session *session __maybe_unused)
{
struct perf_stat *st = container_of(tool, struct perf_stat, tool);
perf_event__read_stat_config(&stat_config, &event->stat_config);
if (cpu_map__empty(st->cpus)) {
if (st->aggr_mode != AGGR_UNSET)
pr_warning("warning: processing task data, aggregation mode not set\n");
return 0;
}
if (st->aggr_mode != AGGR_UNSET)
stat_config.aggr_mode = st->aggr_mode;
if (perf_stat.data.is_pipe)
perf_stat_init_aggr_mode();
else
perf_stat_init_aggr_mode_file(st);
return 0;
}
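/*
 * The cpu and thread maps arrive as separate events; only once both
 * are in can they be applied to the evlist and the counter stats be
 * allocated, and that must happen exactly once.
 */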
static int set_maps(struct perf_stat *st)
{
if (!st->cpus || !st->threads)
return 0;
if (WARN_ONCE(st->maps_allocated, "stats double allocation\n"))
return -EINVAL;
perf_evlist__set_maps(evsel_list, st->cpus, st->threads);
if (perf_evlist__alloc_stats(evsel_list, true))
return -ENOMEM;
st->maps_allocated = true;
return 0;
}
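/* Pick up the recorded thread map; any duplicate event is ignored. */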
static
int process_thread_map_event(struct perf_tool *tool,
union perf_event *event,
struct perf_session *session __maybe_unused)
{
struct perf_stat *st = container_of(tool, struct perf_stat, tool);
if (st->threads) {
pr_warning("Extra thread map event, ignoring.\n");
return 0;
}
st->threads = thread_map__new_event(&event->thread_map);
if (!st->threads)
return -ENOMEM;
return set_maps(st);
}
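/* Pick up the recorded cpu map; any duplicate event is ignored. */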
static
int process_cpu_map_event(struct perf_tool *tool,
union perf_event *event,
struct perf_session *session __maybe_unused)
{
struct perf_stat *st = container_of(tool, struct perf_stat, tool);
struct cpu_map *cpus;
if (st->cpus) {
pr_warning("Extra cpu map event, ignoring.\n");
return 0;
}
cpus = cpu_map__new_data(&event->cpu_map.data);
if (!cpus)
return -ENOMEM;
st->cpus = cpus;
return set_maps(st);
}
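/*
 * One runtime_stat per monitored thread, so that shadow metrics can
 * be computed per thread (used for --per-thread aggregation).
 */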
static int runtime_stat_new(struct perf_stat_config *config, int nthreads)
{
int i;
config->stats = calloc(nthreads, sizeof(struct runtime_stat));
if (!config->stats)
return -1;
config->stats_num = nthreads;
for (i = 0; i < nthreads; i++)
runtime_stat__init(&config->stats[i]);
return 0;
}
static void runtime_stat_delete(struct perf_stat_config *config)
{
int i;
if (!config->stats)
return;
for (i = 0; i < config->stats_num; i++)
runtime_stat__exit(&config->stats[i]);
	free(config->stats);
	config->stats = NULL;
}
static const char * const stat_report_usage[] = {
"perf stat report [<options>]",
NULL,
};
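/*
 * Tool callbacks used when reading counts back from a stat data file:
 * attr and event_update rebuild the evsel list, the map and config
 * events restore the recording setup, and the stat/stat_round events
 * replay the counts themselves.
 */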
static struct perf_stat perf_stat = {
.tool = {
.attr = perf_event__process_attr,
.event_update = perf_event__process_event_update,
.thread_map = process_thread_map_event,
.cpu_map = process_cpu_map_event,
.stat_config = process_stat_config_event,
.stat = perf_event__process_stat_event,
.stat_round = process_stat_round_event,
},
.aggr_mode = AGGR_UNSET,
};
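/*
 * 'perf stat report' reads counts back from a stat data file, or from
 * stdin when it is a pipe and no input file was named. A typical
 * round trip ('stat.data' is just an example file name):
 *
 *   $ perf stat record -o stat.data -- sleep 1
 *   $ perf stat report -i stat.data --per-core
 */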
static int __cmd_report(int argc, const char **argv)
{
struct perf_session *session;
const struct option options[] = {
OPT_STRING('i', "input", &input_name, "file", "input file name"),
OPT_SET_UINT(0, "per-socket", &perf_stat.aggr_mode,
"aggregate counts per processor socket", AGGR_SOCKET),
OPT_SET_UINT(0, "per-core", &perf_stat.aggr_mode,
"aggregate counts per physical processor core", AGGR_CORE),
OPT_SET_UINT('A', "no-aggr", &perf_stat.aggr_mode,
"disable CPU count aggregation", AGGR_NONE),
OPT_END()
};
struct stat st;
int ret;
argc = parse_options(argc, argv, options, stat_report_usage, 0);
if (!input_name || !strlen(input_name)) {
if (!fstat(STDIN_FILENO, &st) && S_ISFIFO(st.st_mode))
input_name = "-";
else
input_name = "perf.data";
}
perf_stat.data.file.path = input_name;
perf_stat.data.mode = PERF_DATA_MODE_READ;
session = perf_session__new(&perf_stat.data, false, &perf_stat.tool);
if (session == NULL)
return -1;
perf_stat.session = session;
stat_config.output = stderr;
evsel_list = session->evlist;
ret = perf_session__process_events(session);
if (ret)
return ret;
perf_session__delete(session);
return 0;
}
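/*
 * Events that can only count system wide (uncore PMU events, for
 * instance) have counter->system_wide set. If every requested event
 * is of that kind, a plain workload target cannot scope them, hence
 * the fallback to -a below.
 */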
static void setup_system_wide(int forks)
{
	/*
	 * Make system wide (-a) the default target if
	 * no target was specified and one of the following
	 * conditions is met:
	 *
	 * - there's no workload specified
	 * - there is a workload specified but all requested
	 *   events are system wide events
	 */
if (!target__none(&target))
return;
if (!forks)
target.system_wide = true;
else {
struct perf_evsel *counter;
evlist__for_each_entry(evsel_list, counter) {
if (!counter->system_wide)
return;
}
if (evsel_list->nr_entries)
target.system_wide = true;
}
}
int cmd_stat(int argc, const char **argv)
{
const char * const stat_usage[] = {
"perf stat [<options>] [<command>]",
NULL
};
int status = -EINVAL, run_idx;
const char *mode;
FILE *output = stderr;
unsigned int interval, timeout;
const char * const stat_subcommands[] = { "record", "report" };
setlocale(LC_ALL, "");
evsel_list = perf_evlist__new();
if (evsel_list == NULL)
return -ENOMEM;
parse_events__shrink_config_terms();
argc = parse_options_subcommand(argc, argv, stat_options, stat_subcommands,
(const char **) stat_usage,
PARSE_OPT_STOP_AT_NON_OPTION);
perf_stat__collect_metric_expr(evsel_list);
perf_stat__init_shadow_stats();
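	/*
	 * A tab separator passed on the command line typically arrives
	 * as the two characters '\' 't'; turn that into a real tab.
	 */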
if (csv_sep) {
csv_output = true;
if (!strcmp(csv_sep, "\\t"))
csv_sep = "\t";
} else
csv_sep = DEFAULT_SEPARATOR;
if (argc && !strncmp(argv[0], "rec", 3)) {
argc = __cmd_record(argc, argv);
if (argc < 0)
return -1;
} else if (argc && !strncmp(argv[0], "rep", 3))
return __cmd_report(argc, argv);
interval = stat_config.interval;
timeout = stat_config.timeout;
	/*
	 * For the record command, -o is already taken care of.
	 */
if (!STAT_RECORD && output_name && strcmp(output_name, "-"))
output = NULL;
if (output_name && output_fd) {
fprintf(stderr, "cannot use both --output and --log-fd\n");
parse_options_usage(stat_usage, stat_options, "o", 1);
parse_options_usage(NULL, stat_options, "log-fd", 0);
goto out;
}
if (metric_only && stat_config.aggr_mode == AGGR_THREAD) {
fprintf(stderr, "--metric-only is not supported with --per-thread\n");
goto out;
}
if (metric_only && run_count > 1) {
fprintf(stderr, "--metric-only is not supported with -r\n");
goto out;
}
if (output_fd < 0) {
fprintf(stderr, "argument to --log-fd must be a > 0\n");
parse_options_usage(stat_usage, stat_options, "log-fd", 0);
goto out;
}
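/*
 * "output" was cleared above when -o named a regular file: open
 * that file now (append or truncate, depending on append_file)
 * and stamp it with a start timestamp. A --log-fd descriptor is
 * wrapped in a stdio stream instead.
 */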
if (!output) {
struct timespec tm;
mode = append_file ? "a" : "w";
output = fopen(output_name, mode);
if (!output) {
perror("failed to create output file");
return -1;
}
clock_gettime(CLOCK_REALTIME, &tm);
fprintf(output, "# started on %s\n", ctime(&tm.tv_sec));
} else if (output_fd > 0) {
mode = append_file ? "a" : "w";
output = fdopen(output_fd, mode);
if (!output) {
perror("Failed opening logfd");
return -errno;
}
}
stat_config.output = output;
/*
* let the spreadsheet do the pretty-printing
*/
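/*
 * big_num_opt is 1 if -B was passed explicitly, 0 for
 * --no-big-num, and keeps its default value when neither was
 * given. Thousands separators would corrupt CSV fields, so -x
 * disables them unless -B was explicit, which is an error.
 */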
if (csv_output) {
/* User explicitly passed -B? */
if (big_num_opt == 1) {
fprintf(stderr, "-B option not supported with -x\n");
parse_options_usage(stat_usage, stat_options, "B", 1);
parse_options_usage(NULL, stat_options, "x", 1);
goto out;
} else /* Nope, so disable big number formatting */
big_num = false;
} else if (big_num_opt == 0) /* User passed --no-big-num */
big_num = false;
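/*
 * Default to system-wide (-a) when no target was specified and
 * either there is no workload or all requested events are
 * system-wide (e.g. uncore) ones.
 */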
setup_system_wide(argc);
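/* -r 0 means: re-run the workload forever, printing counters after each run. */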
if (run_count < 0) {
pr_err("Run count must be a positive number\n");
parse_options_usage(stat_usage, stat_options, "r", 1);
goto out;
} else if (run_count == 0) {
forever = true;
run_count = 1;
}
if ((stat_config.aggr_mode == AGGR_THREAD) &&
!target__has_task(&target)) {
if (!target.system_wide || target.cpu_list) {
fprintf(stderr, "The --per-thread option is only "
"available when monitoring via -p -t -a "
"options or only --per-thread.\n");
parse_options_usage(NULL, stat_options, "p", 1);
parse_options_usage(NULL, stat_options, "t", 1);
goto out;
}
}
/*
* no_aggr, cgroup are for system-wide only
 * --per-thread is aggregated per thread, we don't mix it with cpu mode
*/
if (((stat_config.aggr_mode != AGGR_GLOBAL &&
stat_config.aggr_mode != AGGR_THREAD) || nr_cgroups) &&
!target__has_cpu(&target)) {
fprintf(stderr, "both cgroup and no-aggregation "
"modes only available in system-wide mode\n");
parse_options_usage(stat_usage, stat_options, "G", 1);
parse_options_usage(NULL, stat_options, "A", 1);
parse_options_usage(NULL, stat_options, "a", 1);
goto out;
}
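/*
 * Append the default counter set; each -d level appends more
 * detailed events (L1/LLC data cache, then dTLB/iTLB, then
 * prefetch events).
 */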
if (add_default_attributes())
goto out;
target__validate(&target);
if ((stat_config.aggr_mode == AGGR_THREAD) && (target.system_wide))
target.per_thread = true;
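/*
 * Resolve the target into CPU and thread maps. Failure usually
 * means a stale or bad -p/-t pid list or an unparsable -C cpu
 * list, so point the user at the relevant options.
 */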
if (perf_evlist__create_maps(evsel_list, &target) < 0) {
if (target__has_task(&target)) {
pr_err("Problems finding threads of monitor\n");
parse_options_usage(stat_usage, stat_options, "p", 1);
parse_options_usage(NULL, stat_options, "t", 1);
} else if (target__has_cpu(&target)) {
perror("failed to parse CPUs map");
parse_options_usage(stat_usage, stat_options, "C", 1);
parse_options_usage(NULL, stat_options, "a", 1);
}
goto out;
}
/*
* Initialize thread_map with comm names,
 * so we can print them in the output.
*/
if (stat_config.aggr_mode == AGGR_THREAD) {
thread_map__read_comms(evsel_list->threads);
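/*
 * For system-wide --per-thread, keep a runtime stat entry per
 * thread so the derived metrics are not mixed across threads.
 */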
if (target.system_wide) {
if (runtime_stat_new(&stat_config,
thread_map__nr(evsel_list->threads))) {
goto out;
}
}
}
if (interval && interval < 100) {
if (interval < 10) {
pr_err("print interval must be >= 10ms\n");
parse_options_usage(stat_usage, stat_options, "I", 1);
goto out;
} else
pr_warning("print interval < 100ms. "
"The overhead percentage could be high in some cases. "
"Please proceed with caution.\n");
}
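/*
 * --interval-count is only meaningful on top of -I: it limits how
 * many interval prints are emitted, e.g.:
 *
 *   $ perf stat -I 1000 --interval-count 2 -e cycles -a
 */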
if (stat_config.times && interval)
interval_count = true;
else if (stat_config.times && !interval) {
pr_err("interval-count option should be used together with "
"interval-print.\n");
parse_options_usage(stat_usage, stat_options, "interval-count", 0);
parse_options_usage(stat_usage, stat_options, "I", 1);
goto out;
}
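/*
 * --timeout <ms> stops the workload after the given time and then
 * prints the counts. It shares the 10ms floor and the sub-100ms
 * overhead warning with -I, but cannot be combined with it.
 */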
if (timeout && timeout < 100) {
if (timeout < 10) {
pr_err("timeout must be >= 10ms.\n");
parse_options_usage(stat_usage, stat_options, "timeout", 0);
goto out;
} else
pr_warning("timeout < 100ms. "
"The overhead percentage could be high in some cases. "
"Please proceed with caution.\n");
}
if (timeout && interval) {
pr_err("timeout option is not supported with interval-print.\n");
parse_options_usage(stat_usage, stat_options, "timeout", 0);
parse_options_usage(stat_usage, stat_options, "I", 1);
goto out;
}
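/*
 * Allocate the per-event counter storage; in interval mode this
 * also allocates the buffers needed to compute per-interval
 * deltas from the raw counts.
 */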
if (perf_evlist__alloc_stats(evsel_list, interval))
goto out;
if (perf_stat_init_aggr_mode())
goto out;
/*
 * We don't want to block the signals - that would cause
* child tasks to inherit that and Ctrl-C would not work.
* What we want is for Ctrl-C to work in the exec()-ed
* task, but being ignored by perf stat itself:
*/
atexit(sig_atexit);
if (!forever)
signal(SIGINT, skip_signal);
2013-01-29 11:47:44 +00:00
signal(SIGCHLD, skip_signal);
signal(SIGALRM, skip_signal);
signal(SIGABRT, skip_signal);
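/*
 * Main measurement loop: one run by default, run_count runs with
 * --repeat N, or endless runs (printing and resetting the stats
 * after each one) with -r 0.
 */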
status = 0;
for (run_idx = 0; forever || run_idx < run_count; run_idx++) {
if (run_count != 1 && verbose > 0)
fprintf(output, "[ perf stat: executing run #%d ... ]\n",
run_idx + 1);
status = run_perf_stat(argc, argv);
if (forever && status != -1) {
print_counters(NULL, argc, argv);
perf_stat__reset_stats();
}
}
if (!forever && status != -1 && !interval)
print_counters(NULL, argc, argv);
if (STAT_RECORD) {
/*
* We synthesize the kernel mmap record just so that older tools
* don't emit warnings about not being able to resolve symbols
 * due to /proc/sys/kernel/kptr_restrict settings and instead provide
* a saner message about no samples being in the perf.data file.
*
* This also serves to suppress a warning about f_header.data.size == 0
* in header.c at the moment 'perf stat record' gets introduced, which
* is not really needed once we start adding the stat specific PERF_RECORD_
* records, but the need to suppress the kptr_restrict messages in older
 * tools remains. -acme
*/
int fd = perf_data__fd(&perf_stat.data);
int err = perf_event__synthesize_kernel_mmap((void *)&perf_stat,
process_synthesized_event,
&perf_stat.session->machines.host);
if (err) {
pr_warning("Couldn't synthesize the kernel mmap record, harmless, "
"older tools may produce warnings about this file\n.");
}
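/*
 * In interval mode every interval already wrote its own round
 * event; otherwise emit a single FINAL round covering the whole
 * wall-clock run.
 */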
if (!interval) {
if (WRITE_STAT_ROUND_EVENT(walltime_nsecs_stats.max, FINAL))
pr_err("failed to write stat round event\n");
}
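/*
 * For a regular file (not a pipe) the header can be rewritten now
 * that the real data size is known.
 */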
if (!perf_stat.data.is_pipe) {
perf_stat.session->header.data_size += perf_stat.bytes_written;
perf_session__write_header(perf_stat.session, evsel_list, fd, true);
}
perf_session__delete(perf_stat.session);
}
perf_stat__exit_aggr_mode();
perf_evlist__free_stats(evsel_list);
out:
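/* If --smi-cost enabled freeze_on_smi earlier, switch it back off. */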
if (smi_cost && smi_reset)
sysfs__write_int(FREEZE_ON_SMI_PATH, 0);
perf_evlist__delete(evsel_list);
runtime_stat_delete(&stat_config);
return status;
}