mirror of
https://github.com/torvalds/linux.git
synced 2024-11-10 22:21:40 +00:00
Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf update from Ingo Molnar:
 "Lots of changes in this cycle as well, with hundreds of commits from over 30 contributors. Most of the activity was on the tooling side.

  Higher level changes:

   - New 'perf kvm' analysis tool, from Xiao Guangrong.
   - New 'perf trace' system-wide tracing tool
   - uprobes fixes + cleanups from Oleg Nesterov.
   - Lots of patches to make perf build on Android out of the box, from Irina Tirdea
   - Extend the ftrace function tracing utility to be more dynamic for its users. It allows for data passing to the callback functions, as well as reading regs as if a breakpoint were to trigger at function entry. The main goal of this patch series was to allow kprobes to use ftrace as an optimized probe point when a probe is placed on an ftrace nop. With lots of help from Masami Hiramatsu, and going through lots of iterations, we finally came up with a good solution.
   - Add cpumask for uncore pmu, use it in 'stat', from Yan, Zheng.
   - Various tracing updates from Steve Rostedt
   - Clean up and improve 'perf sched' performance by eliminating lots of needless calls to libtraceevent.
   - Event group parsing support, from Jiri Olsa
   - UI/gtk refactorings and improvements from Namhyung Kim
   - Add support for non-tracepoint events in perf script python, from Feng Tang
   - Add --symbols to 'script', similar to the one in 'report', from Feng Tang.

  Infrastructure enhancements and fixes:

   - Convert the trace builtins to use the growing evsel/evlist tracepoint infrastructure, removing several open-coded constructs like switch-like series of strcmp calls to dispatch events, etc. Basically what had already been showcased in 'perf sched'.
   - Add evsel constructor for tracepoints, that uses libtraceevent just to parse the /format events file, use it in a new 'perf test' to make sure the libtraceevent format parsing regressions can be more readily caught.
   - Some strange errors were happening in some builds, but not on the next, reported by several people; the problem was that some parser-related files, generated during the build, didn't have proper make deps, fix from Eric Sandeen.
   - Introduce struct and cache information about the environment where a perf.data file was captured, from Namhyung Kim.
   - Fix handling of unresolved samples when --symbols is used in 'report', from Feng Tang.
   - Add union member access support to 'probe', from Hyeoncheol Lee.
   - Fixups to die() removal, from Namhyung Kim.
   - Render fixes for the TUI, from Namhyung Kim.
   - Don't enable annotation in non symbolic view, from Namhyung Kim.
   - Fix pipe mode in 'report', from Namhyung Kim.
   - Move related stats code from stat to util/, will be used by the 'stat' kvm tool, from Xiao Guangrong.
   - Remove die()/exit() calls from several tools.
   - Resolve vdso callchains, from Jiri Olsa
   - Don't pass const char pointers to basename, so that we can unconditionally use libgen.h and thus avoid ifdef BIONIC lines, from David Ahern
   - Refactor hist formatting so that it can be reused with the GTK browser, from Namhyung Kim
   - Fix build for another rbtree.c change, from Adrian Hunter.
   - Make 'perf diff' command work with evsel hists, from Jiri Olsa.
   - Use the only field_sep var that is set up: symbol_conf.field_sep, fix from Jiri Olsa.
   - .gitignore compiled python binaries, from Namhyung Kim.
   - Get rid of die() in more libtraceevent places, from Namhyung Kim.
   - Rename libtraceevent 'private' struct member to 'priv' so that it works in C++, from Steven Rostedt
   - Remove lots of exit()/die() calls from tools so that the main perf exit routine can take place, from David Ahern
   - Fix x86 build on x86-64, from David Ahern.
   - {int,str,rb}list fixes from Suzuki K Poulose
   - perf.data header fixes from Namhyung Kim
   - Allow user to indicate objdump path, needed in cross environments, from Maciek Borzecki
   - Fix hardware cache event name generation, fix from Jiri Olsa
   - Add round trip test for sw, hw and cache event names, catching the problem Jiri fixed; after Jiri's patch, the test passes successfully.
   - Clean target should do clean for lib/traceevent too, fix from David Ahern
   - Check the right variable for allocation failure, fix from Namhyung Kim
   - Set up evsel->tp_format regardless of evsel->name being set already, fix from Namhyung Kim
   - Oprofile fixes from Robert Richter.
   - Remove perf_event_attr needless version inflation, from Jiri Olsa
   - Introduce libtraceevent strerror-like error reporting facility, from Namhyung Kim
   - Add pmu mappings to perf.data header and use event names from cmd line, from Robert Richter
   - Fix include order for bison/flex-generated C files, from Ben Hutchings
   - Build fixes and documentation corrections from David Ahern
   - Assorted cleanups from Robert Richter
   - Let O= makes handle relative paths, from Steven Rostedt
   - perf script python fixes, from Feng Tang.
   - Initial bash completion support, from Frederic Weisbecker
   - Allow building without libelf, from Namhyung Kim.
   - Support DWARF CFI based unwind to have callchains when %bp based unwinding is not possible, from Jiri Olsa.
   - Symbol resolution fixes; while fixing support for PPC64 files with an .opd ELF section was the end goal, several fixes for code that handles all architectures and cleanups are included, from Cody Schafer.
   - Assorted fixes for Documentation and build in 32 bit, from Robert Richter
   - Cache the libtraceevent event_format associated to each evsel early, so that we avoid relookups, i.e. calling pevent_find_event repeatedly when processing tracepoint events.

     [ This is to reduce the surface contact with libtraceevent and make clear what the perf tools need from that lib: so far, parsing the common and per-event fields. ]

   - Don't stop the build if the audit libraries are not installed, fix from Namhyung Kim.
   - Fix bfd.h/libbfd detection with recent binutils, from Markus Trippelsdorf.
   - Improve warning message when libunwind devel packages not present, from Jiri Olsa"

* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (282 commits)
  perf trace: Add aliases for some syscalls
  perf probe: Print an enum type variable in "enum variable-name" format when showing accessible variables
  perf tools: Check libaudit availability for perf-trace builtin
  perf hists: Add missing period_* fields when collapsing a hist entry
  perf trace: New tool
  perf evsel: Export the event_format constructor
  perf evsel: Introduce rawptr() method
  perf tools: Use perf_evsel__newtp in the event parser
  perf evsel: The tracepoint constructor should store sys:name
  perf evlist: Introduce set_filter() method
  perf evlist: Renane set_filters method to apply_filters
  perf test: Add test to check we correctly parse and match syscall open parms
  perf evsel: Handle endianity in intval method
  perf evsel: Know if byte swap is needed
  perf tools: Allow handling a NULL cpu_map as meaning "all cpus"
  perf evsel: Improve tracepoint constructor setup
  tools lib traceevent: Fix error path on pevent_parse_event
  perf test: Fix build failure
  trace: Move trace event enable from fs_initcall to core_initcall
  tracing: Add an option for disabling markers
  ...
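The ftrace extension described above changed the function-tracing callback so that it also receives the registering ftrace_ops and, when requested, a pt_regs snapshot taken as if a breakpoint fired at function entry. As a rough, hedged sketch of how a kernel-side user of this era's API would opt in (kernel module code, not runnable standalone; the callback name and module boilerplate are illustrative, but register_ftrace_ops() and FTRACE_OPS_FL_SAVE_REGS follow the interface this pull introduces):

```
#include <linux/ftrace.h>
#include <linux/module.h>

/* With FTRACE_OPS_FL_SAVE_REGS set, @regs is populated at function
 * entry, as if a breakpoint had triggered there.  @op lets the
 * callback find the ftrace_ops it was registered with. */
static void notrace my_trace_func(unsigned long ip, unsigned long parent_ip,
				  struct ftrace_ops *op, struct pt_regs *regs)
{
	/* inspect ip/parent_ip/regs here */
}

static struct ftrace_ops my_ops = {
	.func	= my_trace_func,
	.flags	= FTRACE_OPS_FL_SAVE_REGS,
};

static int __init my_init(void)
{
	return register_ftrace_ops(&my_ops);
}

static void __exit my_exit(void)
{
	unregister_ftrace_ops(&my_ops);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");
```

This is the mechanism kprobes uses below (ARCH_SUPPORTS_KPROBES_ON_FTRACE) to turn a probe on an ftrace nop into an optimized probe point.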
This commit is contained in:
commit 7e92daaefa
 Makefile | 6 

@@ -609,7 +609,11 @@ KBUILD_CFLAGS += $(call cc-option, -femit-struct-debug-baseonly)
 endif
 
 ifdef CONFIG_FUNCTION_TRACER
-KBUILD_CFLAGS	+= -pg
+ifdef CONFIG_HAVE_FENTRY
+CC_USING_FENTRY	:= $(call cc-option, -mfentry -DCC_USING_FENTRY)
+endif
+KBUILD_CFLAGS	+= -pg $(CC_USING_FENTRY)
+KBUILD_AFLAGS	+= $(CC_USING_FENTRY)
 ifdef CONFIG_DYNAMIC_FTRACE
 ifdef CONFIG_HAVE_C_RECORDMCOUNT
 BUILD_C_RECORDMCOUNT := y
 arch/Kconfig | 13 

@@ -222,6 +222,19 @@ config HAVE_PERF_EVENTS_NMI
 	  subsystem.  Also has support for calculating CPU cycle events
 	  to determine how many clock cycles in a given period.
 
+config HAVE_PERF_REGS
+	bool
+	help
+	  Support selective register dumps for perf events. This includes
+	  bit-mapping of each registers and a unique architecture id.
+
+config HAVE_PERF_USER_STACK_DUMP
+	bool
+	help
+	  Support user stack dumps for perf event samples. This needs
+	  access to the user stack pointer which is not unified across
+	  architectures.
+
 config HAVE_ARCH_JUMP_LABEL
 	bool
 
 arch/x86/Kconfig

@@ -36,6 +36,7 @@ config X86
 	select HAVE_KRETPROBES
 	select HAVE_OPTPROBES
 	select HAVE_FTRACE_MCOUNT_RECORD
+	select HAVE_FENTRY if X86_64
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_FUNCTION_TRACER
@@ -60,6 +61,8 @@ config X86
 	select HAVE_MIXED_BREAKPOINTS_REGS
 	select PERF_EVENTS
 	select HAVE_PERF_EVENTS_NMI
+	select HAVE_PERF_REGS
+	select HAVE_PERF_USER_STACK_DUMP
 	select ANON_INODES
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB && !M386
 	select HAVE_CMPXCHG_LOCAL if !M386
 arch/x86/include/asm/ftrace.h

@@ -3,38 +3,54 @@
 
 #ifdef __ASSEMBLY__
 
-	.macro MCOUNT_SAVE_FRAME
-	/* taken from glibc */
-	subq $0x38, %rsp
-	movq %rax, (%rsp)
-	movq %rcx, 8(%rsp)
-	movq %rdx, 16(%rsp)
-	movq %rsi, 24(%rsp)
-	movq %rdi, 32(%rsp)
-	movq %r8, 40(%rsp)
-	movq %r9, 48(%rsp)
+	/* skip is set if the stack was already partially adjusted */
+	.macro MCOUNT_SAVE_FRAME skip=0
+	/*
+	 * We add enough stack to save all regs.
+	 */
+	subq $(SS+8-\skip), %rsp
+	movq %rax, RAX(%rsp)
+	movq %rcx, RCX(%rsp)
+	movq %rdx, RDX(%rsp)
+	movq %rsi, RSI(%rsp)
+	movq %rdi, RDI(%rsp)
+	movq %r8, R8(%rsp)
+	movq %r9, R9(%rsp)
+	/* Move RIP to its proper location */
+	movq SS+8(%rsp), %rdx
+	movq %rdx, RIP(%rsp)
 	.endm
 
-	.macro MCOUNT_RESTORE_FRAME
-	movq 48(%rsp), %r9
-	movq 40(%rsp), %r8
-	movq 32(%rsp), %rdi
-	movq 24(%rsp), %rsi
-	movq 16(%rsp), %rdx
-	movq 8(%rsp), %rcx
-	movq (%rsp), %rax
-	addq $0x38, %rsp
+	.macro MCOUNT_RESTORE_FRAME skip=0
+	movq R9(%rsp), %r9
+	movq R8(%rsp), %r8
+	movq RDI(%rsp), %rdi
+	movq RSI(%rsp), %rsi
+	movq RDX(%rsp), %rdx
+	movq RCX(%rsp), %rcx
+	movq RAX(%rsp), %rax
+	addq $(SS+8-\skip), %rsp
 	.endm
 
 #endif
 
 #ifdef CONFIG_FUNCTION_TRACER
-#define MCOUNT_ADDR		((long)(mcount))
+#ifdef CC_USING_FENTRY
+# define MCOUNT_ADDR		((long)(__fentry__))
+#else
+# define MCOUNT_ADDR		((long)(mcount))
+#endif
 #define MCOUNT_INSN_SIZE	5 /* sizeof mcount call */
 
 #ifdef CONFIG_DYNAMIC_FTRACE
 #define ARCH_SUPPORTS_FTRACE_OPS 1
+#define ARCH_SUPPORTS_FTRACE_SAVE_REGS
 #endif
 
 #ifndef __ASSEMBLY__
 extern void mcount(void);
 extern atomic_t modifying_ftrace_code;
+extern void __fentry__(void);
 
 static inline unsigned long ftrace_call_adjust(unsigned long addr)
 {
 arch/x86/include/asm/kprobes.h

@@ -27,6 +27,7 @@
 #include <asm/insn.h>
 
 #define  __ARCH_WANT_KPROBES_INSN_SLOT
+#define  ARCH_SUPPORTS_KPROBES_ON_FTRACE
 
 struct pt_regs;
 struct kprobe;
 arch/x86/include/asm/kvm.h

@@ -9,6 +9,22 @@
 #include <linux/types.h>
 #include <linux/ioctl.h>
 
+#define DE_VECTOR 0
+#define DB_VECTOR 1
+#define BP_VECTOR 3
+#define OF_VECTOR 4
+#define BR_VECTOR 5
+#define UD_VECTOR 6
+#define NM_VECTOR 7
+#define DF_VECTOR 8
+#define TS_VECTOR 10
+#define NP_VECTOR 11
+#define SS_VECTOR 12
+#define GP_VECTOR 13
+#define PF_VECTOR 14
+#define MF_VECTOR 16
+#define MC_VECTOR 18
+
 /* Select x86 specific features in <linux/kvm.h> */
 #define __KVM_HAVE_PIT
 #define __KVM_HAVE_IOAPIC
 arch/x86/include/asm/kvm_host.h

@@ -75,22 +75,6 @@
 #define KVM_HPAGE_MASK(x)	(~(KVM_HPAGE_SIZE(x) - 1))
 #define KVM_PAGES_PER_HPAGE(x)	(KVM_HPAGE_SIZE(x) / PAGE_SIZE)
 
-#define DE_VECTOR 0
-#define DB_VECTOR 1
-#define BP_VECTOR 3
-#define OF_VECTOR 4
-#define BR_VECTOR 5
-#define UD_VECTOR 6
-#define NM_VECTOR 7
-#define DF_VECTOR 8
-#define TS_VECTOR 10
-#define NP_VECTOR 11
-#define SS_VECTOR 12
-#define GP_VECTOR 13
-#define PF_VECTOR 14
-#define MF_VECTOR 16
-#define MC_VECTOR 18
-
 #define SELECTOR_TI_MASK (1 << 2)
 #define SELECTOR_RPL_MASK 0x03
 
 arch/x86/include/asm/perf_event.h

@@ -262,4 +262,6 @@ static inline void perf_check_microcode(void) { }
 static inline void amd_pmu_disable_virt(void) { }
 #endif
 
+#define arch_perf_out_copy_user copy_from_user_nmi
+
 #endif /* _ASM_X86_PERF_EVENT_H */
 arch/x86/include/asm/perf_regs.h | 33 (new file)

@@ -0,0 +1,33 @@
+#ifndef _ASM_X86_PERF_REGS_H
+#define _ASM_X86_PERF_REGS_H
+
+enum perf_event_x86_regs {
+	PERF_REG_X86_AX,
+	PERF_REG_X86_BX,
+	PERF_REG_X86_CX,
+	PERF_REG_X86_DX,
+	PERF_REG_X86_SI,
+	PERF_REG_X86_DI,
+	PERF_REG_X86_BP,
+	PERF_REG_X86_SP,
+	PERF_REG_X86_IP,
+	PERF_REG_X86_FLAGS,
+	PERF_REG_X86_CS,
+	PERF_REG_X86_SS,
+	PERF_REG_X86_DS,
+	PERF_REG_X86_ES,
+	PERF_REG_X86_FS,
+	PERF_REG_X86_GS,
+	PERF_REG_X86_R8,
+	PERF_REG_X86_R9,
+	PERF_REG_X86_R10,
+	PERF_REG_X86_R11,
+	PERF_REG_X86_R12,
+	PERF_REG_X86_R13,
+	PERF_REG_X86_R14,
+	PERF_REG_X86_R15,
+
+	PERF_REG_X86_32_MAX = PERF_REG_X86_GS + 1,
+	PERF_REG_X86_64_MAX = PERF_REG_X86_R15 + 1,
+};
+#endif /* _ASM_X86_PERF_REGS_H */
 arch/x86/include/asm/processor.h

@@ -759,6 +759,8 @@ static inline void update_debugctlmsr(unsigned long debugctlmsr)
 	wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctlmsr);
 }
 
+extern void set_task_blockstep(struct task_struct *task, bool on);
+
 /*
  * from system description table in BIOS. Mostly for MCA use, but
  * others may find it useful:
 arch/x86/include/asm/svm.h

@@ -1,6 +1,135 @@
 #ifndef __SVM_H
 #define __SVM_H
 
+#define SVM_EXIT_READ_CR0 0x000
+#define SVM_EXIT_READ_CR3 0x003
+#define SVM_EXIT_READ_CR4 0x004
+#define SVM_EXIT_READ_CR8 0x008
+#define SVM_EXIT_WRITE_CR0 0x010
+#define SVM_EXIT_WRITE_CR3 0x013
+#define SVM_EXIT_WRITE_CR4 0x014
+#define SVM_EXIT_WRITE_CR8 0x018
+#define SVM_EXIT_READ_DR0 0x020
+#define SVM_EXIT_READ_DR1 0x021
+#define SVM_EXIT_READ_DR2 0x022
+#define SVM_EXIT_READ_DR3 0x023
+#define SVM_EXIT_READ_DR4 0x024
+#define SVM_EXIT_READ_DR5 0x025
+#define SVM_EXIT_READ_DR6 0x026
+#define SVM_EXIT_READ_DR7 0x027
+#define SVM_EXIT_WRITE_DR0 0x030
+#define SVM_EXIT_WRITE_DR1 0x031
+#define SVM_EXIT_WRITE_DR2 0x032
+#define SVM_EXIT_WRITE_DR3 0x033
+#define SVM_EXIT_WRITE_DR4 0x034
+#define SVM_EXIT_WRITE_DR5 0x035
+#define SVM_EXIT_WRITE_DR6 0x036
+#define SVM_EXIT_WRITE_DR7 0x037
+#define SVM_EXIT_EXCP_BASE 0x040
+#define SVM_EXIT_INTR 0x060
+#define SVM_EXIT_NMI 0x061
+#define SVM_EXIT_SMI 0x062
+#define SVM_EXIT_INIT 0x063
+#define SVM_EXIT_VINTR 0x064
+#define SVM_EXIT_CR0_SEL_WRITE 0x065
+#define SVM_EXIT_IDTR_READ 0x066
+#define SVM_EXIT_GDTR_READ 0x067
+#define SVM_EXIT_LDTR_READ 0x068
+#define SVM_EXIT_TR_READ 0x069
+#define SVM_EXIT_IDTR_WRITE 0x06a
+#define SVM_EXIT_GDTR_WRITE 0x06b
+#define SVM_EXIT_LDTR_WRITE 0x06c
+#define SVM_EXIT_TR_WRITE 0x06d
+#define SVM_EXIT_RDTSC 0x06e
+#define SVM_EXIT_RDPMC 0x06f
+#define SVM_EXIT_PUSHF 0x070
+#define SVM_EXIT_POPF 0x071
+#define SVM_EXIT_CPUID 0x072
+#define SVM_EXIT_RSM 0x073
+#define SVM_EXIT_IRET 0x074
+#define SVM_EXIT_SWINT 0x075
+#define SVM_EXIT_INVD 0x076
+#define SVM_EXIT_PAUSE 0x077
+#define SVM_EXIT_HLT 0x078
+#define SVM_EXIT_INVLPG 0x079
+#define SVM_EXIT_INVLPGA 0x07a
+#define SVM_EXIT_IOIO 0x07b
+#define SVM_EXIT_MSR 0x07c
+#define SVM_EXIT_TASK_SWITCH 0x07d
+#define SVM_EXIT_FERR_FREEZE 0x07e
+#define SVM_EXIT_SHUTDOWN 0x07f
+#define SVM_EXIT_VMRUN 0x080
+#define SVM_EXIT_VMMCALL 0x081
+#define SVM_EXIT_VMLOAD 0x082
+#define SVM_EXIT_VMSAVE 0x083
+#define SVM_EXIT_STGI 0x084
+#define SVM_EXIT_CLGI 0x085
+#define SVM_EXIT_SKINIT 0x086
+#define SVM_EXIT_RDTSCP 0x087
+#define SVM_EXIT_ICEBP 0x088
+#define SVM_EXIT_WBINVD 0x089
+#define SVM_EXIT_MONITOR 0x08a
+#define SVM_EXIT_MWAIT 0x08b
+#define SVM_EXIT_MWAIT_COND 0x08c
+#define SVM_EXIT_XSETBV 0x08d
+#define SVM_EXIT_NPF 0x400
+
+#define SVM_EXIT_ERR -1
+
+#define SVM_EXIT_REASONS \
+	{ SVM_EXIT_READ_CR0, "read_cr0" }, \
+	{ SVM_EXIT_READ_CR3, "read_cr3" }, \
+	{ SVM_EXIT_READ_CR4, "read_cr4" }, \
+	{ SVM_EXIT_READ_CR8, "read_cr8" }, \
+	{ SVM_EXIT_WRITE_CR0, "write_cr0" }, \
+	{ SVM_EXIT_WRITE_CR3, "write_cr3" }, \
+	{ SVM_EXIT_WRITE_CR4, "write_cr4" }, \
+	{ SVM_EXIT_WRITE_CR8, "write_cr8" }, \
+	{ SVM_EXIT_READ_DR0, "read_dr0" }, \
+	{ SVM_EXIT_READ_DR1, "read_dr1" }, \
+	{ SVM_EXIT_READ_DR2, "read_dr2" }, \
+	{ SVM_EXIT_READ_DR3, "read_dr3" }, \
+	{ SVM_EXIT_WRITE_DR0, "write_dr0" }, \
+	{ SVM_EXIT_WRITE_DR1, "write_dr1" }, \
+	{ SVM_EXIT_WRITE_DR2, "write_dr2" }, \
+	{ SVM_EXIT_WRITE_DR3, "write_dr3" }, \
+	{ SVM_EXIT_WRITE_DR5, "write_dr5" }, \
+	{ SVM_EXIT_WRITE_DR7, "write_dr7" }, \
+	{ SVM_EXIT_EXCP_BASE + DB_VECTOR, "DB excp" }, \
+	{ SVM_EXIT_EXCP_BASE + BP_VECTOR, "BP excp" }, \
+	{ SVM_EXIT_EXCP_BASE + UD_VECTOR, "UD excp" }, \
+	{ SVM_EXIT_EXCP_BASE + PF_VECTOR, "PF excp" }, \
+	{ SVM_EXIT_EXCP_BASE + NM_VECTOR, "NM excp" }, \
+	{ SVM_EXIT_EXCP_BASE + MC_VECTOR, "MC excp" }, \
+	{ SVM_EXIT_INTR, "interrupt" }, \
+	{ SVM_EXIT_NMI, "nmi" }, \
+	{ SVM_EXIT_SMI, "smi" }, \
+	{ SVM_EXIT_INIT, "init" }, \
+	{ SVM_EXIT_VINTR, "vintr" }, \
+	{ SVM_EXIT_CPUID, "cpuid" }, \
+	{ SVM_EXIT_INVD, "invd" }, \
+	{ SVM_EXIT_HLT, "hlt" }, \
+	{ SVM_EXIT_INVLPG, "invlpg" }, \
+	{ SVM_EXIT_INVLPGA, "invlpga" }, \
+	{ SVM_EXIT_IOIO, "io" }, \
+	{ SVM_EXIT_MSR, "msr" }, \
+	{ SVM_EXIT_TASK_SWITCH, "task_switch" }, \
+	{ SVM_EXIT_SHUTDOWN, "shutdown" }, \
+	{ SVM_EXIT_VMRUN, "vmrun" }, \
+	{ SVM_EXIT_VMMCALL, "hypercall" }, \
+	{ SVM_EXIT_VMLOAD, "vmload" }, \
+	{ SVM_EXIT_VMSAVE, "vmsave" }, \
+	{ SVM_EXIT_STGI, "stgi" }, \
+	{ SVM_EXIT_CLGI, "clgi" }, \
+	{ SVM_EXIT_SKINIT, "skinit" }, \
+	{ SVM_EXIT_WBINVD, "wbinvd" }, \
+	{ SVM_EXIT_MONITOR, "monitor" }, \
+	{ SVM_EXIT_MWAIT, "mwait" }, \
+	{ SVM_EXIT_XSETBV, "xsetbv" }, \
+	{ SVM_EXIT_NPF, "npf" }
+
+#ifdef __KERNEL__
+
 enum {
 	INTERCEPT_INTR,
 	INTERCEPT_NMI,
@@ -264,81 +393,6 @@ struct __attribute__ ((__packed__)) vmcb {
 
 #define SVM_EXITINFO_REG_MASK 0x0F
 
-#define SVM_EXIT_READ_CR0 0x000
-#define SVM_EXIT_READ_CR3 0x003
-#define SVM_EXIT_READ_CR4 0x004
-#define SVM_EXIT_READ_CR8 0x008
-#define SVM_EXIT_WRITE_CR0 0x010
-#define SVM_EXIT_WRITE_CR3 0x013
-#define SVM_EXIT_WRITE_CR4 0x014
-#define SVM_EXIT_WRITE_CR8 0x018
-#define SVM_EXIT_READ_DR0 0x020
-#define SVM_EXIT_READ_DR1 0x021
-#define SVM_EXIT_READ_DR2 0x022
-#define SVM_EXIT_READ_DR3 0x023
-#define SVM_EXIT_READ_DR4 0x024
-#define SVM_EXIT_READ_DR5 0x025
-#define SVM_EXIT_READ_DR6 0x026
-#define SVM_EXIT_READ_DR7 0x027
-#define SVM_EXIT_WRITE_DR0 0x030
-#define SVM_EXIT_WRITE_DR1 0x031
-#define SVM_EXIT_WRITE_DR2 0x032
-#define SVM_EXIT_WRITE_DR3 0x033
-#define SVM_EXIT_WRITE_DR4 0x034
-#define SVM_EXIT_WRITE_DR5 0x035
-#define SVM_EXIT_WRITE_DR6 0x036
-#define SVM_EXIT_WRITE_DR7 0x037
-#define SVM_EXIT_EXCP_BASE 0x040
-#define SVM_EXIT_INTR 0x060
-#define SVM_EXIT_NMI 0x061
-#define SVM_EXIT_SMI 0x062
-#define SVM_EXIT_INIT 0x063
-#define SVM_EXIT_VINTR 0x064
-#define SVM_EXIT_CR0_SEL_WRITE 0x065
-#define SVM_EXIT_IDTR_READ 0x066
-#define SVM_EXIT_GDTR_READ 0x067
-#define SVM_EXIT_LDTR_READ 0x068
-#define SVM_EXIT_TR_READ 0x069
-#define SVM_EXIT_IDTR_WRITE 0x06a
-#define SVM_EXIT_GDTR_WRITE 0x06b
-#define SVM_EXIT_LDTR_WRITE 0x06c
-#define SVM_EXIT_TR_WRITE 0x06d
-#define SVM_EXIT_RDTSC 0x06e
-#define SVM_EXIT_RDPMC 0x06f
-#define SVM_EXIT_PUSHF 0x070
-#define SVM_EXIT_POPF 0x071
-#define SVM_EXIT_CPUID 0x072
-#define SVM_EXIT_RSM 0x073
-#define SVM_EXIT_IRET 0x074
-#define SVM_EXIT_SWINT 0x075
-#define SVM_EXIT_INVD 0x076
-#define SVM_EXIT_PAUSE 0x077
-#define SVM_EXIT_HLT 0x078
-#define SVM_EXIT_INVLPG 0x079
-#define SVM_EXIT_INVLPGA 0x07a
-#define SVM_EXIT_IOIO 0x07b
-#define SVM_EXIT_MSR 0x07c
-#define SVM_EXIT_TASK_SWITCH 0x07d
-#define SVM_EXIT_FERR_FREEZE 0x07e
-#define SVM_EXIT_SHUTDOWN 0x07f
-#define SVM_EXIT_VMRUN 0x080
-#define SVM_EXIT_VMMCALL 0x081
-#define SVM_EXIT_VMLOAD 0x082
-#define SVM_EXIT_VMSAVE 0x083
-#define SVM_EXIT_STGI 0x084
-#define SVM_EXIT_CLGI 0x085
-#define SVM_EXIT_SKINIT 0x086
-#define SVM_EXIT_RDTSCP 0x087
-#define SVM_EXIT_ICEBP 0x088
-#define SVM_EXIT_WBINVD 0x089
-#define SVM_EXIT_MONITOR 0x08a
-#define SVM_EXIT_MWAIT 0x08b
-#define SVM_EXIT_MWAIT_COND 0x08c
-#define SVM_EXIT_XSETBV 0x08d
-#define SVM_EXIT_NPF 0x400
-
-#define SVM_EXIT_ERR -1
-
 #define SVM_CR0_SELECTIVE_MASK (X86_CR0_TS | X86_CR0_MP)
 
 #define SVM_VMLOAD ".byte 0x0f, 0x01, 0xda"
@@ -350,3 +404,4 @@ struct __attribute__ ((__packed__)) vmcb {
 
 #endif
 
+#endif
 arch/x86/include/asm/uprobes.h

@@ -42,10 +42,11 @@ struct arch_uprobe {
 };
 
 struct arch_uprobe_task {
-	unsigned long			saved_trap_nr;
 #ifdef CONFIG_X86_64
 	unsigned long			saved_scratch_register;
 #endif
+	unsigned int			saved_trap_nr;
+	unsigned int			saved_tf;
 };
 
 extern int arch_uprobe_analyze_insn(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long addr);
 arch/x86/include/asm/vmx.h

@@ -25,6 +25,88 @@
  *
  */
 
+#define VMX_EXIT_REASONS_FAILED_VMENTRY         0x80000000
+
+#define EXIT_REASON_EXCEPTION_NMI       0
+#define EXIT_REASON_EXTERNAL_INTERRUPT  1
+#define EXIT_REASON_TRIPLE_FAULT        2
+
+#define EXIT_REASON_PENDING_INTERRUPT   7
+#define EXIT_REASON_NMI_WINDOW          8
+#define EXIT_REASON_TASK_SWITCH         9
+#define EXIT_REASON_CPUID               10
+#define EXIT_REASON_HLT                 12
+#define EXIT_REASON_INVD                13
+#define EXIT_REASON_INVLPG              14
+#define EXIT_REASON_RDPMC               15
+#define EXIT_REASON_RDTSC               16
+#define EXIT_REASON_VMCALL              18
+#define EXIT_REASON_VMCLEAR             19
+#define EXIT_REASON_VMLAUNCH            20
+#define EXIT_REASON_VMPTRLD             21
+#define EXIT_REASON_VMPTRST             22
+#define EXIT_REASON_VMREAD              23
+#define EXIT_REASON_VMRESUME            24
+#define EXIT_REASON_VMWRITE             25
+#define EXIT_REASON_VMOFF               26
+#define EXIT_REASON_VMON                27
+#define EXIT_REASON_CR_ACCESS           28
+#define EXIT_REASON_DR_ACCESS           29
+#define EXIT_REASON_IO_INSTRUCTION      30
+#define EXIT_REASON_MSR_READ            31
+#define EXIT_REASON_MSR_WRITE           32
+#define EXIT_REASON_INVALID_STATE       33
+#define EXIT_REASON_MWAIT_INSTRUCTION   36
+#define EXIT_REASON_MONITOR_INSTRUCTION 39
+#define EXIT_REASON_PAUSE_INSTRUCTION   40
+#define EXIT_REASON_MCE_DURING_VMENTRY  41
+#define EXIT_REASON_TPR_BELOW_THRESHOLD 43
+#define EXIT_REASON_APIC_ACCESS         44
+#define EXIT_REASON_EPT_VIOLATION       48
+#define EXIT_REASON_EPT_MISCONFIG       49
+#define EXIT_REASON_WBINVD              54
+#define EXIT_REASON_XSETBV              55
+#define EXIT_REASON_INVPCID             58
+
+#define VMX_EXIT_REASONS \
+	{ EXIT_REASON_EXCEPTION_NMI,         "EXCEPTION_NMI" }, \
+	{ EXIT_REASON_EXTERNAL_INTERRUPT,    "EXTERNAL_INTERRUPT" }, \
+	{ EXIT_REASON_TRIPLE_FAULT,          "TRIPLE_FAULT" }, \
+	{ EXIT_REASON_PENDING_INTERRUPT,     "PENDING_INTERRUPT" }, \
+	{ EXIT_REASON_NMI_WINDOW,            "NMI_WINDOW" }, \
+	{ EXIT_REASON_TASK_SWITCH,           "TASK_SWITCH" }, \
+	{ EXIT_REASON_CPUID,                 "CPUID" }, \
+	{ EXIT_REASON_HLT,                   "HLT" }, \
+	{ EXIT_REASON_INVLPG,                "INVLPG" }, \
+	{ EXIT_REASON_RDPMC,                 "RDPMC" }, \
+	{ EXIT_REASON_RDTSC,                 "RDTSC" }, \
+	{ EXIT_REASON_VMCALL,                "VMCALL" }, \
+	{ EXIT_REASON_VMCLEAR,               "VMCLEAR" }, \
+	{ EXIT_REASON_VMLAUNCH,              "VMLAUNCH" }, \
+	{ EXIT_REASON_VMPTRLD,               "VMPTRLD" }, \
+	{ EXIT_REASON_VMPTRST,               "VMPTRST" }, \
+	{ EXIT_REASON_VMREAD,                "VMREAD" }, \
+	{ EXIT_REASON_VMRESUME,              "VMRESUME" }, \
+	{ EXIT_REASON_VMWRITE,               "VMWRITE" }, \
+	{ EXIT_REASON_VMOFF,                 "VMOFF" }, \
+	{ EXIT_REASON_VMON,                  "VMON" }, \
+	{ EXIT_REASON_CR_ACCESS,             "CR_ACCESS" }, \
+	{ EXIT_REASON_DR_ACCESS,             "DR_ACCESS" }, \
+	{ EXIT_REASON_IO_INSTRUCTION,        "IO_INSTRUCTION" }, \
+	{ EXIT_REASON_MSR_READ,              "MSR_READ" }, \
+	{ EXIT_REASON_MSR_WRITE,             "MSR_WRITE" }, \
+	{ EXIT_REASON_MWAIT_INSTRUCTION,     "MWAIT_INSTRUCTION" }, \
+	{ EXIT_REASON_MONITOR_INSTRUCTION,   "MONITOR_INSTRUCTION" }, \
+	{ EXIT_REASON_PAUSE_INSTRUCTION,     "PAUSE_INSTRUCTION" }, \
+	{ EXIT_REASON_MCE_DURING_VMENTRY,    "MCE_DURING_VMENTRY" }, \
+	{ EXIT_REASON_TPR_BELOW_THRESHOLD,   "TPR_BELOW_THRESHOLD" }, \
+	{ EXIT_REASON_APIC_ACCESS,           "APIC_ACCESS" }, \
+	{ EXIT_REASON_EPT_VIOLATION,         "EPT_VIOLATION" }, \
+	{ EXIT_REASON_EPT_MISCONFIG,         "EPT_MISCONFIG" }, \
+	{ EXIT_REASON_WBINVD,                "WBINVD" }
+
+#ifdef __KERNEL__
+
 #include <linux/types.h>
 
 /*
@@ -241,49 +323,6 @@ enum vmcs_field {
 	HOST_RIP                        = 0x00006c16,
 };
 
-#define VMX_EXIT_REASONS_FAILED_VMENTRY         0x80000000
-
-#define EXIT_REASON_EXCEPTION_NMI       0
-#define EXIT_REASON_EXTERNAL_INTERRUPT  1
-#define EXIT_REASON_TRIPLE_FAULT        2
-
-#define EXIT_REASON_PENDING_INTERRUPT   7
-#define EXIT_REASON_NMI_WINDOW          8
-#define EXIT_REASON_TASK_SWITCH         9
-#define EXIT_REASON_CPUID               10
-#define EXIT_REASON_HLT                 12
-#define EXIT_REASON_INVD                13
-#define EXIT_REASON_INVLPG              14
-#define EXIT_REASON_RDPMC               15
-#define EXIT_REASON_RDTSC               16
-#define EXIT_REASON_VMCALL              18
-#define EXIT_REASON_VMCLEAR             19
-#define EXIT_REASON_VMLAUNCH            20
-#define EXIT_REASON_VMPTRLD             21
-#define EXIT_REASON_VMPTRST             22
-#define EXIT_REASON_VMREAD              23
-#define EXIT_REASON_VMRESUME            24
-#define EXIT_REASON_VMWRITE             25
-#define EXIT_REASON_VMOFF               26
-#define EXIT_REASON_VMON                27
-#define EXIT_REASON_CR_ACCESS           28
-#define EXIT_REASON_DR_ACCESS           29
-#define EXIT_REASON_IO_INSTRUCTION      30
-#define EXIT_REASON_MSR_READ            31
-#define EXIT_REASON_MSR_WRITE           32
-#define EXIT_REASON_INVALID_STATE       33
-#define EXIT_REASON_MWAIT_INSTRUCTION   36
-#define EXIT_REASON_MONITOR_INSTRUCTION 39
-#define EXIT_REASON_PAUSE_INSTRUCTION   40
-#define EXIT_REASON_MCE_DURING_VMENTRY  41
-#define EXIT_REASON_TPR_BELOW_THRESHOLD 43
-#define EXIT_REASON_APIC_ACCESS         44
-#define EXIT_REASON_EPT_VIOLATION       48
-#define EXIT_REASON_EPT_MISCONFIG       49
-#define EXIT_REASON_WBINVD              54
-#define EXIT_REASON_XSETBV              55
-#define EXIT_REASON_INVPCID             58
-
 /*
  * Interruption-information format
  */
@@ -488,3 +527,5 @@ enum vm_instruction_error_number {
 };
 
 #endif
+
+#endif
 arch/x86/kernel/Makefile

@@ -100,6 +100,8 @@ obj-$(CONFIG_SWIOTLB)			+= pci-swiotlb.o
 obj-$(CONFIG_OF)			+= devicetree.o
 obj-$(CONFIG_UPROBES)			+= uprobes.o
 
+obj-$(CONFIG_PERF_EVENTS)		+= perf_regs.o
+
 ###
 # 64 bit specific files
 ifeq ($(CONFIG_X86_64),y)
 arch/x86/kernel/cpu/perf_event_intel_uncore.c

@@ -2347,6 +2347,27 @@ int uncore_pmu_event_init(struct perf_event *event)
 	return ret;
 }
 
+static ssize_t uncore_get_attr_cpumask(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	int n = cpulist_scnprintf(buf, PAGE_SIZE - 2, &uncore_cpu_mask);
+
+	buf[n++] = '\n';
+	buf[n] = '\0';
+	return n;
+}
+
+static DEVICE_ATTR(cpumask, S_IRUGO, uncore_get_attr_cpumask, NULL);
+
+static struct attribute *uncore_pmu_attrs[] = {
+	&dev_attr_cpumask.attr,
+	NULL,
+};
+
+static struct attribute_group uncore_pmu_attr_group = {
+	.attrs = uncore_pmu_attrs,
+};
+
 static int __init uncore_pmu_register(struct intel_uncore_pmu *pmu)
 {
 	int ret;
@@ -2384,8 +2405,8 @@ static void __init uncore_type_exit(struct intel_uncore_type *type)
 		free_percpu(type->pmus[i].box);
 	kfree(type->pmus);
 	type->pmus = NULL;
-	kfree(type->attr_groups[1]);
-	type->attr_groups[1] = NULL;
+	kfree(type->events_group);
+	type->events_group = NULL;
 }
 
 static void __init uncore_types_exit(struct intel_uncore_type **types)
@@ -2437,9 +2458,10 @@ static int __init uncore_type_init(struct intel_uncore_type *type)
 		for (j = 0; j < i; j++)
 			attrs[j] = &type->event_descs[j].attr.attr;
 
-		type->attr_groups[1] = events_group;
+		type->events_group = events_group;
 	}
 
+	type->pmu_group = &uncore_pmu_attr_group;
 	type->pmus = pmus;
 	return 0;
 fail:
 arch/x86/kernel/cpu/perf_event_intel_uncore.h

@@ -369,10 +369,12 @@ struct intel_uncore_type {
 	struct intel_uncore_pmu *pmus;
 	struct intel_uncore_ops *ops;
 	struct uncore_event_desc *event_descs;
-	const struct attribute_group *attr_groups[3];
+	const struct attribute_group *attr_groups[4];
 };
 
-#define format_group attr_groups[0]
+#define pmu_group attr_groups[0]
+#define format_group attr_groups[1]
+#define events_group attr_groups[2]
 
 struct intel_uncore_ops {
 	void (*init_box)(struct intel_uncore_box *);
@@ -1109,17 +1109,21 @@ ENTRY(ftrace_caller)
	pushl %eax
	pushl %ecx
	pushl %edx
	movl 0xc(%esp), %eax
	pushl $0	/* Pass NULL as regs pointer */
	movl 4*4(%esp), %eax
	movl 0x4(%ebp), %edx
	leal function_trace_op, %ecx
	subl $MCOUNT_INSN_SIZE, %eax

.globl ftrace_call
ftrace_call:
	call ftrace_stub

	addl $4, %esp	/* skip NULL pointer */
	popl %edx
	popl %ecx
	popl %eax
ftrace_ret:
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
.globl ftrace_graph_call
ftrace_graph_call:

@@ -1131,6 +1135,71 @@ ftrace_stub:
	ret
END(ftrace_caller)

ENTRY(ftrace_regs_caller)
	pushf	/* push flags before compare (in cs location) */
	cmpl $0, function_trace_stop
	jne ftrace_restore_flags

	/*
	 * i386 does not save SS and ESP when coming from kernel.
	 * Instead, to get sp, &regs->sp is used (see ptrace.h).
	 * Unfortunately, that means eflags must be at the same location
	 * as the current return ip is. We move the return ip into the
	 * ip location, and move flags into the return ip location.
	 */
	pushl 4(%esp)	/* save return ip into ip slot */

	pushl $0	/* Load 0 into orig_ax */
	pushl %gs
	pushl %fs
	pushl %es
	pushl %ds
	pushl %eax
	pushl %ebp
	pushl %edi
	pushl %esi
	pushl %edx
	pushl %ecx
	pushl %ebx

	movl 13*4(%esp), %eax	/* Get the saved flags */
	movl %eax, 14*4(%esp)	/* Move saved flags into regs->flags location */
				/* clobbering return ip */
	movl $__KERNEL_CS, 13*4(%esp)

	movl 12*4(%esp), %eax	/* Load ip (1st parameter) */
	subl $MCOUNT_INSN_SIZE, %eax	/* Adjust ip */
	movl 0x4(%ebp), %edx	/* Load parent ip (2nd parameter) */
	leal function_trace_op, %ecx	/* Save ftrace_pos in 3rd parameter */
	pushl %esp	/* Save pt_regs as 4th parameter */

GLOBAL(ftrace_regs_call)
	call ftrace_stub

	addl $4, %esp	/* Skip pt_regs */
	movl 14*4(%esp), %eax	/* Move flags back into cs */
	movl %eax, 13*4(%esp)	/* Needed to keep addl from modifying flags */
	movl 12*4(%esp), %eax	/* Get return ip from regs->ip */
	movl %eax, 14*4(%esp)	/* Put return ip back for ret */

	popl %ebx
	popl %ecx
	popl %edx
	popl %esi
	popl %edi
	popl %ebp
	popl %eax
	popl %ds
	popl %es
	popl %fs
	popl %gs
	addl $8, %esp	/* Skip orig_ax and ip */
	popf		/* Pop flags at end (no addl to corrupt flags) */
	jmp ftrace_ret

ftrace_restore_flags:
	popf
	jmp ftrace_stub
#else /* ! CONFIG_DYNAMIC_FTRACE */

ENTRY(mcount)

@@ -1171,9 +1240,6 @@ END(mcount)

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
ENTRY(ftrace_graph_caller)
	cmpl $0, function_trace_stop
	jne ftrace_stub

	pushl %eax
	pushl %ecx
	pushl %edx
@@ -69,25 +69,51 @@
.section .entry.text, "ax"

#ifdef CONFIG_FUNCTION_TRACER

#ifdef CC_USING_FENTRY
# define function_hook __fentry__
#else
# define function_hook mcount
#endif

#ifdef CONFIG_DYNAMIC_FTRACE
ENTRY(mcount)

ENTRY(function_hook)
	retq
END(mcount)
END(function_hook)

/* skip is set if stack has been adjusted */
.macro ftrace_caller_setup skip=0
	MCOUNT_SAVE_FRAME \skip

	/* Load the ftrace_ops into the 3rd parameter */
	leaq function_trace_op, %rdx

	/* Load ip into the first parameter */
	movq RIP(%rsp), %rdi
	subq $MCOUNT_INSN_SIZE, %rdi
	/* Load the parent_ip into the second parameter */
#ifdef CC_USING_FENTRY
	movq SS+16(%rsp), %rsi
#else
	movq 8(%rbp), %rsi
#endif
.endm

ENTRY(ftrace_caller)
	/* Check if tracing was disabled (quick check) */
	cmpl $0, function_trace_stop
	jne ftrace_stub

	MCOUNT_SAVE_FRAME

	movq 0x38(%rsp), %rdi
	movq 8(%rbp), %rsi
	subq $MCOUNT_INSN_SIZE, %rdi
	ftrace_caller_setup
	/* regs go into 4th parameter (but make it NULL) */
	movq $0, %rcx

GLOBAL(ftrace_call)
	call ftrace_stub

	MCOUNT_RESTORE_FRAME
ftrace_return:

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
GLOBAL(ftrace_graph_call)

@@ -98,8 +124,78 @@ GLOBAL(ftrace_stub)
	retq
END(ftrace_caller)

ENTRY(ftrace_regs_caller)
	/* Save the current flags before compare (in SS location) */
	pushfq

	/* Check if tracing was disabled (quick check) */
	cmpl $0, function_trace_stop
	jne ftrace_restore_flags

	/* skip=8 to skip flags saved in SS */
	ftrace_caller_setup 8

	/* Save the rest of pt_regs */
	movq %r15, R15(%rsp)
	movq %r14, R14(%rsp)
	movq %r13, R13(%rsp)
	movq %r12, R12(%rsp)
	movq %r11, R11(%rsp)
	movq %r10, R10(%rsp)
	movq %rbp, RBP(%rsp)
	movq %rbx, RBX(%rsp)
	/* Copy saved flags */
	movq SS(%rsp), %rcx
	movq %rcx, EFLAGS(%rsp)
	/* Kernel segments */
	movq $__KERNEL_DS, %rcx
	movq %rcx, SS(%rsp)
	movq $__KERNEL_CS, %rcx
	movq %rcx, CS(%rsp)
	/* Stack - skipping return address */
	leaq SS+16(%rsp), %rcx
	movq %rcx, RSP(%rsp)

	/* regs go into 4th parameter */
	leaq (%rsp), %rcx

GLOBAL(ftrace_regs_call)
	call ftrace_stub

	/* Copy flags back to SS, to restore them */
	movq EFLAGS(%rsp), %rax
	movq %rax, SS(%rsp)

	/* Handlers can change the RIP */
	movq RIP(%rsp), %rax
	movq %rax, SS+8(%rsp)

	/* restore the rest of pt_regs */
	movq R15(%rsp), %r15
	movq R14(%rsp), %r14
	movq R13(%rsp), %r13
	movq R12(%rsp), %r12
	movq R10(%rsp), %r10
	movq RBP(%rsp), %rbp
	movq RBX(%rsp), %rbx

	/* skip=8 to skip flags saved in SS */
	MCOUNT_RESTORE_FRAME 8

	/* Restore flags */
	popfq

	jmp ftrace_return
ftrace_restore_flags:
	popfq
	jmp ftrace_stub

END(ftrace_regs_caller)


#else /* ! CONFIG_DYNAMIC_FTRACE */
ENTRY(mcount)

ENTRY(function_hook)
	cmpl $0, function_trace_stop
	jne ftrace_stub

@@ -120,8 +216,12 @@ GLOBAL(ftrace_stub)
trace:
	MCOUNT_SAVE_FRAME

	movq 0x38(%rsp), %rdi
	movq RIP(%rsp), %rdi
#ifdef CC_USING_FENTRY
	movq SS+16(%rsp), %rsi
#else
	movq 8(%rbp), %rsi
#endif
	subq $MCOUNT_INSN_SIZE, %rdi

	call *ftrace_trace_function

@@ -129,20 +229,22 @@ trace:
	MCOUNT_RESTORE_FRAME

	jmp ftrace_stub
END(mcount)
END(function_hook)
#endif /* CONFIG_DYNAMIC_FTRACE */
#endif /* CONFIG_FUNCTION_TRACER */

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
ENTRY(ftrace_graph_caller)
	cmpl $0, function_trace_stop
	jne ftrace_stub

	MCOUNT_SAVE_FRAME

#ifdef CC_USING_FENTRY
	leaq SS+16(%rsp), %rdi
	movq $0, %rdx	/* No framepointers needed */
#else
	leaq 8(%rbp), %rdi
	movq 0x38(%rsp), %rsi
	movq (%rbp), %rdx
#endif
	movq RIP(%rsp), %rsi
	subq $MCOUNT_INSN_SIZE, %rsi

	call prepare_ftrace_return
@@ -206,6 +206,21 @@ static int
ftrace_modify_code(unsigned long ip, unsigned const char *old_code,
		   unsigned const char *new_code);

/*
 * Should never be called:
 * As it is only called by __ftrace_replace_code() which is called by
 * ftrace_replace_code() that x86 overrides, and by ftrace_update_code()
 * which is called to turn mcount into nops or nops into function calls
 * but not to convert a function from not using regs to one that uses
 * regs, which ftrace_modify_call() is for.
 */
int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
		       unsigned long addr)
{
	WARN_ON(1);
	return -EINVAL;
}

int ftrace_update_ftrace_func(ftrace_func_t func)
{
	unsigned long ip = (unsigned long)(&ftrace_call);

@@ -220,6 +235,14 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
	ret = ftrace_modify_code(ip, old, new);

	/* Also update the regs callback function */
	if (!ret) {
		ip = (unsigned long)(&ftrace_regs_call);
		memcpy(old, &ftrace_regs_call, MCOUNT_INSN_SIZE);
		new = ftrace_call_replace(ip, (unsigned long)func);
		ret = ftrace_modify_code(ip, old, new);
	}

	atomic_dec(&modifying_ftrace_code);

	return ret;

@@ -299,6 +322,32 @@ static int add_brk_on_nop(struct dyn_ftrace *rec)
	return add_break(rec->ip, old);
}

/*
 * If the record has the FTRACE_FL_REGS set, that means that it
 * wants to convert to a callback that saves all regs. If FTRACE_FL_REGS
 * is not set, then it wants to convert to the normal callback.
 */
static unsigned long get_ftrace_addr(struct dyn_ftrace *rec)
{
	if (rec->flags & FTRACE_FL_REGS)
		return (unsigned long)FTRACE_REGS_ADDR;
	else
		return (unsigned long)FTRACE_ADDR;
}

/*
 * The FTRACE_FL_REGS_EN is set when the record already points to
 * a function that saves all the regs. Basically the '_EN' version
 * represents the current state of the function.
 */
static unsigned long get_ftrace_old_addr(struct dyn_ftrace *rec)
{
	if (rec->flags & FTRACE_FL_REGS_EN)
		return (unsigned long)FTRACE_REGS_ADDR;
	else
		return (unsigned long)FTRACE_ADDR;
}

static int add_breakpoints(struct dyn_ftrace *rec, int enable)
{
	unsigned long ftrace_addr;

@@ -306,7 +355,7 @@ static int add_breakpoints(struct dyn_ftrace *rec, int enable)
	ret = ftrace_test_record(rec, enable);

	ftrace_addr = (unsigned long)FTRACE_ADDR;
	ftrace_addr = get_ftrace_addr(rec);

	switch (ret) {
	case FTRACE_UPDATE_IGNORE:

@@ -316,6 +365,10 @@ static int add_breakpoints(struct dyn_ftrace *rec, int enable)
		/* converting nop to call */
		return add_brk_on_nop(rec);

	case FTRACE_UPDATE_MODIFY_CALL_REGS:
	case FTRACE_UPDATE_MODIFY_CALL:
		ftrace_addr = get_ftrace_old_addr(rec);
		/* fall through */
	case FTRACE_UPDATE_MAKE_NOP:
		/* converting a call to a nop */
		return add_brk_on_call(rec, ftrace_addr);

@@ -360,13 +413,21 @@ static int remove_breakpoint(struct dyn_ftrace *rec)
	 * If not, don't touch the breakpoint, we may just create
	 * a disaster.
	 */
	ftrace_addr = (unsigned long)FTRACE_ADDR;
	ftrace_addr = get_ftrace_addr(rec);
	nop = ftrace_call_replace(ip, ftrace_addr);

	if (memcmp(&ins[1], &nop[1], MCOUNT_INSN_SIZE - 1) == 0)
		goto update;

	/* Check both ftrace_addr and ftrace_old_addr */
	ftrace_addr = get_ftrace_old_addr(rec);
	nop = ftrace_call_replace(ip, ftrace_addr);

	if (memcmp(&ins[1], &nop[1], MCOUNT_INSN_SIZE - 1) != 0)
		return -EINVAL;
	}

update:
	return probe_kernel_write((void *)ip, &nop[0], 1);
}

@@ -405,12 +466,14 @@ static int add_update(struct dyn_ftrace *rec, int enable)
	ret = ftrace_test_record(rec, enable);

	ftrace_addr = (unsigned long)FTRACE_ADDR;
	ftrace_addr = get_ftrace_addr(rec);

	switch (ret) {
	case FTRACE_UPDATE_IGNORE:
		return 0;

	case FTRACE_UPDATE_MODIFY_CALL_REGS:
	case FTRACE_UPDATE_MODIFY_CALL:
	case FTRACE_UPDATE_MAKE_CALL:
		/* converting nop to call */
		return add_update_call(rec, ftrace_addr);

@@ -455,12 +518,14 @@ static int finish_update(struct dyn_ftrace *rec, int enable)
	ret = ftrace_update_record(rec, enable);

	ftrace_addr = (unsigned long)FTRACE_ADDR;
	ftrace_addr = get_ftrace_addr(rec);

	switch (ret) {
	case FTRACE_UPDATE_IGNORE:
		return 0;

	case FTRACE_UPDATE_MODIFY_CALL_REGS:
	case FTRACE_UPDATE_MODIFY_CALL:
	case FTRACE_UPDATE_MAKE_CALL:
		/* converting nop to call */
		return finish_update_call(rec, ftrace_addr);
@@ -541,6 +541,23 @@ reenter_kprobe(struct kprobe *p, struct pt_regs *regs, struct kprobe_ctlblk *kcb
	return 1;
}

#ifdef KPROBES_CAN_USE_FTRACE
static void __kprobes skip_singlestep(struct kprobe *p, struct pt_regs *regs,
				      struct kprobe_ctlblk *kcb)
{
	/*
	 * Emulate singlestep (and also recover regs->ip)
	 * as if there is a 5byte nop
	 */
	regs->ip = (unsigned long)p->addr + MCOUNT_INSN_SIZE;
	if (unlikely(p->post_handler)) {
		kcb->kprobe_status = KPROBE_HIT_SSDONE;
		p->post_handler(p, regs, 0);
	}
	__this_cpu_write(current_kprobe, NULL);
}
#endif

/*
 * Interrupts are disabled on entry as trap3 is an interrupt gate and they
 * remain disabled throughout this function.

@@ -599,6 +616,12 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
	} else if (kprobe_running()) {
		p = __this_cpu_read(current_kprobe);
		if (p->break_handler && p->break_handler(p, regs)) {
#ifdef KPROBES_CAN_USE_FTRACE
			if (kprobe_ftrace(p)) {
				skip_singlestep(p, regs, kcb);
				return 1;
			}
#endif
			setup_singlestep(p, regs, kcb, 0);
			return 1;
		}

@@ -1052,6 +1075,50 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
	return 0;
}

#ifdef KPROBES_CAN_USE_FTRACE
/* Ftrace callback handler for kprobes */
void __kprobes kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
				     struct ftrace_ops *ops, struct pt_regs *regs)
{
	struct kprobe *p;
	struct kprobe_ctlblk *kcb;
	unsigned long flags;

	/* Disable irq for emulating a breakpoint and avoiding preempt */
	local_irq_save(flags);

	p = get_kprobe((kprobe_opcode_t *)ip);
	if (unlikely(!p) || kprobe_disabled(p))
		goto end;

	kcb = get_kprobe_ctlblk();
	if (kprobe_running()) {
		kprobes_inc_nmissed_count(p);
	} else {
		/* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */
		regs->ip = ip + sizeof(kprobe_opcode_t);

		__this_cpu_write(current_kprobe, p);
		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
		if (!p->pre_handler || !p->pre_handler(p, regs))
			skip_singlestep(p, regs, kcb);
		/*
		 * If pre_handler returns !0, it sets regs->ip and
		 * resets current kprobe.
		 */
	}
end:
	local_irq_restore(flags);
}

int __kprobes arch_prepare_kprobe_ftrace(struct kprobe *p)
{
	p->ainsn.insn = NULL;
	p->ainsn.boostable = -1;
	return 0;
}
#endif

int __init arch_init_kprobes(void)
{
	return arch_init_optprobes();
arch/x86/kernel/perf_regs.c (new file, 105 lines)
@@ -0,0 +1,105 @@
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/perf_event.h>
#include <linux/bug.h>
#include <linux/stddef.h>
#include <asm/perf_regs.h>
#include <asm/ptrace.h>

#ifdef CONFIG_X86_32
#define PERF_REG_X86_MAX PERF_REG_X86_32_MAX
#else
#define PERF_REG_X86_MAX PERF_REG_X86_64_MAX
#endif

#define PT_REGS_OFFSET(id, r) [id] = offsetof(struct pt_regs, r)

static unsigned int pt_regs_offset[PERF_REG_X86_MAX] = {
	PT_REGS_OFFSET(PERF_REG_X86_AX, ax),
	PT_REGS_OFFSET(PERF_REG_X86_BX, bx),
	PT_REGS_OFFSET(PERF_REG_X86_CX, cx),
	PT_REGS_OFFSET(PERF_REG_X86_DX, dx),
	PT_REGS_OFFSET(PERF_REG_X86_SI, si),
	PT_REGS_OFFSET(PERF_REG_X86_DI, di),
	PT_REGS_OFFSET(PERF_REG_X86_BP, bp),
	PT_REGS_OFFSET(PERF_REG_X86_SP, sp),
	PT_REGS_OFFSET(PERF_REG_X86_IP, ip),
	PT_REGS_OFFSET(PERF_REG_X86_FLAGS, flags),
	PT_REGS_OFFSET(PERF_REG_X86_CS, cs),
	PT_REGS_OFFSET(PERF_REG_X86_SS, ss),
#ifdef CONFIG_X86_32
	PT_REGS_OFFSET(PERF_REG_X86_DS, ds),
	PT_REGS_OFFSET(PERF_REG_X86_ES, es),
	PT_REGS_OFFSET(PERF_REG_X86_FS, fs),
	PT_REGS_OFFSET(PERF_REG_X86_GS, gs),
#else
	/*
	 * The pt_regs struct does not store
	 * ds, es, fs, gs in 64 bit mode.
	 */
	(unsigned int) -1,
	(unsigned int) -1,
	(unsigned int) -1,
	(unsigned int) -1,
#endif
#ifdef CONFIG_X86_64
	PT_REGS_OFFSET(PERF_REG_X86_R8, r8),
	PT_REGS_OFFSET(PERF_REG_X86_R9, r9),
	PT_REGS_OFFSET(PERF_REG_X86_R10, r10),
	PT_REGS_OFFSET(PERF_REG_X86_R11, r11),
	PT_REGS_OFFSET(PERF_REG_X86_R12, r12),
	PT_REGS_OFFSET(PERF_REG_X86_R13, r13),
	PT_REGS_OFFSET(PERF_REG_X86_R14, r14),
	PT_REGS_OFFSET(PERF_REG_X86_R15, r15),
#endif
};

u64 perf_reg_value(struct pt_regs *regs, int idx)
{
	if (WARN_ON_ONCE(idx >= ARRAY_SIZE(pt_regs_offset)))
		return 0;

	return regs_get_register(regs, pt_regs_offset[idx]);
}

#define REG_RESERVED (~((1ULL << PERF_REG_X86_MAX) - 1ULL))

#ifdef CONFIG_X86_32
int perf_reg_validate(u64 mask)
{
	if (!mask || mask & REG_RESERVED)
		return -EINVAL;

	return 0;
}

u64 perf_reg_abi(struct task_struct *task)
{
	return PERF_SAMPLE_REGS_ABI_32;
}
#else /* CONFIG_X86_64 */
#define REG_NOSUPPORT ((1ULL << PERF_REG_X86_DS) | \
		       (1ULL << PERF_REG_X86_ES) | \
		       (1ULL << PERF_REG_X86_FS) | \
		       (1ULL << PERF_REG_X86_GS))

int perf_reg_validate(u64 mask)
{
	if (!mask || mask & REG_RESERVED)
		return -EINVAL;

	if (mask & REG_NOSUPPORT)
		return -EINVAL;

	return 0;
}

u64 perf_reg_abi(struct task_struct *task)
{
	if (test_tsk_thread_flag(task, TIF_IA32))
		return PERF_SAMPLE_REGS_ABI_32;
	else
		return PERF_SAMPLE_REGS_ABI_64;
}
#endif /* CONFIG_X86_32 */

@@ -157,6 +157,33 @@ static int enable_single_step(struct task_struct *child)
	return 1;
}

void set_task_blockstep(struct task_struct *task, bool on)
{
	unsigned long debugctl;

	/*
	 * Ensure irq/preemption can't change debugctl in between.
	 * Note also that both TIF_BLOCKSTEP and debugctl should
	 * be changed atomically wrt preemption.
	 * FIXME: this means that set/clear TIF_BLOCKSTEP is simply
	 * wrong if task != current, SIGKILL can wakeup the stopped
	 * tracee and set/clear can play with the running task, this
	 * can confuse the next __switch_to_xtra().
	 */
	local_irq_disable();
	debugctl = get_debugctlmsr();
	if (on) {
		debugctl |= DEBUGCTLMSR_BTF;
		set_tsk_thread_flag(task, TIF_BLOCKSTEP);
	} else {
		debugctl &= ~DEBUGCTLMSR_BTF;
		clear_tsk_thread_flag(task, TIF_BLOCKSTEP);
	}
	if (task == current)
		update_debugctlmsr(debugctl);
	local_irq_enable();
}

/*
 * Enable single or block step.
 */

@@ -169,19 +196,10 @@ static void enable_step(struct task_struct *child, bool block)
	 * So no one should try to use debugger block stepping in a program
	 * that uses user-mode single stepping itself.
	 */
	if (enable_single_step(child) && block) {
		unsigned long debugctl = get_debugctlmsr();

		debugctl |= DEBUGCTLMSR_BTF;
		update_debugctlmsr(debugctl);
		set_tsk_thread_flag(child, TIF_BLOCKSTEP);
	} else if (test_tsk_thread_flag(child, TIF_BLOCKSTEP)) {
		unsigned long debugctl = get_debugctlmsr();

		debugctl &= ~DEBUGCTLMSR_BTF;
		update_debugctlmsr(debugctl);
		clear_tsk_thread_flag(child, TIF_BLOCKSTEP);
	}
	if (enable_single_step(child) && block)
		set_task_blockstep(child, true);
	else if (test_tsk_thread_flag(child, TIF_BLOCKSTEP))
		set_task_blockstep(child, false);
}

void user_enable_single_step(struct task_struct *child)

@@ -199,13 +217,8 @@ void user_disable_single_step(struct task_struct *child)
	/*
	 * Make sure block stepping (BTF) is disabled.
	 */
	if (test_tsk_thread_flag(child, TIF_BLOCKSTEP)) {
		unsigned long debugctl = get_debugctlmsr();

		debugctl &= ~DEBUGCTLMSR_BTF;
		update_debugctlmsr(debugctl);
		clear_tsk_thread_flag(child, TIF_BLOCKSTEP);
	}
	if (test_tsk_thread_flag(child, TIF_BLOCKSTEP))
		set_task_blockstep(child, false);

	/* Always clear TIF_SINGLESTEP... */
	clear_tsk_thread_flag(child, TIF_SINGLESTEP);

@@ -41,6 +41,9 @@
/* Adjust the return address of a call insn */
#define UPROBE_FIX_CALL 0x2

/* Instruction will modify TF, don't change it */
#define UPROBE_FIX_SETF 0x4

#define UPROBE_FIX_RIP_AX 0x8000
#define UPROBE_FIX_RIP_CX 0x4000

@@ -239,6 +242,10 @@ static void prepare_fixups(struct arch_uprobe *auprobe, struct insn *insn)
	insn_get_opcode(insn);	/* should be a nop */

	switch (OPCODE1(insn)) {
	case 0x9d:
		/* popf */
		auprobe->fixups |= UPROBE_FIX_SETF;
		break;
	case 0xc3:	/* ret/lret */
	case 0xcb:
	case 0xc2:

@@ -646,7 +653,7 @@ void arch_uprobe_abort_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
 * Skip these instructions as per the currently known x86 ISA.
 * 0x66* { 0x90 | 0x0f 0x1f | 0x0f 0x19 | 0x87 0xc0 }
 */
bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
static bool __skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
{
	int i;

@@ -673,3 +680,46 @@ bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
	}
	return false;
}

bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
{
	bool ret = __skip_sstep(auprobe, regs);
	if (ret && (regs->flags & X86_EFLAGS_TF))
		send_sig(SIGTRAP, current, 0);
	return ret;
}

void arch_uprobe_enable_step(struct arch_uprobe *auprobe)
{
	struct task_struct *task = current;
	struct arch_uprobe_task *autask = &task->utask->autask;
	struct pt_regs *regs = task_pt_regs(task);

	autask->saved_tf = !!(regs->flags & X86_EFLAGS_TF);

	regs->flags |= X86_EFLAGS_TF;
	if (test_tsk_thread_flag(task, TIF_BLOCKSTEP))
		set_task_blockstep(task, false);
}

void arch_uprobe_disable_step(struct arch_uprobe *auprobe)
{
	struct task_struct *task = current;
	struct arch_uprobe_task *autask = &task->utask->autask;
	bool trapped = (task->utask->state == UTASK_SSTEP_TRAPPED);
	struct pt_regs *regs = task_pt_regs(task);
	/*
	 * The state of TIF_BLOCKSTEP was not saved so we can get an extra
	 * SIGTRAP if we do not clear TF. We need to examine the opcode to
	 * make it right.
	 */
	if (unlikely(trapped)) {
		if (!autask->saved_tf)
			regs->flags &= ~X86_EFLAGS_TF;
	} else {
		if (autask->saved_tf)
			send_sig(SIGTRAP, task, 0);
		else if (!(auprobe->fixups & UPROBE_FIX_SETF))
			regs->flags &= ~X86_EFLAGS_TF;
	}
}
@@ -13,9 +13,13 @@
#include <asm/ftrace.h>

#ifdef CONFIG_FUNCTION_TRACER
/* mcount is defined in assembly */
/* mcount and __fentry__ are defined in assembly */
#ifdef CC_USING_FENTRY
EXPORT_SYMBOL(__fentry__);
#else
EXPORT_SYMBOL(mcount);
#endif
#endif

EXPORT_SYMBOL(__get_user_1);
EXPORT_SYMBOL(__get_user_2);
@@ -183,95 +183,6 @@ TRACE_EVENT(kvm_apic,
#define KVM_ISA_VMX 1
#define KVM_ISA_SVM 2

#define VMX_EXIT_REASONS \
	{ EXIT_REASON_EXCEPTION_NMI, "EXCEPTION_NMI" }, \
	{ EXIT_REASON_EXTERNAL_INTERRUPT, "EXTERNAL_INTERRUPT" }, \
	{ EXIT_REASON_TRIPLE_FAULT, "TRIPLE_FAULT" }, \
	{ EXIT_REASON_PENDING_INTERRUPT, "PENDING_INTERRUPT" }, \
	{ EXIT_REASON_NMI_WINDOW, "NMI_WINDOW" }, \
	{ EXIT_REASON_TASK_SWITCH, "TASK_SWITCH" }, \
	{ EXIT_REASON_CPUID, "CPUID" }, \
	{ EXIT_REASON_HLT, "HLT" }, \
	{ EXIT_REASON_INVLPG, "INVLPG" }, \
	{ EXIT_REASON_RDPMC, "RDPMC" }, \
	{ EXIT_REASON_RDTSC, "RDTSC" }, \
	{ EXIT_REASON_VMCALL, "VMCALL" }, \
	{ EXIT_REASON_VMCLEAR, "VMCLEAR" }, \
	{ EXIT_REASON_VMLAUNCH, "VMLAUNCH" }, \
	{ EXIT_REASON_VMPTRLD, "VMPTRLD" }, \
	{ EXIT_REASON_VMPTRST, "VMPTRST" }, \
	{ EXIT_REASON_VMREAD, "VMREAD" }, \
	{ EXIT_REASON_VMRESUME, "VMRESUME" }, \
	{ EXIT_REASON_VMWRITE, "VMWRITE" }, \
	{ EXIT_REASON_VMOFF, "VMOFF" }, \
	{ EXIT_REASON_VMON, "VMON" }, \
	{ EXIT_REASON_CR_ACCESS, "CR_ACCESS" }, \
	{ EXIT_REASON_DR_ACCESS, "DR_ACCESS" }, \
	{ EXIT_REASON_IO_INSTRUCTION, "IO_INSTRUCTION" }, \
	{ EXIT_REASON_MSR_READ, "MSR_READ" }, \
	{ EXIT_REASON_MSR_WRITE, "MSR_WRITE" }, \
	{ EXIT_REASON_MWAIT_INSTRUCTION, "MWAIT_INSTRUCTION" }, \
	{ EXIT_REASON_MONITOR_INSTRUCTION, "MONITOR_INSTRUCTION" }, \
	{ EXIT_REASON_PAUSE_INSTRUCTION, "PAUSE_INSTRUCTION" }, \
	{ EXIT_REASON_MCE_DURING_VMENTRY, "MCE_DURING_VMENTRY" }, \
	{ EXIT_REASON_TPR_BELOW_THRESHOLD, "TPR_BELOW_THRESHOLD" }, \
	{ EXIT_REASON_APIC_ACCESS, "APIC_ACCESS" }, \
	{ EXIT_REASON_EPT_VIOLATION, "EPT_VIOLATION" }, \
	{ EXIT_REASON_EPT_MISCONFIG, "EPT_MISCONFIG" }, \
	{ EXIT_REASON_WBINVD, "WBINVD" }

#define SVM_EXIT_REASONS \
	{ SVM_EXIT_READ_CR0, "read_cr0" }, \
	{ SVM_EXIT_READ_CR3, "read_cr3" }, \
	{ SVM_EXIT_READ_CR4, "read_cr4" }, \
	{ SVM_EXIT_READ_CR8, "read_cr8" }, \
	{ SVM_EXIT_WRITE_CR0, "write_cr0" }, \
	{ SVM_EXIT_WRITE_CR3, "write_cr3" }, \
	{ SVM_EXIT_WRITE_CR4, "write_cr4" }, \
	{ SVM_EXIT_WRITE_CR8, "write_cr8" }, \
	{ SVM_EXIT_READ_DR0, "read_dr0" }, \
	{ SVM_EXIT_READ_DR1, "read_dr1" }, \
	{ SVM_EXIT_READ_DR2, "read_dr2" }, \
	{ SVM_EXIT_READ_DR3, "read_dr3" }, \
	{ SVM_EXIT_WRITE_DR0, "write_dr0" }, \
	{ SVM_EXIT_WRITE_DR1, "write_dr1" }, \
	{ SVM_EXIT_WRITE_DR2, "write_dr2" }, \
	{ SVM_EXIT_WRITE_DR3, "write_dr3" }, \
	{ SVM_EXIT_WRITE_DR5, "write_dr5" }, \
	{ SVM_EXIT_WRITE_DR7, "write_dr7" }, \
	{ SVM_EXIT_EXCP_BASE + DB_VECTOR, "DB excp" }, \
	{ SVM_EXIT_EXCP_BASE + BP_VECTOR, "BP excp" }, \
	{ SVM_EXIT_EXCP_BASE + UD_VECTOR, "UD excp" }, \
	{ SVM_EXIT_EXCP_BASE + PF_VECTOR, "PF excp" }, \
	{ SVM_EXIT_EXCP_BASE + NM_VECTOR, "NM excp" }, \
	{ SVM_EXIT_EXCP_BASE + MC_VECTOR, "MC excp" }, \
	{ SVM_EXIT_INTR, "interrupt" }, \
	{ SVM_EXIT_NMI, "nmi" }, \
	{ SVM_EXIT_SMI, "smi" }, \
	{ SVM_EXIT_INIT, "init" }, \
	{ SVM_EXIT_VINTR, "vintr" }, \
	{ SVM_EXIT_CPUID, "cpuid" }, \
	{ SVM_EXIT_INVD, "invd" }, \
	{ SVM_EXIT_HLT, "hlt" }, \
	{ SVM_EXIT_INVLPG, "invlpg" }, \
	{ SVM_EXIT_INVLPGA, "invlpga" }, \
	{ SVM_EXIT_IOIO, "io" }, \
	{ SVM_EXIT_MSR, "msr" }, \
	{ SVM_EXIT_TASK_SWITCH, "task_switch" }, \
	{ SVM_EXIT_SHUTDOWN, "shutdown" }, \
	{ SVM_EXIT_VMRUN, "vmrun" }, \
	{ SVM_EXIT_VMMCALL, "hypercall" }, \
	{ SVM_EXIT_VMLOAD, "vmload" }, \
	{ SVM_EXIT_VMSAVE, "vmsave" }, \
	{ SVM_EXIT_STGI, "stgi" }, \
	{ SVM_EXIT_CLGI, "clgi" }, \
	{ SVM_EXIT_SKINIT, "skinit" }, \
	{ SVM_EXIT_WBINVD, "wbinvd" }, \
	{ SVM_EXIT_MONITOR, "monitor" }, \
	{ SVM_EXIT_MWAIT, "mwait" }, \
	{ SVM_EXIT_XSETBV, "xsetbv" }, \
	{ SVM_EXIT_NPF, "npf" }

/*
 * Tracepoint for kvm guest exit:
 */
@@ -451,14 +451,9 @@ static void wq_sync_buffer(struct work_struct *work)
{
	struct oprofile_cpu_buffer *b =
		container_of(work, struct oprofile_cpu_buffer, work.work);
	if (b->cpu != smp_processor_id()) {
		printk(KERN_DEBUG "WQ on CPU%d, prefer CPU%d\n",
		       smp_processor_id(), b->cpu);

		if (!cpu_online(b->cpu)) {
			cancel_delayed_work(&b->work);
			return;
		}
	if (b->cpu != smp_processor_id() && !cpu_online(b->cpu)) {
		cancel_delayed_work(&b->work);
		return;
	}
	sync_buffer(b->cpu);
@@ -10,6 +10,7 @@
#include <linux/kallsyms.h>
#include <linux/linkage.h>
#include <linux/bitops.h>
#include <linux/ptrace.h>
#include <linux/ktime.h>
#include <linux/sched.h>
#include <linux/types.h>

@@ -18,6 +19,28 @@
#include <asm/ftrace.h>

/*
 * If the arch supports passing the variable contents of
 * function_trace_op as the third parameter back from the
 * mcount call, then the arch should define this as 1.
 */
#ifndef ARCH_SUPPORTS_FTRACE_OPS
#define ARCH_SUPPORTS_FTRACE_OPS 0
#endif

/*
 * If the arch's mcount caller does not support all of ftrace's
 * features, then it must call an indirect function that
 * does. Or at least does enough to prevent any unwelcome side effects.
 */
#if !defined(CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST) || \
	!ARCH_SUPPORTS_FTRACE_OPS
# define FTRACE_FORCE_LIST_FUNC 1
#else
# define FTRACE_FORCE_LIST_FUNC 0
#endif
|
||||
|
||||
|
||||
struct module;
|
||||
struct ftrace_hash;
|
||||
|
||||
@ -29,7 +52,10 @@ ftrace_enable_sysctl(struct ctl_table *table, int write,
|
||||
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
 		     void __user *buffer, size_t *lenp,
 		     loff_t *ppos);
 
-typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip);
+struct ftrace_ops;
+
+typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip,
+			      struct ftrace_ops *op, struct pt_regs *regs);
 
 /*
  * FTRACE_OPS_FL_* bits denote the state of ftrace_ops struct and are
@@ -45,12 +71,33 @@ typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip);
  * could be controled by following calls:
  *   ftrace_function_local_enable
  *   ftrace_function_local_disable
+ * SAVE_REGS - The ftrace_ops wants regs saved at each function called
+ *            and passed to the callback. If this flag is set, but the
+ *            architecture does not support passing regs
+ *            (ARCH_SUPPORTS_FTRACE_SAVE_REGS is not defined), then the
+ *            ftrace_ops will fail to register, unless the next flag
+ *            is set.
+ * SAVE_REGS_IF_SUPPORTED - This is the same as SAVE_REGS, but if the
+ *            handler can handle an arch that does not save regs
+ *            (the handler tests if regs == NULL), then it can set
+ *            this flag instead. It will not fail registering the ftrace_ops
+ *            but, the regs field will be NULL if the arch does not support
+ *            passing regs to the handler.
+ *            Note, if this flag is set, the SAVE_REGS flag will automatically
+ *            get set upon registering the ftrace_ops, if the arch supports it.
+ * RECURSION_SAFE - The ftrace_ops can set this to tell the ftrace infrastructure
+ *            that the call back has its own recursion protection. If it does
+ *            not set this, then the ftrace infrastructure will add recursion
+ *            protection for the caller.
  */
 enum {
-	FTRACE_OPS_FL_ENABLED			= 1 << 0,
-	FTRACE_OPS_FL_GLOBAL			= 1 << 1,
-	FTRACE_OPS_FL_DYNAMIC			= 1 << 2,
-	FTRACE_OPS_FL_CONTROL			= 1 << 3,
+	FTRACE_OPS_FL_ENABLED			= 1 << 0,
+	FTRACE_OPS_FL_GLOBAL			= 1 << 1,
+	FTRACE_OPS_FL_DYNAMIC			= 1 << 2,
+	FTRACE_OPS_FL_CONTROL			= 1 << 3,
+	FTRACE_OPS_FL_SAVE_REGS			= 1 << 4,
+	FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED	= 1 << 5,
+	FTRACE_OPS_FL_RECURSION_SAFE		= 1 << 6,
 };
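The new ops flags combine as plain bit flags. A small user-space sketch (shortened names, not the kernel header itself) of how the SAVE_REGS_IF_SUPPORTED promotion described in the comment could behave at registration time:

```c
#include <assert.h>

/* Shortened mirror of the FTRACE_OPS_FL_* values added above. */
enum {
	FL_ENABLED                = 1 << 0,
	FL_GLOBAL                 = 1 << 1,
	FL_DYNAMIC                = 1 << 2,
	FL_CONTROL                = 1 << 3,
	FL_SAVE_REGS              = 1 << 4,
	FL_SAVE_REGS_IF_SUPPORTED = 1 << 5,
	FL_RECURSION_SAFE         = 1 << 6,
};

/*
 * Sketch of the registration-time rule from the comment: when the arch
 * can save regs, IF_SUPPORTED is promoted to SAVE_REGS; when it cannot,
 * registration still succeeds and the callback just sees regs == NULL.
 */
static int effective_flags(int flags, int arch_supports_regs)
{
	if ((flags & FL_SAVE_REGS_IF_SUPPORTED) && arch_supports_regs)
		flags |= FL_SAVE_REGS;
	return flags;
}
```

With plain SAVE_REGS and no arch support, registration would instead fail outright; only the IF_SUPPORTED variant degrades gracefully.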
 
 struct ftrace_ops {
@@ -163,7 +210,8 @@ static inline int ftrace_function_local_disabled(struct ftrace_ops *ops)
 	return *this_cpu_ptr(ops->disabled);
 }
 
-extern void ftrace_stub(unsigned long a0, unsigned long a1);
+extern void ftrace_stub(unsigned long a0, unsigned long a1,
+			struct ftrace_ops *op, struct pt_regs *regs);
 
 #else /* !CONFIG_FUNCTION_TRACER */
 /*
@@ -172,6 +220,10 @@ extern void ftrace_stub(unsigned long a0, unsigned long a1);
  */
 #define register_ftrace_function(ops) ({ 0; })
 #define unregister_ftrace_function(ops) ({ 0; })
+static inline int ftrace_nr_registered_ops(void)
+{
+	return 0;
+}
 static inline void clear_ftrace_function(void) { }
 static inline void ftrace_kill(void) { }
 static inline void ftrace_stop(void) { }
@@ -227,12 +279,33 @@ extern void unregister_ftrace_function_probe_all(char *glob);
 
 extern int ftrace_text_reserved(void *start, void *end);
 
+extern int ftrace_nr_registered_ops(void);
+
+/*
+ * The dyn_ftrace record's flags field is split into two parts.
+ * the first part which is '0-FTRACE_REF_MAX' is a counter of
+ * the number of callbacks that have registered the function that
+ * the dyn_ftrace descriptor represents.
+ *
+ * The second part is a mask:
+ *  ENABLED - the function is being traced
+ *  REGS    - the record wants the function to save regs
+ *  REGS_EN - the function is set up to save regs.
+ *
+ * When a new ftrace_ops is registered and wants a function to save
+ * pt_regs, the rec->flag REGS is set. When the function has been
+ * set up to save regs, the REG_EN flag is set. Once a function
+ * starts saving regs it will do so until all ftrace_ops are removed
+ * from tracing that function.
+ */
 enum {
-	FTRACE_FL_ENABLED	= (1 << 30),
+	FTRACE_FL_ENABLED	= (1UL << 29),
+	FTRACE_FL_REGS		= (1UL << 30),
+	FTRACE_FL_REGS_EN	= (1UL << 31)
 };
 
-#define FTRACE_FL_MASK		(0x3UL << 30)
-#define FTRACE_REF_MAX		((1 << 30) - 1)
+#define FTRACE_FL_MASK		(0x7UL << 29)
+#define FTRACE_REF_MAX		((1UL << 29) - 1)
 
 struct dyn_ftrace {
 	union {
@@ -244,6 +317,8 @@ struct dyn_ftrace {
 };
 
 int ftrace_force_update(void);
+int ftrace_set_filter_ip(struct ftrace_ops *ops, unsigned long ip,
+			 int remove, int reset);
 int ftrace_set_filter(struct ftrace_ops *ops, unsigned char *buf,
 		      int len, int reset);
 int ftrace_set_notrace(struct ftrace_ops *ops, unsigned char *buf,
@@ -263,9 +338,23 @@ enum {
 	FTRACE_STOP_FUNC_RET		= (1 << 4),
 };
 
+/*
+ * The FTRACE_UPDATE_* enum is used to pass information back
+ * from the ftrace_update_record() and ftrace_test_record()
+ * functions. These are called by the code update routines
+ * to find out what is to be done for a given function.
+ *
+ *  IGNORE           - The function is already what we want it to be
+ *  MAKE_CALL        - Start tracing the function
+ *  MODIFY_CALL      - Stop saving regs for the function
+ *  MODIFY_CALL_REGS - Start saving regs for the function
+ *  MAKE_NOP         - Stop tracing the function
+ */
 enum {
 	FTRACE_UPDATE_IGNORE,
 	FTRACE_UPDATE_MAKE_CALL,
+	FTRACE_UPDATE_MODIFY_CALL,
+	FTRACE_UPDATE_MODIFY_CALL_REGS,
 	FTRACE_UPDATE_MAKE_NOP,
 };
 
@@ -317,7 +406,9 @@ extern int ftrace_dyn_arch_init(void *data);
 extern void ftrace_replace_code(int enable);
 extern int ftrace_update_ftrace_func(ftrace_func_t func);
 extern void ftrace_caller(void);
+extern void ftrace_regs_caller(void);
 extern void ftrace_call(void);
+extern void ftrace_regs_call(void);
 extern void mcount_call(void);
 
 void ftrace_modify_all_code(int command);
@@ -325,6 +416,15 @@ void ftrace_modify_all_code(int command);
 #ifndef FTRACE_ADDR
 #define FTRACE_ADDR ((unsigned long)ftrace_caller)
 #endif
+
+#ifndef FTRACE_REGS_ADDR
+#ifdef ARCH_SUPPORTS_FTRACE_SAVE_REGS
+# define FTRACE_REGS_ADDR ((unsigned long)ftrace_regs_caller)
+#else
+# define FTRACE_REGS_ADDR FTRACE_ADDR
+#endif
+#endif
+
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 extern void ftrace_graph_caller(void);
 extern int ftrace_enable_ftrace_graph_caller(void);
@@ -380,6 +480,39 @@ extern int ftrace_make_nop(struct module *mod,
  */
 extern int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr);
 
+#ifdef ARCH_SUPPORTS_FTRACE_SAVE_REGS
+/**
+ * ftrace_modify_call - convert from one addr to another (no nop)
+ * @rec: the mcount call site record
+ * @old_addr: the address expected to be currently called to
+ * @addr: the address to change to
+ *
+ * This is a very sensitive operation and great care needs
+ * to be taken by the arch. The operation should carefully
+ * read the location, check to see if what is read is indeed
+ * what we expect it to be, and then on success of the compare,
+ * it should write to the location.
+ *
+ * The code segment at @rec->ip should be a caller to @old_addr
+ *
+ * Return must be:
+ *  0 on success
+ *  -EFAULT on error reading the location
+ *  -EINVAL on a failed compare of the contents
+ *  -EPERM on error writing to the location
+ * Any other value will be considered a failure.
+ */
+extern int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+			      unsigned long addr);
+#else
+/* Should never be called */
+static inline int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+				     unsigned long addr)
+{
+	return -EINVAL;
+}
+#endif
+
 /* May be defined in arch */
 extern int ftrace_arch_read_dyn_info(char *buf, int size);
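The kernel-doc above pins down a compare-then-write contract for the arch code. A user-space analogue (plain buffers standing in for the patched call site; `modify_call` is a hypothetical helper, not a kernel function) makes the failed-compare case concrete:

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/*
 * User-space analogue of the ftrace_modify_call() contract: read the
 * site, verify it still calls the old address, and only then write the
 * replacement.  -EINVAL models the failed compare of the contents.
 */
static int modify_call(unsigned char *site, const unsigned char *expect,
		       const unsigned char *repl, size_t len)
{
	if (memcmp(site, expect, len) != 0)
		return -EINVAL;
	memcpy(site, repl, len);
	return 0;
}
```

The real arch code must additionally handle faults on the read (-EFAULT) and on the write (-EPERM), which a user-space memcpy cannot model.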
 
@@ -387,7 +520,7 @@ extern int skip_trace(unsigned long ip);
 
 extern void ftrace_disable_daemon(void);
 extern void ftrace_enable_daemon(void);
-#else
+#else /* CONFIG_DYNAMIC_FTRACE */
 static inline int skip_trace(unsigned long ip) { return 0; }
 static inline int ftrace_force_update(void) { return 0; }
 static inline void ftrace_disable_daemon(void) { }
@@ -405,6 +538,10 @@ static inline int ftrace_text_reserved(void *start, void *end)
 {
 	return 0;
 }
+static inline unsigned long ftrace_location(unsigned long ip)
+{
+	return 0;
+}
 
 /*
  * Again users of functions that have ftrace_ops may not
@@ -413,6 +550,7 @@ static inline int ftrace_text_reserved(void *start, void *end)
  */
 #define ftrace_regex_open(ops, flag, inod, file) ({ -ENODEV; })
 #define ftrace_set_early_filter(ops, buf, enable) do { } while (0)
+#define ftrace_set_filter_ip(ops, ip, remove, reset) ({ -ENODEV; })
 #define ftrace_set_filter(ops, buf, len, reset) ({ -ENODEV; })
 #define ftrace_set_notrace(ops, buf, len, reset) ({ -ENODEV; })
 #define ftrace_free_filter(ops) do { } while (0)
 
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -38,6 +38,7 @@
 #include <linux/spinlock.h>
 #include <linux/rcupdate.h>
 #include <linux/mutex.h>
+#include <linux/ftrace.h>
 
 #ifdef CONFIG_KPROBES
 #include <asm/kprobes.h>
@@ -48,14 +49,26 @@
 #define KPROBE_REENTER		0x00000004
 #define KPROBE_HIT_SSDONE	0x00000008
 
+/*
+ * If function tracer is enabled and the arch supports full
+ * passing of pt_regs to function tracing, then kprobes can
+ * optimize on top of function tracing.
+ */
+#if defined(CONFIG_FUNCTION_TRACER) && defined(ARCH_SUPPORTS_FTRACE_SAVE_REGS) \
+	&& defined(ARCH_SUPPORTS_KPROBES_ON_FTRACE)
+# define KPROBES_CAN_USE_FTRACE
+#endif
+
 /* Attach to insert probes on any functions which should be ignored*/
 #define __kprobes	__attribute__((__section__(".kprobes.text")))
 
 #else /* CONFIG_KPROBES */
 typedef int kprobe_opcode_t;
 struct arch_specific_insn {
 	int dummy;
 };
 #define __kprobes
 
 #endif /* CONFIG_KPROBES */
 
 struct kprobe;
@@ -128,6 +141,7 @@ struct kprobe {
  * NOTE:
  * this flag is only for optimized_kprobe.
  */
+#define KPROBE_FLAG_FTRACE	8 /* probe is using ftrace */
 
 /* Has this kprobe gone ? */
 static inline int kprobe_gone(struct kprobe *p)
@@ -146,6 +160,13 @@ static inline int kprobe_optimized(struct kprobe *p)
 {
 	return p->flags & KPROBE_FLAG_OPTIMIZED;
 }
+
+/* Is this kprobe uses ftrace ? */
+static inline int kprobe_ftrace(struct kprobe *p)
+{
+	return p->flags & KPROBE_FLAG_FTRACE;
+}
+
 /*
  * Special probe type that uses setjmp-longjmp type tricks to resume
  * execution at a specified entry with a matching prototype corresponding
@@ -295,6 +316,12 @@ extern int proc_kprobes_optimization_handler(struct ctl_table *table,
 #endif
 
 #endif /* CONFIG_OPTPROBES */
+#ifdef KPROBES_CAN_USE_FTRACE
+extern void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
+				  struct ftrace_ops *ops, struct pt_regs *regs);
+extern int arch_prepare_kprobe_ftrace(struct kprobe *p);
+#endif
+
 
 /* Get the kprobe at this addr (if any) - called with preemption disabled */
 struct kprobe *get_kprobe(void *addr);
 
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -130,8 +130,10 @@ enum perf_event_sample_format {
 	PERF_SAMPLE_STREAM_ID			= 1U << 9,
 	PERF_SAMPLE_RAW				= 1U << 10,
 	PERF_SAMPLE_BRANCH_STACK		= 1U << 11,
+	PERF_SAMPLE_REGS_USER			= 1U << 12,
+	PERF_SAMPLE_STACK_USER			= 1U << 13,
 
-	PERF_SAMPLE_MAX = 1U << 12,		/* non-ABI */
+	PERF_SAMPLE_MAX = 1U << 14,		/* non-ABI */
 };
 
 /*
@@ -162,6 +164,15 @@ enum perf_branch_sample_type {
 	 PERF_SAMPLE_BRANCH_KERNEL|\
 	 PERF_SAMPLE_BRANCH_HV)
 
+/*
+ * Values to determine ABI of the registers dump.
+ */
+enum perf_sample_regs_abi {
+	PERF_SAMPLE_REGS_ABI_NONE	= 0,
+	PERF_SAMPLE_REGS_ABI_32		= 1,
+	PERF_SAMPLE_REGS_ABI_64		= 2,
+};
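A sample that sets PERF_SAMPLE_REGS_USER carries one u64 for the ABI word plus one u64 per bit set in sample_regs_user. A small sketch of that size arithmetic (mirrored constants, not the uapi header itself):

```c
#include <assert.h>

/* Mirrored bit values from the enum above. */
#define SAMPLE_BRANCH_STACK	(1U << 11)
#define SAMPLE_REGS_USER	(1U << 12)
#define SAMPLE_STACK_USER	(1U << 13)
#define SAMPLE_MAX		(1U << 14)

/* u64 abi word, plus one u64 per register selected in the mask. */
static unsigned int regs_user_size(unsigned long long mask)
{
	return 8 + 8 * __builtin_popcountll(mask);
}
```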
 
 /*
  * The format of the data returned by read() on a perf event fd,
  * as specified by attr.read_format:
@@ -194,6 +205,8 @@ enum perf_event_read_format {
 #define PERF_ATTR_SIZE_VER0	64	/* sizeof first published struct */
 #define PERF_ATTR_SIZE_VER1	72	/* add: config2 */
 #define PERF_ATTR_SIZE_VER2	80	/* add: branch_sample_type */
+#define PERF_ATTR_SIZE_VER3	96	/* add: sample_regs_user */
+					/* add: sample_stack_user */
 
 /*
  * Hardware event_id to monitor via a performance monitoring event:
@@ -255,7 +268,10 @@ struct perf_event_attr {
 				exclude_host   :  1, /* don't count in host   */
 				exclude_guest  :  1, /* don't count in guest  */
 
-				__reserved_1   : 43;
+				exclude_callchain_kernel : 1, /* exclude kernel callchains */
+				exclude_callchain_user   : 1, /* exclude user callchains */
+
+				__reserved_1   : 41;
 
 	union {
 		__u32		wakeup_events;	  /* wakeup every n events */
@@ -271,7 +287,21 @@ struct perf_event_attr {
 		__u64		bp_len;
 		__u64		config2; /* extension of config1 */
 	};
-	__u64	branch_sample_type; /* enum branch_sample_type */
+	__u64	branch_sample_type; /* enum perf_branch_sample_type */
+
+	/*
+	 * Defines set of user regs to dump on samples.
+	 * See asm/perf_regs.h for details.
+	 */
+	__u64	sample_regs_user;
+
+	/*
+	 * Defines size of the user stack to dump on samples.
+	 */
+	__u32	sample_stack_user;
+
+	/* Align to u64. */
+	__u32	__reserved_2;
 };
 
 #define perf_flags(attr)	(*(&(attr)->read_format + 1))
@@ -550,6 +580,13 @@ enum perf_event_type {
 	 *	  char			data[size];}&& PERF_SAMPLE_RAW
 	 *
 	 *	{ u64 from, to, flags } lbr[nr];} && PERF_SAMPLE_BRANCH_STACK
+	 *
+	 *	{ u64			abi; # enum perf_sample_regs_abi
+	 *	  u64			regs[weight(mask)]; } && PERF_SAMPLE_REGS_USER
+	 *
+	 *	{ u64			size;
+	 *	  char			data[size];
+	 *	  u64			dyn_size; } && PERF_SAMPLE_STACK_USER
 	 * };
 	 */
 	PERF_RECORD_SAMPLE		= 9,
@@ -611,6 +648,7 @@ struct perf_guest_info_callbacks {
 #include <linux/static_key.h>
 #include <linux/atomic.h>
 #include <linux/sysfs.h>
+#include <linux/perf_regs.h>
 #include <asm/local.h>
 
 struct perf_callchain_entry {
@@ -656,6 +694,11 @@ struct perf_branch_stack {
 	struct perf_branch_entry	entries[0];
 };
 
+struct perf_regs_user {
+	__u64		abi;
+	struct pt_regs	*regs;
+};
+
 struct task_struct;
 
 /*
@@ -1135,6 +1178,8 @@ struct perf_sample_data {
 	struct perf_callchain_entry	*callchain;
 	struct perf_raw_record		*raw;
 	struct perf_branch_stack	*br_stack;
+	struct perf_regs_user		regs_user;
+	u64				stack_user_size;
 };
 
 static inline void perf_sample_data_init(struct perf_sample_data *data,
@@ -1144,7 +1189,10 @@ static inline void perf_sample_data_init(struct perf_sample_data *data,
 	data->addr = addr;
 	data->raw  = NULL;
 	data->br_stack = NULL;
-	data->period	= period;
+	data->period = period;
+	data->regs_user.abi = PERF_SAMPLE_REGS_ABI_NONE;
+	data->regs_user.regs = NULL;
+	data->stack_user_size = 0;
 }
 
 extern void perf_output_sample(struct perf_output_handle *handle,
@@ -1292,8 +1340,10 @@ static inline bool has_branch_stack(struct perf_event *event)
 extern int perf_output_begin(struct perf_output_handle *handle,
 			     struct perf_event *event, unsigned int size);
 extern void perf_output_end(struct perf_output_handle *handle);
-extern void perf_output_copy(struct perf_output_handle *handle,
+extern unsigned int perf_output_copy(struct perf_output_handle *handle,
 			     const void *buf, unsigned int len);
+extern unsigned int perf_output_skip(struct perf_output_handle *handle,
+				     unsigned int len);
 extern int perf_swevent_get_recursion_context(void);
 extern void perf_swevent_put_recursion_context(int rctx);
 extern void perf_event_enable(struct perf_event *event);
 
--- /dev/null
+++ b/include/linux/perf_regs.h (new file, 25 lines)
@@ -0,0 +1,25 @@
+#ifndef _LINUX_PERF_REGS_H
+#define _LINUX_PERF_REGS_H
+
+#ifdef CONFIG_HAVE_PERF_REGS
+#include <asm/perf_regs.h>
+u64 perf_reg_value(struct pt_regs *regs, int idx);
+int perf_reg_validate(u64 mask);
+u64 perf_reg_abi(struct task_struct *task);
+#else
+static inline u64 perf_reg_value(struct pt_regs *regs, int idx)
+{
+	return 0;
+}
+
+static inline int perf_reg_validate(u64 mask)
+{
+	return mask ? -ENOSYS : 0;
+}
+
+static inline u64 perf_reg_abi(struct task_struct *task)
+{
+	return PERF_SAMPLE_REGS_ABI_NONE;
+}
+#endif /* CONFIG_HAVE_PERF_REGS */
+#endif /* _LINUX_PERF_REGS_H */
 
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -446,6 +446,9 @@ extern int get_dumpable(struct mm_struct *mm);
 #define MMF_VM_HUGEPAGE		17	/* set when VM_HUGEPAGE is set on vma */
 #define MMF_EXE_FILE_CHANGED	18	/* see prctl_set_mm_exe_file() */
 
+#define MMF_HAS_UPROBES		19	/* has uprobes */
+#define MMF_RECALC_UPROBES	20	/* MMF_HAS_UPROBES can be wrong */
+
 #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK)
 
 struct sighand_struct {
 
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -99,25 +99,27 @@ struct xol_area {
 
 struct uprobes_state {
 	struct xol_area		*xol_area;
-	atomic_t		count;
 };
 
 extern int __weak set_swbp(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
-extern int __weak set_orig_insn(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr, bool verify);
+extern int __weak set_orig_insn(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
 extern bool __weak is_swbp_insn(uprobe_opcode_t *insn);
 extern int uprobe_register(struct inode *inode, loff_t offset, struct uprobe_consumer *uc);
 extern void uprobe_unregister(struct inode *inode, loff_t offset, struct uprobe_consumer *uc);
 extern int uprobe_mmap(struct vm_area_struct *vma);
 extern void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end);
+extern void uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm);
 extern void uprobe_free_utask(struct task_struct *t);
 extern void uprobe_copy_process(struct task_struct *t);
 extern unsigned long __weak uprobe_get_swbp_addr(struct pt_regs *regs);
 extern void __weak arch_uprobe_enable_step(struct arch_uprobe *arch);
 extern void __weak arch_uprobe_disable_step(struct arch_uprobe *arch);
 extern int uprobe_post_sstep_notifier(struct pt_regs *regs);
 extern int uprobe_pre_sstep_notifier(struct pt_regs *regs);
 extern void uprobe_notify_resume(struct pt_regs *regs);
 extern bool uprobe_deny_signal(void);
 extern bool __weak arch_uprobe_skip_sstep(struct arch_uprobe *aup, struct pt_regs *regs);
 extern void uprobe_clear_state(struct mm_struct *mm);
-extern void uprobe_reset_state(struct mm_struct *mm);
 #else /* !CONFIG_UPROBES */
 struct uprobes_state {
 };
@@ -138,6 +140,10 @@ static inline void
 uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end)
 {
 }
+static inline void
+uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm)
+{
+}
 static inline void uprobe_notify_resume(struct pt_regs *regs)
 {
 }
@@ -158,8 +164,5 @@ static inline void uprobe_copy_process(struct task_struct *t)
 static inline void uprobe_clear_state(struct mm_struct *mm)
 {
 }
-static inline void uprobe_reset_state(struct mm_struct *mm)
-{
-}
 #endif /* !CONFIG_UPROBES */
 #endif /* _LINUX_UPROBES_H */
 
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -97,7 +97,7 @@ obj-$(CONFIG_COMPAT_BINFMT_ELF) += elfcore.o
 obj-$(CONFIG_BINFMT_ELF_FDPIC) += elfcore.o
 obj-$(CONFIG_FUNCTION_TRACER) += trace/
 obj-$(CONFIG_TRACING) += trace/
-obj-$(CONFIG_X86_DS) += trace/
+obj-$(CONFIG_TRACE_CLOCK) += trace/
 obj-$(CONFIG_RING_BUFFER) += trace/
 obj-$(CONFIG_TRACEPOINTS) += trace/
 obj-$(CONFIG_IRQ_WORK) += irq_work.o
 
--- a/kernel/events/callchain.c
+++ b/kernel/events/callchain.c
@@ -159,6 +159,11 @@ perf_callchain(struct perf_event *event, struct pt_regs *regs)
 	int rctx;
 	struct perf_callchain_entry *entry;
 
+	int kernel = !event->attr.exclude_callchain_kernel;
+	int user   = !event->attr.exclude_callchain_user;
+
+	if (!kernel && !user)
+		return NULL;
+
 	entry = get_callchain_entry(&rctx);
 	if (rctx == -1)
@@ -169,24 +174,29 @@ perf_callchain(struct perf_event *event, struct pt_regs *regs)
 
 	entry->nr = 0;
 
-	if (!user_mode(regs)) {
+	if (kernel && !user_mode(regs)) {
 		perf_callchain_store(entry, PERF_CONTEXT_KERNEL);
 		perf_callchain_kernel(entry, regs);
-		if (current->mm)
-			regs = task_pt_regs(current);
-		else
-			regs = NULL;
 	}
 
-	if (regs) {
-		/*
-		 * Disallow cross-task user callchains.
-		 */
-		if (event->ctx->task && event->ctx->task != current)
-			goto exit_put;
+	if (user) {
+		if (!user_mode(regs)) {
+			if (current->mm)
+				regs = task_pt_regs(current);
+			else
+				regs = NULL;
+		}
 
-		perf_callchain_store(entry, PERF_CONTEXT_USER);
-		perf_callchain_user(entry, regs);
+		if (regs) {
+			/*
+			 * Disallow cross-task user callchains.
+			 */
+			if (event->ctx->task && event->ctx->task != current)
+				goto exit_put;
+
+			perf_callchain_store(entry, PERF_CONTEXT_USER);
+			perf_callchain_user(entry, regs);
+		}
 	}
 
 exit_put:
 
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -36,6 +36,7 @@
 #include <linux/perf_event.h>
 #include <linux/ftrace_event.h>
 #include <linux/hw_breakpoint.h>
+#include <linux/mm_types.h>
 
 #include "internal.h"
 
@@ -3764,6 +3765,132 @@ int perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *cbs)
 }
 EXPORT_SYMBOL_GPL(perf_unregister_guest_info_callbacks);
 
+static void
+perf_output_sample_regs(struct perf_output_handle *handle,
+			struct pt_regs *regs, u64 mask)
+{
+	int bit;
+
+	for_each_set_bit(bit, (const unsigned long *) &mask,
+			 sizeof(mask) * BITS_PER_BYTE) {
+		u64 val;
+
+		val = perf_reg_value(regs, bit);
+		perf_output_put(handle, val);
+	}
+}
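perf_output_sample_regs() walks the mask with for_each_set_bit(), so the dump is ordered by ascending register index. A portable sketch of the same walk, with a plain array standing in for the output handle and for perf_reg_value():

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the mask-ordered register dump: one u64 per set bit in
 * sample_regs_user, emitted in ascending bit order.  regs[bit] stands
 * in for perf_reg_value(regs, bit).
 */
static int dump_regs(const uint64_t *regs, uint64_t mask, uint64_t *out)
{
	int bit, n = 0;

	for (bit = 0; bit < 64; bit++)
		if (mask & (1ULL << bit))
			out[n++] = regs[bit];
	return n;	/* equals hweight64(mask) */
}
```

This ordering is what lets the consumer reconstruct which u64 belongs to which register using only the sample_regs_user mask from the attr.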
+
+static void perf_sample_regs_user(struct perf_regs_user *regs_user,
+				  struct pt_regs *regs)
+{
+	if (!user_mode(regs)) {
+		if (current->mm)
+			regs = task_pt_regs(current);
+		else
+			regs = NULL;
+	}
+
+	if (regs) {
+		regs_user->regs = regs;
+		regs_user->abi  = perf_reg_abi(current);
+	}
+}
+
+/*
+ * Get remaining task size from user stack pointer.
+ *
+ * It'd be better to take stack vma map and limit this more
+ * precisly, but there's no way to get it safely under interrupt,
+ * so using TASK_SIZE as limit.
+ */
+static u64 perf_ustack_task_size(struct pt_regs *regs)
+{
+	unsigned long addr = perf_user_stack_pointer(regs);
+
+	if (!addr || addr >= TASK_SIZE)
+		return 0;
+
+	return TASK_SIZE - addr;
+}
+
+static u16
+perf_sample_ustack_size(u16 stack_size, u16 header_size,
+			struct pt_regs *regs)
+{
+	u64 task_size;
+
+	/* No regs, no stack pointer, no dump. */
+	if (!regs)
+		return 0;
+
+	/*
+	 * Check if we fit in with the requested stack size into the:
+	 * - TASK_SIZE
+	 *   If we don't, we limit the size to the TASK_SIZE.
+	 *
+	 * - remaining sample size
+	 *   If we don't, we customize the stack size to
+	 *   fit in to the remaining sample size.
+	 */
+
+	task_size  = min((u64) USHRT_MAX, perf_ustack_task_size(regs));
+	stack_size = min(stack_size, (u16) task_size);
+
+	/* Current header size plus static size and dynamic size. */
+	header_size += 2 * sizeof(u64);
+
+	/* Do we fit in with the current stack dump size? */
+	if ((u16) (header_size + stack_size) < header_size) {
+		/*
+		 * If we overflow the maximum size for the sample,
+		 * we customize the stack dump size to fit in.
+		 */
+		stack_size = USHRT_MAX - header_size - sizeof(u64);
+		stack_size = round_up(stack_size, sizeof(u64));
+	}
+
+	return stack_size;
+}
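The clamping in perf_sample_ustack_size() can be replayed in user space. A sketch under the assumption that the remaining task size is simply passed in as a parameter (in the kernel it comes from perf_ustack_task_size() and TASK_SIZE):

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

/* round_up() to a u64 boundary, as the kernel helper does. */
static uint32_t round_up8(uint32_t x)
{
	return (x + 7u) & ~7u;
}

/*
 * User-space replay of perf_sample_ustack_size(): clamp the request to
 * the remaining task size, then shrink it if header + stack would
 * overflow the u16 sample size.
 */
static uint16_t ustack_size(uint16_t stack_size, uint16_t header_size,
			    uint64_t task_size)
{
	if (task_size > USHRT_MAX)
		task_size = USHRT_MAX;
	if (stack_size > task_size)
		stack_size = (uint16_t)task_size;

	/* the static-size and dynamic-size u64 fields ride along */
	header_size += 2 * sizeof(uint64_t);

	/* u16 wrap-around signals that header + stack overflowed */
	if ((uint16_t)(header_size + stack_size) < header_size)
		stack_size = (uint16_t)round_up8(USHRT_MAX - header_size - 8);

	return stack_size;
}
```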
+
+static void
+perf_output_sample_ustack(struct perf_output_handle *handle, u64 dump_size,
+			  struct pt_regs *regs)
+{
+	/* Case of a kernel thread, nothing to dump */
+	if (!regs) {
+		u64 size = 0;
+		perf_output_put(handle, size);
+	} else {
+		unsigned long sp;
+		unsigned int rem;
+		u64 dyn_size;
+
+		/*
+		 * We dump:
+		 * static size
+		 *   - the size requested by user or the best one we can fit
+		 *     in to the sample max size
+		 * data
+		 *   - user stack dump data
+		 * dynamic size
+		 *   - the actual dumped size
+		 */
+
+		/* Static size. */
+		perf_output_put(handle, dump_size);
+
+		/* Data. */
+		sp = perf_user_stack_pointer(regs);
+		rem = __output_copy_user(handle, (void *) sp, dump_size);
+		dyn_size = dump_size - rem;
+
+		perf_output_skip(handle, rem);
+
+		/* Dynamic size. */
+		perf_output_put(handle, dyn_size);
+	}
+}
+
 static void __perf_event_header__init_id(struct perf_event_header *header,
 					 struct perf_sample_data *data,
 					 struct perf_event *event)
@@ -4024,6 +4151,28 @@ void perf_output_sample(struct perf_output_handle *handle,
 			perf_output_put(handle, nr);
 		}
 	}
+
+	if (sample_type & PERF_SAMPLE_REGS_USER) {
+		u64 abi = data->regs_user.abi;
+
+		/*
+		 * If there are no regs to dump, notice it through
+		 * first u64 being zero (PERF_SAMPLE_REGS_ABI_NONE).
+		 */
+		perf_output_put(handle, abi);
+
+		if (abi) {
+			u64 mask = event->attr.sample_regs_user;
+			perf_output_sample_regs(handle,
+						data->regs_user.regs,
+						mask);
+		}
+	}
+
+	if (sample_type & PERF_SAMPLE_STACK_USER)
+		perf_output_sample_ustack(handle,
+					  data->stack_user_size,
+					  data->regs_user.regs);
 }
 
 void perf_prepare_sample(struct perf_event_header *header,
@@ -4075,6 +4224,49 @@ void perf_prepare_sample(struct perf_event_header *header,
 		}
 		header->size += size;
 	}
+
+	if (sample_type & PERF_SAMPLE_REGS_USER) {
+		/* regs dump ABI info */
+		int size = sizeof(u64);
+
+		perf_sample_regs_user(&data->regs_user, regs);
+
+		if (data->regs_user.regs) {
+			u64 mask = event->attr.sample_regs_user;
+			size += hweight64(mask) * sizeof(u64);
+		}
+
+		header->size += size;
+	}
+
+	if (sample_type & PERF_SAMPLE_STACK_USER) {
+		/*
+		 * Either we need PERF_SAMPLE_STACK_USER bit to be allways
+		 * processed as the last one or have additional check added
+		 * in case new sample type is added, because we could eat
+		 * up the rest of the sample size.
+		 */
+		struct perf_regs_user *uregs = &data->regs_user;
+		u16 stack_size = event->attr.sample_stack_user;
+		u16 size = sizeof(u64);
+
+		if (!uregs->abi)
+			perf_sample_regs_user(uregs, regs);
+
+		stack_size = perf_sample_ustack_size(stack_size, header->size,
+						     uregs->regs);
+
+		/*
+		 * If there is something to dump, add space for the dump
+		 * itself and for the field that tells the dynamic size,
+		 * which is how many have been actually dumped.
+		 */
+		if (stack_size)
+			size += sizeof(u64) + stack_size;
+
+		data->stack_user_size = stack_size;
+		header->size += size;
+	}
 }
 
 static void perf_event_output(struct perf_event *event,
 
@@ -6151,6 +6343,28 @@ static int perf_copy_attr(struct perf_event_attr __user *uattr,
 			attr->branch_sample_type = mask;
 		}
 	}
+
+	if (attr->sample_type & PERF_SAMPLE_REGS_USER) {
+		ret = perf_reg_validate(attr->sample_regs_user);
+		if (ret)
+			return ret;
+	}
+
+	if (attr->sample_type & PERF_SAMPLE_STACK_USER) {
+		if (!arch_perf_have_user_stack_dump())
+			return -ENOSYS;
+
+		/*
+		 * We have __u32 type for the size, but so far
+		 * we can only use __u16 as maximum due to the
+		 * __u16 sample size limit.
+		 */
+		if (attr->sample_stack_user >= USHRT_MAX)
+			ret = -EINVAL;
+		else if (!IS_ALIGNED(attr->sample_stack_user, sizeof(u64)))
+			ret = -EINVAL;
+	}
 
 out:
 	return ret;
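The sample_stack_user gate in perf_copy_attr() reduces to two conditions; a small sketch (USHRT_MAX cap for the u16 record budget, u64 alignment):

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>
#include <stdint.h>

/* Sketch of the sample_stack_user validation added above. */
static int validate_stack_user(uint32_t size)
{
	if (size >= USHRT_MAX)
		return -EINVAL;	/* must fit the __u16 sample size limit */
	if (size % sizeof(uint64_t))
		return -EINVAL;	/* !IS_ALIGNED(size, sizeof(u64)) */
	return 0;
}
```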
 
--- a/kernel/events/internal.h
+++ b/kernel/events/internal.h
@@ -2,6 +2,7 @@
 #define _KERNEL_EVENTS_INTERNAL_H
 
 #include <linux/hardirq.h>
+#include <linux/uaccess.h>
 
 /* Buffer handling */
 
@@ -76,30 +77,53 @@ static inline unsigned long perf_data_size(struct ring_buffer *rb)
 	return rb->nr_pages << (PAGE_SHIFT + page_order(rb));
 }
 
-static inline void
-__output_copy(struct perf_output_handle *handle,
-	      const void *buf, unsigned int len)
-{
-	do {
-		unsigned long size = min_t(unsigned long, handle->size, len);
-
-		memcpy(handle->addr, buf, size);
-
-		len -= size;
-		handle->addr += size;
-		buf += size;
-		handle->size -= size;
-		if (!handle->size) {
-			struct ring_buffer *rb = handle->rb;
-
-			handle->page++;
-			handle->page &= rb->nr_pages - 1;
-			handle->addr = rb->data_pages[handle->page];
-			handle->size = PAGE_SIZE << page_order(rb);
-		}
-	} while (len);
-}
+#define DEFINE_OUTPUT_COPY(func_name, memcpy_func)			\
+static inline unsigned int						\
+func_name(struct perf_output_handle *handle,				\
+	  const void *buf, unsigned int len)				\
+{									\
+	unsigned long size, written;					\
+									\
+	do {								\
+		size = min_t(unsigned long, handle->size, len);		\
+									\
+		written = memcpy_func(handle->addr, buf, size);		\
+									\
+		len -= written;						\
+		handle->addr += written;				\
+		buf += written;						\
+		handle->size -= written;				\
+		if (!handle->size) {					\
+			struct ring_buffer *rb = handle->rb;		\
+									\
+			handle->page++;					\
+			handle->page &= rb->nr_pages - 1;		\
+			handle->addr = rb->data_pages[handle->page];	\
+			handle->size = PAGE_SIZE << page_order(rb);	\
+		}							\
+	} while (len && written == size);				\
+									\
+	return len;							\
+}
+
+static inline int memcpy_common(void *dst, const void *src, size_t n)
+{
+	memcpy(dst, src, n);
+	return n;
+}
+
+DEFINE_OUTPUT_COPY(__output_copy, memcpy_common)
+
+#define MEMCPY_SKIP(dst, src, n) (n)
+
+DEFINE_OUTPUT_COPY(__output_skip, MEMCPY_SKIP)
+
+#ifndef arch_perf_out_copy_user
+#define arch_perf_out_copy_user __copy_from_user_inatomic
+#endif
+
+DEFINE_OUTPUT_COPY(__output_copy_user, arch_perf_out_copy_user)
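DEFINE_OUTPUT_COPY parameterizes the copy step with a memcpy-like callback that reports how much it actually moved, so a faulting user copy terminates the loop and the leftover length feeds the dyn_size field. A condensed user-space analogue of that pattern (ring-buffer paging omitted, function names hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef size_t (*copy_fn)(void *dst, const void *src, size_t n);

static size_t copy_all(void *dst, const void *src, size_t n)
{
	memcpy(dst, src, n);
	return n;
}

/* Models MEMCPY_SKIP / a user copy that faults immediately. */
static size_t copy_none(void *dst, const void *src, size_t n)
{
	(void)dst; (void)src; (void)n;
	return 0;
}

/* Returns the bytes left over, like the generated __output_copy_user. */
static size_t output_copy(copy_fn fn, void *dst, const void *src, size_t len)
{
	size_t written;

	do {
		written = fn(dst, src, len);
		dst = (char *)dst + written;
		src = (const char *)src + written;
		len -= written;
	} while (len && written);	/* a short copy stops the loop */

	return len;
}
```

The design point is that the same generated loop serves plain memcpy, skipping, and fault-prone user copies, differing only in the plugged-in callback.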
 
 /* Callchain handling */
 extern struct perf_callchain_entry *
 perf_callchain(struct perf_event *event, struct pt_regs *regs);
@@ -134,4 +158,20 @@ static inline void put_recursion_context(int *recursion, int rctx)
 	recursion[rctx]--;
 }
 
+#ifdef CONFIG_HAVE_PERF_USER_STACK_DUMP
+static inline bool arch_perf_have_user_stack_dump(void)
+{
+	return true;
+}
+
+#define perf_user_stack_pointer(regs) user_stack_pointer(regs)
+#else
+static inline bool arch_perf_have_user_stack_dump(void)
+{
+	return false;
+}
+
+#define perf_user_stack_pointer(regs) 0
+#endif /* CONFIG_HAVE_PERF_USER_STACK_DUMP */
+
 #endif /* _KERNEL_EVENTS_INTERNAL_H */
 
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -182,10 +182,16 @@ out:
 	return -ENOSPC;
 }
 
-void perf_output_copy(struct perf_output_handle *handle,
+unsigned int perf_output_copy(struct perf_output_handle *handle,
 		      const void *buf, unsigned int len)
 {
-	__output_copy(handle, buf, len);
+	return __output_copy(handle, buf, len);
+}
+
+unsigned int perf_output_skip(struct perf_output_handle *handle,
+			      unsigned int len)
+{
+	return __output_skip(handle, NULL, len);
 }
 
 void perf_output_end(struct perf_output_handle *handle)
|
||||
|
@@ -280,12 +280,10 @@ static int read_opcode(struct mm_struct *mm, unsigned long vaddr, uprobe_opcode_
	if (ret <= 0)
		return ret;

	lock_page(page);
	vaddr_new = kmap_atomic(page);
	vaddr &= ~PAGE_MASK;
	memcpy(opcode, vaddr_new + vaddr, UPROBE_SWBP_INSN_SIZE);
	kunmap_atomic(vaddr_new);
	unlock_page(page);

	put_page(page);
@@ -334,7 +332,7 @@ int __weak set_swbp(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned
	 */
	result = is_swbp_at_addr(mm, vaddr);
	if (result == 1)
		return -EEXIST;
		return 0;

	if (result)
		return result;
@@ -347,24 +345,22 @@ int __weak set_swbp(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned
 * @mm: the probed process address space.
 * @auprobe: arch specific probepoint information.
 * @vaddr: the virtual address to insert the opcode.
 * @verify: if true, verify existance of breakpoint instruction.
 *
 * For mm @mm, restore the original opcode (opcode) at @vaddr.
 * Return 0 (success) or a negative errno.
 */
int __weak
set_orig_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr, bool verify)
set_orig_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr)
{
	if (verify) {
		int result;
	int result;

		result = is_swbp_at_addr(mm, vaddr);
		if (!result)
			return -EINVAL;
	result = is_swbp_at_addr(mm, vaddr);
	if (!result)
		return -EINVAL;

		if (result != 1)
			return result;

	if (result != 1)
		return result;
	}
	return write_opcode(auprobe, mm, vaddr, *(uprobe_opcode_t *)auprobe->insn);
}
@@ -415,11 +411,10 @@ static struct uprobe *__find_uprobe(struct inode *inode, loff_t offset)
static struct uprobe *find_uprobe(struct inode *inode, loff_t offset)
{
	struct uprobe *uprobe;
	unsigned long flags;

	spin_lock_irqsave(&uprobes_treelock, flags);
	spin_lock(&uprobes_treelock);
	uprobe = __find_uprobe(inode, offset);
	spin_unlock_irqrestore(&uprobes_treelock, flags);
	spin_unlock(&uprobes_treelock);

	return uprobe;
}
@@ -466,12 +461,11 @@ static struct uprobe *__insert_uprobe(struct uprobe *uprobe)
 */
static struct uprobe *insert_uprobe(struct uprobe *uprobe)
{
	unsigned long flags;
	struct uprobe *u;

	spin_lock_irqsave(&uprobes_treelock, flags);
	spin_lock(&uprobes_treelock);
	u = __insert_uprobe(uprobe);
	spin_unlock_irqrestore(&uprobes_treelock, flags);
	spin_unlock(&uprobes_treelock);

	/* For now assume that the instruction need not be single-stepped */
	uprobe->flags |= UPROBE_SKIP_SSTEP;
@@ -649,6 +643,7 @@ static int
install_breakpoint(struct uprobe *uprobe, struct mm_struct *mm,
			struct vm_area_struct *vma, unsigned long vaddr)
{
	bool first_uprobe;
	int ret;

	/*
@@ -659,7 +654,7 @@ install_breakpoint(struct uprobe *uprobe, struct mm_struct *mm,
	 * Hence behave as if probe already existed.
	 */
	if (!uprobe->consumers)
		return -EEXIST;
		return 0;

	if (!(uprobe->flags & UPROBE_COPY_INSN)) {
		ret = copy_insn(uprobe, vma->vm_file);
@@ -681,17 +676,18 @@ install_breakpoint(struct uprobe *uprobe, struct mm_struct *mm,
	}

	/*
	 * Ideally, should be updating the probe count after the breakpoint
	 * has been successfully inserted. However a thread could hit the
	 * breakpoint we just inserted even before the probe count is
	 * incremented. If this is the first breakpoint placed, breakpoint
	 * notifier might ignore uprobes and pass the trap to the thread.
	 * Hence increment before and decrement on failure.
	 * set MMF_HAS_UPROBES in advance for uprobe_pre_sstep_notifier(),
	 * the task can hit this breakpoint right after __replace_page().
	 */
	atomic_inc(&mm->uprobes_state.count);
	first_uprobe = !test_bit(MMF_HAS_UPROBES, &mm->flags);
	if (first_uprobe)
		set_bit(MMF_HAS_UPROBES, &mm->flags);

	ret = set_swbp(&uprobe->arch, mm, vaddr);
	if (ret)
		atomic_dec(&mm->uprobes_state.count);
	if (!ret)
		clear_bit(MMF_RECALC_UPROBES, &mm->flags);
	else if (first_uprobe)
		clear_bit(MMF_HAS_UPROBES, &mm->flags);

	return ret;
}
@@ -699,8 +695,12 @@ install_breakpoint(struct uprobe *uprobe, struct mm_struct *mm,
static void
remove_breakpoint(struct uprobe *uprobe, struct mm_struct *mm, unsigned long vaddr)
{
	if (!set_orig_insn(&uprobe->arch, mm, vaddr, true))
		atomic_dec(&mm->uprobes_state.count);
	/* can happen if uprobe_register() fails */
	if (!test_bit(MMF_HAS_UPROBES, &mm->flags))
		return;

	set_bit(MMF_RECALC_UPROBES, &mm->flags);
	set_orig_insn(&uprobe->arch, mm, vaddr);
}

/*
@@ -710,11 +710,9 @@ remove_breakpoint(struct uprobe *uprobe, struct mm_struct *mm, unsigned long vad
 */
static void delete_uprobe(struct uprobe *uprobe)
{
	unsigned long flags;

	spin_lock_irqsave(&uprobes_treelock, flags);
	spin_lock(&uprobes_treelock);
	rb_erase(&uprobe->rb_node, &uprobes_tree);
	spin_unlock_irqrestore(&uprobes_treelock, flags);
	spin_unlock(&uprobes_treelock);
	iput(uprobe->inode);
	put_uprobe(uprobe);
	atomic_dec(&uprobe_events);
@@ -831,17 +829,11 @@ static int register_for_each_vma(struct uprobe *uprobe, bool is_register)
		    vaddr_to_offset(vma, info->vaddr) != uprobe->offset)
			goto unlock;

		if (is_register) {
		if (is_register)
			err = install_breakpoint(uprobe, mm, vma, info->vaddr);
			/*
			 * We can race against uprobe_mmap(), see the
			 * comment near uprobe_hash().
			 */
			if (err == -EEXIST)
				err = 0;
		} else {
		else
			remove_breakpoint(uprobe, mm, info->vaddr);
		}

 unlock:
		up_write(&mm->mmap_sem);
 free:
@@ -908,7 +900,8 @@ int uprobe_register(struct inode *inode, loff_t offset, struct uprobe_consumer *
	}

	mutex_unlock(uprobes_hash(inode));
	put_uprobe(uprobe);
	if (uprobe)
		put_uprobe(uprobe);

	return ret;
}
@@ -978,7 +971,6 @@ static void build_probe_list(struct inode *inode,
				struct list_head *head)
{
	loff_t min, max;
	unsigned long flags;
	struct rb_node *n, *t;
	struct uprobe *u;

@@ -986,7 +978,7 @@ static void build_probe_list(struct inode *inode,
	min = vaddr_to_offset(vma, start);
	max = min + (end - start) - 1;

	spin_lock_irqsave(&uprobes_treelock, flags);
	spin_lock(&uprobes_treelock);
	n = find_node_in_range(inode, min, max);
	if (n) {
		for (t = n; t; t = rb_prev(t)) {
@@ -1004,27 +996,20 @@ static void build_probe_list(struct inode *inode,
			atomic_inc(&u->ref);
		}
	}
	spin_unlock_irqrestore(&uprobes_treelock, flags);
	spin_unlock(&uprobes_treelock);
}

/*
 * Called from mmap_region.
 * called with mm->mmap_sem acquired.
 * Called from mmap_region/vma_adjust with mm->mmap_sem acquired.
 *
 * Return -ve no if we fail to insert probes and we cannot
 * bail-out.
 * Return 0 otherwise. i.e:
 *
 *	- successful insertion of probes
 *	- (or) no possible probes to be inserted.
 *	- (or) insertion of probes failed but we can bail-out.
 * Currently we ignore all errors and always return 0, the callers
 * can't handle the failure anyway.
 */
int uprobe_mmap(struct vm_area_struct *vma)
{
	struct list_head tmp_list;
	struct uprobe *uprobe, *u;
	struct inode *inode;
	int ret, count;

	if (!atomic_read(&uprobe_events) || !valid_vma(vma, true))
		return 0;
@@ -1036,44 +1021,35 @@ int uprobe_mmap(struct vm_area_struct *vma)
	mutex_lock(uprobes_mmap_hash(inode));
	build_probe_list(inode, vma, vma->vm_start, vma->vm_end, &tmp_list);

	ret = 0;
	count = 0;

	list_for_each_entry_safe(uprobe, u, &tmp_list, pending_list) {
		if (!ret) {
		if (!fatal_signal_pending(current)) {
			unsigned long vaddr = offset_to_vaddr(vma, uprobe->offset);

			ret = install_breakpoint(uprobe, vma->vm_mm, vma, vaddr);
			/*
			 * We can race against uprobe_register(), see the
			 * comment near uprobe_hash().
			 */
			if (ret == -EEXIST) {
				ret = 0;

				if (!is_swbp_at_addr(vma->vm_mm, vaddr))
					continue;

				/*
				 * Unable to insert a breakpoint, but
				 * breakpoint lies underneath. Increment the
				 * probe count.
				 */
				atomic_inc(&vma->vm_mm->uprobes_state.count);
			}

			if (!ret)
				count++;
			install_breakpoint(uprobe, vma->vm_mm, vma, vaddr);
		}
		put_uprobe(uprobe);
	}

	mutex_unlock(uprobes_mmap_hash(inode));

	if (ret)
		atomic_sub(count, &vma->vm_mm->uprobes_state.count);
	return 0;
}

	return ret;
static bool
vma_has_uprobes(struct vm_area_struct *vma, unsigned long start, unsigned long end)
{
	loff_t min, max;
	struct inode *inode;
	struct rb_node *n;

	inode = vma->vm_file->f_mapping->host;

	min = vaddr_to_offset(vma, start);
	max = min + (end - start) - 1;

	spin_lock(&uprobes_treelock);
	n = find_node_in_range(inode, min, max);
	spin_unlock(&uprobes_treelock);

	return !!n;
}

/*
@@ -1081,37 +1057,18 @@ int uprobe_mmap(struct vm_area_struct *vma)
 */
void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end)
{
	struct list_head tmp_list;
	struct uprobe *uprobe, *u;
	struct inode *inode;

	if (!atomic_read(&uprobe_events) || !valid_vma(vma, false))
		return;

	if (!atomic_read(&vma->vm_mm->mm_users)) /* called by mmput() ? */
		return;

	if (!atomic_read(&vma->vm_mm->uprobes_state.count))
	if (!test_bit(MMF_HAS_UPROBES, &vma->vm_mm->flags) ||
	     test_bit(MMF_RECALC_UPROBES, &vma->vm_mm->flags))
		return;

	inode = vma->vm_file->f_mapping->host;
	if (!inode)
		return;

	mutex_lock(uprobes_mmap_hash(inode));
	build_probe_list(inode, vma, start, end, &tmp_list);

	list_for_each_entry_safe(uprobe, u, &tmp_list, pending_list) {
		unsigned long vaddr = offset_to_vaddr(vma, uprobe->offset);
		/*
		 * An unregister could have removed the probe before
		 * unmap. So check before we decrement the count.
		 */
		if (is_swbp_at_addr(vma->vm_mm, vaddr) == 1)
			atomic_dec(&vma->vm_mm->uprobes_state.count);
		put_uprobe(uprobe);
	}
	mutex_unlock(uprobes_mmap_hash(inode));
	if (vma_has_uprobes(vma, start, end))
		set_bit(MMF_RECALC_UPROBES, &vma->vm_mm->flags);
}
/* Slot allocation for XOL */
@@ -1213,13 +1170,15 @@ void uprobe_clear_state(struct mm_struct *mm)
	kfree(area);
}

/*
 * uprobe_reset_state - Free the area allocated for slots.
 */
void uprobe_reset_state(struct mm_struct *mm)
void uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm)
{
	mm->uprobes_state.xol_area = NULL;
	atomic_set(&mm->uprobes_state.count, 0);
	newmm->uprobes_state.xol_area = NULL;

	if (test_bit(MMF_HAS_UPROBES, &oldmm->flags)) {
		set_bit(MMF_HAS_UPROBES, &newmm->flags);
		/* unconditionally, dup_mmap() skips VM_DONTCOPY vmas */
		set_bit(MMF_RECALC_UPROBES, &newmm->flags);
	}
}

/*
@@ -1437,6 +1396,25 @@ static bool can_skip_sstep(struct uprobe *uprobe, struct pt_regs *regs)
	return false;
}

static void mmf_recalc_uprobes(struct mm_struct *mm)
{
	struct vm_area_struct *vma;

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		if (!valid_vma(vma, false))
			continue;
		/*
		 * This is not strictly accurate, we can race with
		 * uprobe_unregister() and see the already removed
		 * uprobe if delete_uprobe() was not yet called.
		 */
		if (vma_has_uprobes(vma, vma->vm_start, vma->vm_end))
			return;
	}

	clear_bit(MMF_HAS_UPROBES, &mm->flags);
}

static struct uprobe *find_active_uprobe(unsigned long bp_vaddr, int *is_swbp)
{
	struct mm_struct *mm = current->mm;
@@ -1458,11 +1436,24 @@ static struct uprobe *find_active_uprobe(unsigned long bp_vaddr, int *is_swbp)
	} else {
		*is_swbp = -EFAULT;
	}

	if (!uprobe && test_and_clear_bit(MMF_RECALC_UPROBES, &mm->flags))
		mmf_recalc_uprobes(mm);
	up_read(&mm->mmap_sem);

	return uprobe;
}

void __weak arch_uprobe_enable_step(struct arch_uprobe *arch)
{
	user_enable_single_step(current);
}

void __weak arch_uprobe_disable_step(struct arch_uprobe *arch)
{
	user_disable_single_step(current);
}

/*
 * Run handler and ask thread to singlestep.
 * Ensure all non-fatal signals cannot interrupt thread while it singlesteps.
@@ -1509,7 +1500,7 @@ static void handle_swbp(struct pt_regs *regs)

	utask->state = UTASK_SSTEP;
	if (!pre_ssout(uprobe, regs, bp_vaddr)) {
		user_enable_single_step(current);
		arch_uprobe_enable_step(&uprobe->arch);
		return;
	}

@@ -1518,17 +1509,15 @@ cleanup_ret:
		utask->active_uprobe = NULL;
		utask->state = UTASK_RUNNING;
	}
	if (uprobe) {
		if (!(uprobe->flags & UPROBE_SKIP_SSTEP))
	if (!(uprobe->flags & UPROBE_SKIP_SSTEP))

			/*
			 * cannot singlestep; cannot skip instruction;
			 * re-execute the instruction.
			 */
			instruction_pointer_set(regs, bp_vaddr);
		/*
		 * cannot singlestep; cannot skip instruction;
		 * re-execute the instruction.
		 */
		instruction_pointer_set(regs, bp_vaddr);

		put_uprobe(uprobe);
	}
	put_uprobe(uprobe);
}

/*
@@ -1547,10 +1536,10 @@ static void handle_singlestep(struct uprobe_task *utask, struct pt_regs *regs)
	else
		WARN_ON_ONCE(1);

	arch_uprobe_disable_step(&uprobe->arch);
	put_uprobe(uprobe);
	utask->active_uprobe = NULL;
	utask->state = UTASK_RUNNING;
	user_disable_single_step(current);
	xol_free_insn_slot(current);

	spin_lock_irq(&current->sighand->siglock);
@@ -1589,8 +1578,7 @@ int uprobe_pre_sstep_notifier(struct pt_regs *regs)
{
	struct uprobe_task *utask;

	if (!current->mm || !atomic_read(&current->mm->uprobes_state.count))
	/* task is currently not uprobed */
	if (!current->mm || !test_bit(MMF_HAS_UPROBES, &current->mm->flags))
		return 0;

	utask = current->utask;

@@ -353,6 +353,7 @@ static int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)

	down_write(&oldmm->mmap_sem);
	flush_cache_dup_mm(oldmm);
	uprobe_dup_mmap(oldmm, mm);
	/*
	 * Not linked in yet - no deadlock potential:
	 */
@@ -454,9 +455,6 @@ static int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)

		if (retval)
			goto out;

		if (file)
			uprobe_mmap(tmp);
	}
	/* a new mm has just been created */
	arch_dup_mmap(oldmm, mm);
@@ -839,8 +837,6 @@ struct mm_struct *dup_mm(struct task_struct *tsk)
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	mm->pmd_huge_pte = NULL;
#endif
	uprobe_reset_state(mm);

	if (!mm_init(mm, tsk))
		goto fail_nomem;

kernel/kprobes.c
@@ -561,9 +561,9 @@ static __kprobes void kprobe_optimizer(struct work_struct *work)
{
	LIST_HEAD(free_list);

	mutex_lock(&kprobe_mutex);
	/* Lock modules while optimizing kprobes */
	mutex_lock(&module_mutex);
	mutex_lock(&kprobe_mutex);

	/*
	 * Step 1: Unoptimize kprobes and collect cleaned (unused and disarmed)
@@ -586,8 +586,8 @@ static __kprobes void kprobe_optimizer(struct work_struct *work)
	/* Step 4: Free cleaned kprobes after quiesence period */
	do_free_cleaned_kprobes(&free_list);

	mutex_unlock(&kprobe_mutex);
	mutex_unlock(&module_mutex);
	mutex_unlock(&kprobe_mutex);

	/* Step 5: Kick optimizer again if needed */
	if (!list_empty(&optimizing_list) || !list_empty(&unoptimizing_list))
@@ -759,20 +759,32 @@ static __kprobes void try_to_optimize_kprobe(struct kprobe *p)
	struct kprobe *ap;
	struct optimized_kprobe *op;

	/* Impossible to optimize ftrace-based kprobe */
	if (kprobe_ftrace(p))
		return;

	/* For preparing optimization, jump_label_text_reserved() is called */
	jump_label_lock();
	mutex_lock(&text_mutex);

	ap = alloc_aggr_kprobe(p);
	if (!ap)
		return;
		goto out;

	op = container_of(ap, struct optimized_kprobe, kp);
	if (!arch_prepared_optinsn(&op->optinsn)) {
		/* If failed to setup optimizing, fallback to kprobe */
		arch_remove_optimized_kprobe(op);
		kfree(op);
		return;
		goto out;
	}

	init_aggr_kprobe(ap, p);
	optimize_kprobe(ap);
	optimize_kprobe(ap);	/* This just kicks optimizer thread */

out:
	mutex_unlock(&text_mutex);
	jump_label_unlock();
}

#ifdef CONFIG_SYSCTL
@@ -907,9 +919,64 @@ static __kprobes struct kprobe *alloc_aggr_kprobe(struct kprobe *p)
}
#endif /* CONFIG_OPTPROBES */

#ifdef KPROBES_CAN_USE_FTRACE
static struct ftrace_ops kprobe_ftrace_ops __read_mostly = {
	.func = kprobe_ftrace_handler,
	.flags = FTRACE_OPS_FL_SAVE_REGS,
};
static int kprobe_ftrace_enabled;

/* Must ensure p->addr is really on ftrace */
static int __kprobes prepare_kprobe(struct kprobe *p)
{
	if (!kprobe_ftrace(p))
		return arch_prepare_kprobe(p);

	return arch_prepare_kprobe_ftrace(p);
}

/* Caller must lock kprobe_mutex */
static void __kprobes arm_kprobe_ftrace(struct kprobe *p)
{
	int ret;

	ret = ftrace_set_filter_ip(&kprobe_ftrace_ops,
				   (unsigned long)p->addr, 0, 0);
	WARN(ret < 0, "Failed to arm kprobe-ftrace at %p (%d)\n", p->addr, ret);
	kprobe_ftrace_enabled++;
	if (kprobe_ftrace_enabled == 1) {
		ret = register_ftrace_function(&kprobe_ftrace_ops);
		WARN(ret < 0, "Failed to init kprobe-ftrace (%d)\n", ret);
	}
}

/* Caller must lock kprobe_mutex */
static void __kprobes disarm_kprobe_ftrace(struct kprobe *p)
{
	int ret;

	kprobe_ftrace_enabled--;
	if (kprobe_ftrace_enabled == 0) {
		ret = unregister_ftrace_function(&kprobe_ftrace_ops);
		WARN(ret < 0, "Failed to init kprobe-ftrace (%d)\n", ret);
	}
	ret = ftrace_set_filter_ip(&kprobe_ftrace_ops,
				   (unsigned long)p->addr, 1, 0);
	WARN(ret < 0, "Failed to disarm kprobe-ftrace at %p (%d)\n", p->addr, ret);
}
#else	/* !KPROBES_CAN_USE_FTRACE */
#define prepare_kprobe(p)	arch_prepare_kprobe(p)
#define arm_kprobe_ftrace(p)	do {} while (0)
#define disarm_kprobe_ftrace(p)	do {} while (0)
#endif

/* Arm a kprobe with text_mutex */
static void __kprobes arm_kprobe(struct kprobe *kp)
{
	if (unlikely(kprobe_ftrace(kp))) {
		arm_kprobe_ftrace(kp);
		return;
	}
	/*
	 * Here, since __arm_kprobe() doesn't use stop_machine(),
	 * this doesn't cause deadlock on text_mutex. So, we don't
@@ -921,11 +988,15 @@ static void __kprobes arm_kprobe(struct kprobe *kp)
}

/* Disarm a kprobe with text_mutex */
static void __kprobes disarm_kprobe(struct kprobe *kp)
static void __kprobes disarm_kprobe(struct kprobe *kp, bool reopt)
{
	if (unlikely(kprobe_ftrace(kp))) {
		disarm_kprobe_ftrace(kp);
		return;
	}
	/* Ditto */
	mutex_lock(&text_mutex);
	__disarm_kprobe(kp, true);
	__disarm_kprobe(kp, reopt);
	mutex_unlock(&text_mutex);
}

@@ -1144,12 +1215,6 @@ static int __kprobes add_new_kprobe(struct kprobe *ap, struct kprobe *p)
	if (p->post_handler && !ap->post_handler)
		ap->post_handler = aggr_post_handler;

	if (kprobe_disabled(ap) && !kprobe_disabled(p)) {
		ap->flags &= ~KPROBE_FLAG_DISABLED;
		if (!kprobes_all_disarmed)
			/* Arm the breakpoint again. */
			__arm_kprobe(ap);
	}
	return 0;
}

@@ -1189,11 +1254,22 @@ static int __kprobes register_aggr_kprobe(struct kprobe *orig_p,
	int ret = 0;
	struct kprobe *ap = orig_p;

	/* For preparing optimization, jump_label_text_reserved() is called */
	jump_label_lock();
	/*
	 * Get online CPUs to avoid text_mutex deadlock.with stop machine,
	 * which is invoked by unoptimize_kprobe() in add_new_kprobe()
	 */
	get_online_cpus();
	mutex_lock(&text_mutex);

	if (!kprobe_aggrprobe(orig_p)) {
		/* If orig_p is not an aggr_kprobe, create new aggr_kprobe. */
		ap = alloc_aggr_kprobe(orig_p);
		if (!ap)
			return -ENOMEM;
		if (!ap) {
			ret = -ENOMEM;
			goto out;
		}
		init_aggr_kprobe(ap, orig_p);
	} else if (kprobe_unused(ap))
		/* This probe is going to die. Rescue it */
@@ -1213,7 +1289,7 @@ static int __kprobes register_aggr_kprobe(struct kprobe *orig_p,
			 * free aggr_probe. It will be used next time, or
			 * freed by unregister_kprobe.
			 */
			return ret;
			goto out;

		/* Prepare optimized instructions if possible. */
		prepare_optimized_kprobe(ap);
@@ -1228,7 +1304,20 @@ static int __kprobes register_aggr_kprobe(struct kprobe *orig_p,

	/* Copy ap's insn slot to p */
	copy_kprobe(ap, p);
	return add_new_kprobe(ap, p);
	ret = add_new_kprobe(ap, p);

out:
	mutex_unlock(&text_mutex);
	put_online_cpus();
	jump_label_unlock();

	if (ret == 0 && kprobe_disabled(ap) && !kprobe_disabled(p)) {
		ap->flags &= ~KPROBE_FLAG_DISABLED;
		if (!kprobes_all_disarmed)
			/* Arm the breakpoint again. */
			arm_kprobe(ap);
	}
	return ret;
}

static int __kprobes in_kprobes_functions(unsigned long addr)
@@ -1313,13 +1402,77 @@ static inline int check_kprobe_rereg(struct kprobe *p)
	return ret;
}

int __kprobes register_kprobe(struct kprobe *p)
static __kprobes int check_kprobe_address_safe(struct kprobe *p,
					       struct module **probed_mod)
{
	int ret = 0;
	unsigned long ftrace_addr;

	/*
	 * If the address is located on a ftrace nop, set the
	 * breakpoint to the following instruction.
	 */
	ftrace_addr = ftrace_location((unsigned long)p->addr);
	if (ftrace_addr) {
#ifdef KPROBES_CAN_USE_FTRACE
		/* Given address is not on the instruction boundary */
		if ((unsigned long)p->addr != ftrace_addr)
			return -EILSEQ;
		p->flags |= KPROBE_FLAG_FTRACE;
#else	/* !KPROBES_CAN_USE_FTRACE */
		return -EINVAL;
#endif
	}

	jump_label_lock();
	preempt_disable();

	/* Ensure it is not in reserved area nor out of text */
	if (!kernel_text_address((unsigned long) p->addr) ||
	    in_kprobes_functions((unsigned long) p->addr) ||
	    jump_label_text_reserved(p->addr, p->addr)) {
		ret = -EINVAL;
		goto out;
	}

	/* Check if are we probing a module */
	*probed_mod = __module_text_address((unsigned long) p->addr);
	if (*probed_mod) {
		/*
		 * We must hold a refcount of the probed module while updating
		 * its code to prohibit unexpected unloading.
		 */
		if (unlikely(!try_module_get(*probed_mod))) {
			ret = -ENOENT;
			goto out;
		}

		/*
		 * If the module freed .init.text, we couldn't insert
		 * kprobes in there.
		 */
		if (within_module_init((unsigned long)p->addr, *probed_mod) &&
		    (*probed_mod)->state != MODULE_STATE_COMING) {
			module_put(*probed_mod);
			*probed_mod = NULL;
			ret = -ENOENT;
		}
	}
out:
	preempt_enable();
	jump_label_unlock();

	return ret;
}

int __kprobes register_kprobe(struct kprobe *p)
{
	int ret;
	struct kprobe *old_p;
	struct module *probed_mod;
	kprobe_opcode_t *addr;

	/* Adjust probe address from symbol */
	addr = kprobe_addr(p);
	if (IS_ERR(addr))
		return PTR_ERR(addr);
@@ -1329,56 +1482,17 @@ int __kprobes register_kprobe(struct kprobe *p)
	if (ret)
		return ret;

	jump_label_lock();
	preempt_disable();
	if (!kernel_text_address((unsigned long) p->addr) ||
	    in_kprobes_functions((unsigned long) p->addr) ||
	    ftrace_text_reserved(p->addr, p->addr) ||
	    jump_label_text_reserved(p->addr, p->addr)) {
		ret = -EINVAL;
		goto cannot_probe;
	}

	/* User can pass only KPROBE_FLAG_DISABLED to register_kprobe */
	p->flags &= KPROBE_FLAG_DISABLED;

	/*
	 * Check if are we probing a module.
	 */
	probed_mod = __module_text_address((unsigned long) p->addr);
	if (probed_mod) {
		/* Return -ENOENT if fail. */
		ret = -ENOENT;
		/*
		 * We must hold a refcount of the probed module while updating
		 * its code to prohibit unexpected unloading.
		 */
		if (unlikely(!try_module_get(probed_mod)))
			goto cannot_probe;

		/*
		 * If the module freed .init.text, we couldn't insert
		 * kprobes in there.
		 */
		if (within_module_init((unsigned long)p->addr, probed_mod) &&
		    probed_mod->state != MODULE_STATE_COMING) {
			module_put(probed_mod);
			goto cannot_probe;
		}
		/* ret will be updated by following code */
	}
	preempt_enable();
	jump_label_unlock();

	p->nmissed = 0;
	INIT_LIST_HEAD(&p->list);

	ret = check_kprobe_address_safe(p, &probed_mod);
	if (ret)
		return ret;

	mutex_lock(&kprobe_mutex);

	jump_label_lock(); /* needed to call jump_label_text_reserved() */

	get_online_cpus();	/* For avoiding text_mutex deadlock. */
	mutex_lock(&text_mutex);

	old_p = get_kprobe(p->addr);
	if (old_p) {
		/* Since this may unoptimize old_p, locking text_mutex. */
@@ -1386,7 +1500,9 @@ int __kprobes register_kprobe(struct kprobe *p)
		goto out;
	}

	ret = arch_prepare_kprobe(p);
	mutex_lock(&text_mutex);	/* Avoiding text modification */
	ret = prepare_kprobe(p);
	mutex_unlock(&text_mutex);
	if (ret)
		goto out;

@@ -1395,26 +1511,18 @@ int __kprobes register_kprobe(struct kprobe *p)
		       &kprobe_table[hash_ptr(p->addr, KPROBE_HASH_BITS)]);

	if (!kprobes_all_disarmed && !kprobe_disabled(p))
		__arm_kprobe(p);
		arm_kprobe(p);

	/* Try to optimize kprobe */
	try_to_optimize_kprobe(p);

out:
	mutex_unlock(&text_mutex);
	put_online_cpus();
	jump_label_unlock();
	mutex_unlock(&kprobe_mutex);

	if (probed_mod)
		module_put(probed_mod);

	return ret;

cannot_probe:
	preempt_enable();
	jump_label_unlock();
	return ret;
}
EXPORT_SYMBOL_GPL(register_kprobe);

@@ -1451,7 +1559,7 @@ static struct kprobe *__kprobes __disable_kprobe(struct kprobe *p)

		/* Try to disarm and disable this/parent probe */
		if (p == orig_p || aggr_kprobe_disabled(orig_p)) {
			disarm_kprobe(orig_p);
			disarm_kprobe(orig_p, true);
			orig_p->flags |= KPROBE_FLAG_DISABLED;
		}
	}
@@ -2049,10 +2157,11 @@ static void __kprobes report_probe(struct seq_file *pi, struct kprobe *p,

	if (!pp)
		pp = p;
	seq_printf(pi, "%s%s%s\n",
	seq_printf(pi, "%s%s%s%s\n",
		   (kprobe_gone(p) ? "[GONE]" : ""),
		   ((kprobe_disabled(p) && !kprobe_gone(p)) ?  "[DISABLED]" : ""),
		   (kprobe_optimized(pp) ? "[OPTIMIZED]" : ""));
		   (kprobe_optimized(pp) ? "[OPTIMIZED]" : ""),
		   (kprobe_ftrace(pp) ? "[FTRACE]" : ""));
}

static void __kprobes *kprobe_seq_start(struct seq_file *f, loff_t *pos)
@@ -2131,14 +2240,12 @@ static void __kprobes arm_all_kprobes(void)
		goto already_enabled;

	/* Arming kprobes doesn't optimize kprobe itself */
	mutex_lock(&text_mutex);
	for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
		head = &kprobe_table[i];
		hlist_for_each_entry_rcu(p, node, head, hlist)
			if (!kprobe_disabled(p))
				__arm_kprobe(p);
				arm_kprobe(p);
	}
	mutex_unlock(&text_mutex);

	kprobes_all_disarmed = false;
	printk(KERN_INFO "Kprobes globally enabled\n");
@@ -2166,15 +2273,13 @@ static void __kprobes disarm_all_kprobes(void)
	kprobes_all_disarmed = true;
	printk(KERN_INFO "Kprobes globally disabled\n");

	mutex_lock(&text_mutex);
	for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
		head = &kprobe_table[i];
		hlist_for_each_entry_rcu(p, node, head, hlist) {
			if (!arch_trampoline_kprobe(p) && !kprobe_disabled(p))
				__disarm_kprobe(p, false);
				disarm_kprobe(p, false);
		}
	}
	mutex_unlock(&text_mutex);
	mutex_unlock(&kprobe_mutex);

	/* Wait for disarming all kprobes by optimizer */
@@ -49,6 +49,11 @@ config HAVE_SYSCALL_TRACEPOINTS
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_FENTRY
	bool
	help
	  Arch supports the gcc options -pg with -mfentry

config HAVE_C_RECORDMCOUNT
	bool
	help
@@ -57,8 +62,12 @@ config HAVE_C_RECORDMCOUNT
config TRACER_MAX_TRACE
	bool

config TRACE_CLOCK
	bool

config RING_BUFFER
	bool
	select TRACE_CLOCK

config FTRACE_NMI_ENTER
	bool
@@ -109,6 +118,7 @@ config TRACING
	select NOP_TRACER
	select BINARY_PRINTF
	select EVENT_TRACING
	select TRACE_CLOCK

config GENERIC_TRACER
	bool

@@ -5,10 +5,12 @@ ifdef CONFIG_FUNCTION_TRACER
ORIG_CFLAGS := $(KBUILD_CFLAGS)
KBUILD_CFLAGS = $(subst -pg,,$(ORIG_CFLAGS))

ifdef CONFIG_FTRACE_SELFTEST
# selftest needs instrumentation
CFLAGS_trace_selftest_dynamic.o = -pg
obj-y += trace_selftest_dynamic.o
endif
endif

# If unlikely tracing is enabled, do not trace these files
ifdef CONFIG_TRACING_BRANCHES
@@ -17,11 +19,7 @@ endif

CFLAGS_trace_events_filter.o := -I$(src)

#
# Make the trace clocks available generally: it's infrastructure
# relied on by ptrace for example:
#
obj-y += trace_clock.o
obj-$(CONFIG_TRACE_CLOCK) += trace_clock.o

obj-$(CONFIG_FUNCTION_TRACER) += libftrace.o
obj-$(CONFIG_RING_BUFFER) += ring_buffer.o
@@ -64,12 +64,20 @@

#define FL_GLOBAL_CONTROL_MASK (FTRACE_OPS_FL_GLOBAL | FTRACE_OPS_FL_CONTROL)

static struct ftrace_ops ftrace_list_end __read_mostly = {
	.func		= ftrace_stub,
	.flags		= FTRACE_OPS_FL_RECURSION_SAFE,
};

/* ftrace_enabled is a method to turn ftrace on or off */
int ftrace_enabled __read_mostly;
static int last_ftrace_enabled;

/* Quick disabling of function tracer. */
int function_trace_stop;
int function_trace_stop __read_mostly;

/* Current function tracing op */
struct ftrace_ops *function_trace_op __read_mostly = &ftrace_list_end;

/* List for set_ftrace_pid's pids. */
LIST_HEAD(ftrace_pids);
@@ -86,22 +94,43 @@ static int ftrace_disabled __read_mostly;

static DEFINE_MUTEX(ftrace_lock);

static struct ftrace_ops ftrace_list_end __read_mostly = {
	.func = ftrace_stub,
};

static struct ftrace_ops *ftrace_global_list __read_mostly = &ftrace_list_end;
static struct ftrace_ops *ftrace_control_list __read_mostly = &ftrace_list_end;
static struct ftrace_ops *ftrace_ops_list __read_mostly = &ftrace_list_end;
ftrace_func_t ftrace_trace_function __read_mostly = ftrace_stub;
static ftrace_func_t __ftrace_trace_function_delay __read_mostly = ftrace_stub;
ftrace_func_t __ftrace_trace_function __read_mostly = ftrace_stub;
ftrace_func_t ftrace_pid_function __read_mostly = ftrace_stub;
static struct ftrace_ops global_ops;
static struct ftrace_ops control_ops;

static void
ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip);
#if ARCH_SUPPORTS_FTRACE_OPS
static void ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
				 struct ftrace_ops *op, struct pt_regs *regs);
#else
/* See comment below, where ftrace_ops_list_func is defined */
static void ftrace_ops_no_ops(unsigned long ip, unsigned long parent_ip);
#define ftrace_ops_list_func ((ftrace_func_t)ftrace_ops_no_ops)
#endif

/**
 * ftrace_nr_registered_ops - return number of ops registered
 *
 * Returns the number of ftrace_ops registered and tracing functions
 */
int ftrace_nr_registered_ops(void)
{
	struct ftrace_ops *ops;
	int cnt = 0;

	mutex_lock(&ftrace_lock);

	for (ops = ftrace_ops_list;
	     ops != &ftrace_list_end; ops = ops->next)
		cnt++;

	mutex_unlock(&ftrace_lock);

	return cnt;
}

/*
 * Traverse the ftrace_global_list, invoking all entries. The reason that we
@@ -112,29 +141,29 @@ ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip);
 *
 * Silly Alpha and silly pointer-speculation compiler optimizations!
 */
static void ftrace_global_list_func(unsigned long ip,
				    unsigned long parent_ip)
static void
ftrace_global_list_func(unsigned long ip, unsigned long parent_ip,
			struct ftrace_ops *op, struct pt_regs *regs)
{
	struct ftrace_ops *op;

	if (unlikely(trace_recursion_test(TRACE_GLOBAL_BIT)))
		return;

	trace_recursion_set(TRACE_GLOBAL_BIT);
	op = rcu_dereference_raw(ftrace_global_list); /*see above*/
	while (op != &ftrace_list_end) {
		op->func(ip, parent_ip);
		op->func(ip, parent_ip, op, regs);
		op = rcu_dereference_raw(op->next); /*see above*/
	};
	trace_recursion_clear(TRACE_GLOBAL_BIT);
}

static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip)
static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip,
			    struct ftrace_ops *op, struct pt_regs *regs)
{
	if (!test_tsk_trace_trace(current))
		return;

	ftrace_pid_function(ip, parent_ip);
	ftrace_pid_function(ip, parent_ip, op, regs);
}

static void set_ftrace_pid_function(ftrace_func_t func)
@@ -153,25 +182,9 @@ static void set_ftrace_pid_function(ftrace_func_t func)
void clear_ftrace_function(void)
{
	ftrace_trace_function = ftrace_stub;
	__ftrace_trace_function = ftrace_stub;
	__ftrace_trace_function_delay = ftrace_stub;
	ftrace_pid_function = ftrace_stub;
}

#ifndef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
/*
 * For those archs that do not test ftrace_trace_stop in their
 * mcount call site, we need to do it from C.
 */
static void ftrace_test_stop_func(unsigned long ip, unsigned long parent_ip)
{
	if (function_trace_stop)
		return;

	__ftrace_trace_function(ip, parent_ip);
}
#endif

static void control_ops_disable_all(struct ftrace_ops *ops)
{
	int cpu;
@@ -230,28 +243,27 @@ static void update_ftrace_function(void)

	/*
	 * If we are at the end of the list and this ops is
	 * not dynamic, then have the mcount trampoline call
	 * the function directly
	 * recursion safe and not dynamic and the arch supports passing ops,
	 * then have the mcount trampoline call the function directly.
	 */
	if (ftrace_ops_list == &ftrace_list_end ||
	    (ftrace_ops_list->next == &ftrace_list_end &&
	     !(ftrace_ops_list->flags & FTRACE_OPS_FL_DYNAMIC)))
	     !(ftrace_ops_list->flags & FTRACE_OPS_FL_DYNAMIC) &&
	     (ftrace_ops_list->flags & FTRACE_OPS_FL_RECURSION_SAFE) &&
	     !FTRACE_FORCE_LIST_FUNC)) {
		/* Set the ftrace_ops that the arch callback uses */
		if (ftrace_ops_list == &global_ops)
			function_trace_op = ftrace_global_list;
		else
			function_trace_op = ftrace_ops_list;
		func = ftrace_ops_list->func;
	else
	} else {
		/* Just use the default ftrace_ops */
		function_trace_op = &ftrace_list_end;
		func = ftrace_ops_list_func;
	}

#ifdef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
	ftrace_trace_function = func;
#else
#ifdef CONFIG_DYNAMIC_FTRACE
	/* do not update till all functions have been modified */
	__ftrace_trace_function_delay = func;
#else
	__ftrace_trace_function = func;
#endif
	ftrace_trace_function =
		(func == ftrace_stub) ? func : ftrace_test_stop_func;
#endif
}

static void add_ftrace_ops(struct ftrace_ops **list, struct ftrace_ops *ops)
@@ -325,6 +337,20 @@ static int __register_ftrace_function(struct ftrace_ops *ops)
	if ((ops->flags & FL_GLOBAL_CONTROL_MASK) == FL_GLOBAL_CONTROL_MASK)
		return -EINVAL;

#ifndef ARCH_SUPPORTS_FTRACE_SAVE_REGS
	/*
	 * If the ftrace_ops specifies SAVE_REGS, then it only can be used
	 * if the arch supports it, or SAVE_REGS_IF_SUPPORTED is also set.
	 * Setting SAVE_REGS_IF_SUPPORTED makes SAVE_REGS irrelevant.
	 */
	if (ops->flags & FTRACE_OPS_FL_SAVE_REGS &&
	    !(ops->flags & FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED))
		return -EINVAL;

	if (ops->flags & FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED)
		ops->flags |= FTRACE_OPS_FL_SAVE_REGS;
#endif

	if (!core_kernel_data((unsigned long)ops))
		ops->flags |= FTRACE_OPS_FL_DYNAMIC;

@@ -773,7 +799,8 @@ ftrace_profile_alloc(struct ftrace_profile_stat *stat, unsigned long ip)
}

static void
function_profile_call(unsigned long ip, unsigned long parent_ip)
function_profile_call(unsigned long ip, unsigned long parent_ip,
		      struct ftrace_ops *ops, struct pt_regs *regs)
{
	struct ftrace_profile_stat *stat;
	struct ftrace_profile *rec;
@@ -803,7 +830,7 @@ function_profile_call(unsigned long ip, unsigned long parent_ip)
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
static int profile_graph_entry(struct ftrace_graph_ent *trace)
{
	function_profile_call(trace->func, 0);
	function_profile_call(trace->func, 0, NULL, NULL);
	return 1;
}

@@ -863,6 +890,7 @@ static void unregister_ftrace_profiler(void)
#else
static struct ftrace_ops ftrace_profile_ops __read_mostly = {
	.func		= function_profile_call,
	.flags		= FTRACE_OPS_FL_RECURSION_SAFE,
};

static int register_ftrace_profiler(void)
@@ -1045,6 +1073,7 @@ static struct ftrace_ops global_ops = {
	.func			= ftrace_stub,
	.notrace_hash		= EMPTY_HASH,
	.filter_hash		= EMPTY_HASH,
	.flags			= FTRACE_OPS_FL_RECURSION_SAFE,
};

static DEFINE_MUTEX(ftrace_regex_lock);
@@ -1525,6 +1554,12 @@ static void __ftrace_hash_rec_update(struct ftrace_ops *ops,
		rec->flags++;
		if (FTRACE_WARN_ON((rec->flags & ~FTRACE_FL_MASK) == FTRACE_REF_MAX))
			return;
		/*
		 * If any ops wants regs saved for this function
		 * then all ops will get saved regs.
		 */
		if (ops->flags & FTRACE_OPS_FL_SAVE_REGS)
			rec->flags |= FTRACE_FL_REGS;
	} else {
		if (FTRACE_WARN_ON((rec->flags & ~FTRACE_FL_MASK) == 0))
			return;
@@ -1616,18 +1651,59 @@ static int ftrace_check_record(struct dyn_ftrace *rec, int enable, int update)
	if (enable && (rec->flags & ~FTRACE_FL_MASK))
		flag = FTRACE_FL_ENABLED;

	/*
	 * If enabling and the REGS flag does not match the REGS_EN, then
	 * do not ignore this record. Set flags to fail the compare against
	 * ENABLED.
	 */
	if (flag &&
	    (!(rec->flags & FTRACE_FL_REGS) != !(rec->flags & FTRACE_FL_REGS_EN)))
		flag |= FTRACE_FL_REGS;

	/* If the state of this record hasn't changed, then do nothing */
	if ((rec->flags & FTRACE_FL_ENABLED) == flag)
		return FTRACE_UPDATE_IGNORE;

	if (flag) {
		if (update)
		/* Save off if rec is being enabled (for return value) */
		flag ^= rec->flags & FTRACE_FL_ENABLED;

		if (update) {
			rec->flags |= FTRACE_FL_ENABLED;
		return FTRACE_UPDATE_MAKE_CALL;
			if (flag & FTRACE_FL_REGS) {
				if (rec->flags & FTRACE_FL_REGS)
					rec->flags |= FTRACE_FL_REGS_EN;
				else
					rec->flags &= ~FTRACE_FL_REGS_EN;
			}
		}

		/*
		 * If this record is being updated from a nop, then
		 *   return UPDATE_MAKE_CALL.
		 * Otherwise, if the EN flag is set, then return
		 *   UPDATE_MODIFY_CALL_REGS to tell the caller to convert
		 *   from the non-save regs, to a save regs function.
		 * Otherwise,
		 *   return UPDATE_MODIFY_CALL to tell the caller to convert
		 *   from the save regs, to a non-save regs function.
		 */
		if (flag & FTRACE_FL_ENABLED)
			return FTRACE_UPDATE_MAKE_CALL;
		else if (rec->flags & FTRACE_FL_REGS_EN)
			return FTRACE_UPDATE_MODIFY_CALL_REGS;
		else
			return FTRACE_UPDATE_MODIFY_CALL;
	}

	if (update)
		rec->flags &= ~FTRACE_FL_ENABLED;
	if (update) {
		/* If there's no more users, clear all flags */
		if (!(rec->flags & ~FTRACE_FL_MASK))
			rec->flags = 0;
		else
			/* Just disable the record (keep REGS state) */
			rec->flags &= ~FTRACE_FL_ENABLED;
	}

	return FTRACE_UPDATE_MAKE_NOP;
}
@@ -1662,13 +1738,17 @@ int ftrace_test_record(struct dyn_ftrace *rec, int enable)
static int
__ftrace_replace_code(struct dyn_ftrace *rec, int enable)
{
	unsigned long ftrace_old_addr;
	unsigned long ftrace_addr;
	int ret;

	ftrace_addr = (unsigned long)FTRACE_ADDR;

	ret = ftrace_update_record(rec, enable);

	if (rec->flags & FTRACE_FL_REGS)
		ftrace_addr = (unsigned long)FTRACE_REGS_ADDR;
	else
		ftrace_addr = (unsigned long)FTRACE_ADDR;

	switch (ret) {
	case FTRACE_UPDATE_IGNORE:
		return 0;
@@ -1678,6 +1758,15 @@ __ftrace_replace_code(struct dyn_ftrace *rec, int enable)

	case FTRACE_UPDATE_MAKE_NOP:
		return ftrace_make_nop(NULL, rec, ftrace_addr);

	case FTRACE_UPDATE_MODIFY_CALL_REGS:
	case FTRACE_UPDATE_MODIFY_CALL:
		if (rec->flags & FTRACE_FL_REGS)
			ftrace_old_addr = (unsigned long)FTRACE_ADDR;
		else
			ftrace_old_addr = (unsigned long)FTRACE_REGS_ADDR;

		return ftrace_modify_call(rec, ftrace_old_addr, ftrace_addr);
	}

	return -1; /* unknown ftrace bug */
@@ -1882,16 +1971,6 @@ static void ftrace_run_update_code(int command)
	 */
	arch_ftrace_update_code(command);

#ifndef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
	/*
	 * For those archs that call ftrace_test_stop_func(), we must
	 * wait till after we update all the function callers
	 * before we update the callback. This keeps different
	 * ops that record different functions from corrupting
	 * each other.
	 */
	__ftrace_trace_function = __ftrace_trace_function_delay;
#endif
	function_trace_stop--;

	ret = ftrace_arch_code_modify_post_process();
@@ -2441,8 +2520,9 @@ static int t_show(struct seq_file *m, void *v)

	seq_printf(m, "%ps", (void *)rec->ip);
	if (iter->flags & FTRACE_ITER_ENABLED)
		seq_printf(m, " (%ld)",
			   rec->flags & ~FTRACE_FL_MASK);
		seq_printf(m, " (%ld)%s",
			   rec->flags & ~FTRACE_FL_MASK,
			   rec->flags & FTRACE_FL_REGS ? " R" : "");
	seq_printf(m, "\n");

	return 0;
@@ -2790,8 +2870,8 @@ static int __init ftrace_mod_cmd_init(void)
}
device_initcall(ftrace_mod_cmd_init);

static void
function_trace_probe_call(unsigned long ip, unsigned long parent_ip)
static void function_trace_probe_call(unsigned long ip, unsigned long parent_ip,
				      struct ftrace_ops *op, struct pt_regs *pt_regs)
{
	struct ftrace_func_probe *entry;
	struct hlist_head *hhd;
@@ -3162,8 +3242,27 @@ ftrace_notrace_write(struct file *file, const char __user *ubuf,
}

static int
ftrace_set_regex(struct ftrace_ops *ops, unsigned char *buf, int len,
		 int reset, int enable)
ftrace_match_addr(struct ftrace_hash *hash, unsigned long ip, int remove)
{
	struct ftrace_func_entry *entry;

	if (!ftrace_location(ip))
		return -EINVAL;

	if (remove) {
		entry = ftrace_lookup_ip(hash, ip);
		if (!entry)
			return -ENOENT;
		free_hash_entry(hash, entry);
		return 0;
	}

	return add_hash_entry(hash, ip);
}

static int
ftrace_set_hash(struct ftrace_ops *ops, unsigned char *buf, int len,
		unsigned long ip, int remove, int reset, int enable)
{
	struct ftrace_hash **orig_hash;
	struct ftrace_hash *hash;
@@ -3192,6 +3291,11 @@ ftrace_set_regex(struct ftrace_ops *ops, unsigned char *buf, int len,
		ret = -EINVAL;
		goto out_regex_unlock;
	}
	if (ip) {
		ret = ftrace_match_addr(hash, ip, remove);
		if (ret < 0)
			goto out_regex_unlock;
	}

	mutex_lock(&ftrace_lock);
	ret = ftrace_hash_move(ops, enable, orig_hash, hash);
@@ -3208,6 +3312,37 @@ ftrace_set_regex(struct ftrace_ops *ops, unsigned char *buf, int len,
	return ret;
}

static int
ftrace_set_addr(struct ftrace_ops *ops, unsigned long ip, int remove,
		int reset, int enable)
{
	return ftrace_set_hash(ops, 0, 0, ip, remove, reset, enable);
}

/**
 * ftrace_set_filter_ip - set a function to filter on in ftrace by address
 * @ops - the ops to set the filter with
 * @ip - the address to add to or remove from the filter.
 * @remove - non zero to remove the ip from the filter
 * @reset - non zero to reset all filters before applying this filter.
 *
 * Filters denote which functions should be enabled when tracing is enabled
 * If @ip is NULL, it fails to update filter.
 */
int ftrace_set_filter_ip(struct ftrace_ops *ops, unsigned long ip,
			 int remove, int reset)
{
	return ftrace_set_addr(ops, ip, remove, reset, 1);
}
EXPORT_SYMBOL_GPL(ftrace_set_filter_ip);

static int
ftrace_set_regex(struct ftrace_ops *ops, unsigned char *buf, int len,
		 int reset, int enable)
{
	return ftrace_set_hash(ops, buf, len, 0, 0, reset, enable);
}

/**
 * ftrace_set_filter - set a function to filter on in ftrace
 * @ops - the ops to set the filter with
@@ -3912,6 +4047,7 @@ void __init ftrace_init(void)

static struct ftrace_ops global_ops = {
	.func			= ftrace_stub,
	.flags			= FTRACE_OPS_FL_RECURSION_SAFE,
};

static int __init ftrace_nodyn_init(void)
@@ -3942,10 +4078,9 @@ ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip)
#endif /* CONFIG_DYNAMIC_FTRACE */

static void
ftrace_ops_control_func(unsigned long ip, unsigned long parent_ip)
ftrace_ops_control_func(unsigned long ip, unsigned long parent_ip,
			struct ftrace_ops *op, struct pt_regs *regs)
{
	struct ftrace_ops *op;

	if (unlikely(trace_recursion_test(TRACE_CONTROL_BIT)))
		return;

@@ -3959,7 +4094,7 @@ ftrace_ops_control_func(unsigned long ip, unsigned long parent_ip)
	while (op != &ftrace_list_end) {
		if (!ftrace_function_local_disabled(op) &&
		    ftrace_ops_test(op, ip))
			op->func(ip, parent_ip);
			op->func(ip, parent_ip, op, regs);

		op = rcu_dereference_raw(op->next);
	};
@@ -3969,13 +4104,18 @@ ftrace_ops_control_func(unsigned long ip, unsigned long parent_ip)

static struct ftrace_ops control_ops = {
	.func = ftrace_ops_control_func,
	.flags = FTRACE_OPS_FL_RECURSION_SAFE,
};

static void
ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip)
static inline void
__ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
		       struct ftrace_ops *ignored, struct pt_regs *regs)
{
	struct ftrace_ops *op;

	if (function_trace_stop)
		return;

	if (unlikely(trace_recursion_test(TRACE_INTERNAL_BIT)))
		return;

@@ -3988,13 +4128,39 @@ ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip)
	op = rcu_dereference_raw(ftrace_ops_list);
	while (op != &ftrace_list_end) {
		if (ftrace_ops_test(op, ip))
			op->func(ip, parent_ip);
			op->func(ip, parent_ip, op, regs);
		op = rcu_dereference_raw(op->next);
	};
	preempt_enable_notrace();
	trace_recursion_clear(TRACE_INTERNAL_BIT);
}

/*
 * Some archs only support passing ip and parent_ip. Even though
 * the list function ignores the op parameter, we do not want any
 * C side effects, where a function is called without the caller
 * sending a third parameter.
 * Archs are to support both the regs and ftrace_ops at the same time.
 * If they support ftrace_ops, it is assumed they support regs.
 * If callbacks want to use regs, they must either check for regs
 * being NULL, or ARCH_SUPPORTS_FTRACE_SAVE_REGS.
 * Note, ARCH_SUPPORTS_FTRACE_SAVE_REGS expects a full regs to be saved.
 * An architecture can pass partial regs with ftrace_ops and still
 * set ARCH_SUPPORTS_FTRACE_OPS.
 */
#if ARCH_SUPPORTS_FTRACE_OPS
static void ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
				 struct ftrace_ops *op, struct pt_regs *regs)
{
	__ftrace_ops_list_func(ip, parent_ip, NULL, regs);
}
#else
static void ftrace_ops_no_ops(unsigned long ip, unsigned long parent_ip)
{
	__ftrace_ops_list_func(ip, parent_ip, NULL, NULL);
}
#endif

static void clear_ftrace_swapper(void)
{
	struct task_struct *p;

@@ -2816,7 +2816,7 @@ EXPORT_SYMBOL_GPL(ring_buffer_record_enable);
 * to the buffer after this will fail and return NULL.
 *
 * This is different than ring_buffer_record_disable() as
 * it works like an on/off switch, where as the disable() verison
 * it works like an on/off switch, where as the disable() version
 * must be paired with a enable().
 */
void ring_buffer_record_off(struct ring_buffer *buffer)
@@ -2839,7 +2839,7 @@ EXPORT_SYMBOL_GPL(ring_buffer_record_off);
 * ring_buffer_record_off().
 *
 * This is different than ring_buffer_record_enable() as
 * it works like an on/off switch, where as the enable() verison
 * it works like an on/off switch, where as the enable() version
 * must be paired with a disable().
 */
void ring_buffer_record_on(struct ring_buffer *buffer)

@@ -328,7 +328,7 @@ static DECLARE_WAIT_QUEUE_HEAD(trace_wait);
unsigned long trace_flags = TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
	TRACE_ITER_ANNOTATE | TRACE_ITER_CONTEXT_INFO | TRACE_ITER_SLEEP_TIME |
	TRACE_ITER_GRAPH_TIME | TRACE_ITER_RECORD_CMD | TRACE_ITER_OVERWRITE |
	TRACE_ITER_IRQ_INFO;
	TRACE_ITER_IRQ_INFO | TRACE_ITER_MARKERS;

static int trace_stop_count;
static DEFINE_RAW_SPINLOCK(tracing_start_lock);
@@ -426,15 +426,15 @@ __setup("trace_buf_size=", set_buf_size);

static int __init set_tracing_thresh(char *str)
{
	unsigned long threshhold;
	unsigned long threshold;
	int ret;

	if (!str)
		return 0;
	ret = strict_strtoul(str, 0, &threshhold);
	ret = strict_strtoul(str, 0, &threshold);
	if (ret < 0)
		return 0;
	tracing_thresh = threshhold * 1000;
	tracing_thresh = threshold * 1000;
	return 1;
}
__setup("tracing_thresh=", set_tracing_thresh);
@@ -470,6 +470,7 @@ static const char *trace_options[] = {
	"overwrite",
	"disable_on_free",
	"irq-info",
	"markers",
	NULL
};

@@ -3886,6 +3887,9 @@ tracing_mark_write(struct file *filp, const char __user *ubuf,
	if (tracing_disabled)
		return -EINVAL;

	if (!(trace_flags & TRACE_ITER_MARKERS))
		return -EINVAL;

	if (cnt > TRACE_BUF_SIZE)
		cnt = TRACE_BUF_SIZE;

@@ -472,11 +472,11 @@ extern void trace_find_cmdline(int pid, char comm[]);

#ifdef CONFIG_DYNAMIC_FTRACE
extern unsigned long ftrace_update_tot_cnt;
#endif
#define DYN_FTRACE_TEST_NAME trace_selftest_dynamic_test_func
extern int DYN_FTRACE_TEST_NAME(void);
#define DYN_FTRACE_TEST_NAME2 trace_selftest_dynamic_test_func2
extern int DYN_FTRACE_TEST_NAME2(void);
#endif

extern int ring_buffer_expanded;
extern bool tracing_selftest_disabled;
@@ -680,6 +680,7 @@ enum trace_iterator_flags {
	TRACE_ITER_OVERWRITE		= 0x200000,
	TRACE_ITER_STOP_ON_FREE		= 0x400000,
	TRACE_ITER_IRQ_INFO		= 0x800000,
	TRACE_ITER_MARKERS		= 0x1000000,
};

/*

@@ -258,7 +258,8 @@ EXPORT_SYMBOL_GPL(perf_trace_buf_prepare);

#ifdef CONFIG_FUNCTION_TRACER
static void
perf_ftrace_function_call(unsigned long ip, unsigned long parent_ip)
perf_ftrace_function_call(unsigned long ip, unsigned long parent_ip,
			  struct ftrace_ops *ops, struct pt_regs *pt_regs)
{
	struct ftrace_entry *entry;
	struct hlist_head *head;

@@ -1199,6 +1199,31 @@ event_create_dir(struct ftrace_event_call *call, struct dentry *d_events,
	return 0;
}

static void event_remove(struct ftrace_event_call *call)
{
	ftrace_event_enable_disable(call, 0);
	if (call->event.funcs)
		__unregister_ftrace_event(&call->event);
	list_del(&call->list);
}

static int event_init(struct ftrace_event_call *call)
{
	int ret = 0;

	if (WARN_ON(!call->name))
		return -EINVAL;

	if (call->class->raw_init) {
		ret = call->class->raw_init(call);
		if (ret < 0 && ret != -ENOSYS)
			pr_warn("Could not initialize trace events/%s\n",
				call->name);
	}

	return ret;
}

static int
__trace_add_event_call(struct ftrace_event_call *call, struct module *mod,
		       const struct file_operations *id,
@@ -1209,19 +1234,9 @@ __trace_add_event_call(struct ftrace_event_call *call, struct module *mod,
	struct dentry *d_events;
	int ret;

	/* The linker may leave blanks */
	if (!call->name)
		return -EINVAL;

	if (call->class->raw_init) {
		ret = call->class->raw_init(call);
		if (ret < 0) {
			if (ret != -ENOSYS)
				pr_warning("Could not initialize trace events/%s\n",
					   call->name);
			return ret;
		}
	}
	ret = event_init(call);
	if (ret < 0)
		return ret;

	d_events = event_trace_events_dir();
	if (!d_events)
@@ -1272,13 +1287,10 @@ static void remove_subsystem_dir(const char *name)
 */
static void __trace_remove_event_call(struct ftrace_event_call *call)
{
	ftrace_event_enable_disable(call, 0);
	if (call->event.funcs)
		__unregister_ftrace_event(&call->event);
	debugfs_remove_recursive(call->dir);
	list_del(&call->list);
	event_remove(call);
	trace_destroy_fields(call);
	destroy_preds(call);
	debugfs_remove_recursive(call->dir);
	remove_subsystem_dir(call->class->system);
}

@@ -1450,15 +1462,43 @@ static __init int setup_trace_event(char *str)
}
__setup("trace_event=", setup_trace_event);

static __init int event_trace_enable(void)
{
	struct ftrace_event_call **iter, *call;
	char *buf = bootup_event_buf;
	char *token;
	int ret;

	for_each_event(iter, __start_ftrace_events, __stop_ftrace_events) {

		call = *iter;
		ret = event_init(call);
		if (!ret)
			list_add(&call->list, &ftrace_events);
	}

	while (true) {
		token = strsep(&buf, ",");

		if (!token)
			break;
		if (!*token)
			continue;

		ret = ftrace_set_clr_event(token, 1);
		if (ret)
			pr_warn("Failed to enable trace event: %s\n", token);
	}
	return 0;
}

static __init int event_trace_init(void)
{
	struct ftrace_event_call **call;
	struct ftrace_event_call *call;
	struct dentry *d_tracer;
	struct dentry *entry;
	struct dentry *d_events;
	int ret;
	char *buf = bootup_event_buf;
	char *token;

	d_tracer = tracing_init_dentry();
	if (!d_tracer)
@@ -1497,24 +1537,19 @@ static __init int event_trace_init(void)
	if (trace_define_common_fields())
		pr_warning("tracing: Failed to allocate common fields");

	for_each_event(call, __start_ftrace_events, __stop_ftrace_events) {
		__trace_add_event_call(*call, NULL, &ftrace_event_id_fops,
	/*
	 * Early initialization already enabled ftrace event.
	 * Now it's only necessary to create the event directory.
	 */
	list_for_each_entry(call, &ftrace_events, list) {

		ret = event_create_dir(call, d_events,
				       &ftrace_event_id_fops,
				       &ftrace_enable_fops,
				       &ftrace_event_filter_fops,
				       &ftrace_event_format_fops);
	}

	while (true) {
		token = strsep(&buf, ",");

		if (!token)
			break;
		if (!*token)
			continue;

		ret = ftrace_set_clr_event(token, 1);
		if (ret)
			pr_warning("Failed to enable trace event: %s\n", token);
		if (ret < 0)
			event_remove(call);
	}

	ret = register_module_notifier(&trace_module_nb);
@@ -1523,6 +1558,7 @@ static __init int event_trace_init(void)

	return 0;
}
core_initcall(event_trace_enable);
fs_initcall(event_trace_init);

#ifdef CONFIG_FTRACE_STARTUP_TEST
@@ -1646,9 +1682,11 @@ static __init void event_trace_self_tests(void)
		event_test_stuff();

		ret = __ftrace_set_clr_event(NULL, system->name, NULL, 0);
		if (WARN_ON_ONCE(ret))
		if (WARN_ON_ONCE(ret)) {
			pr_warning("error disabling system %s\n",
				   system->name);
			continue;
		}

		pr_cont("OK\n");
	}
@@ -1681,7 +1719,8 @@ static __init void event_trace_self_tests(void)
static DEFINE_PER_CPU(atomic_t, ftrace_test_event_disable);

static void
function_test_events_call(unsigned long ip, unsigned long parent_ip)
function_test_events_call(unsigned long ip, unsigned long parent_ip,
			  struct ftrace_ops *op, struct pt_regs *pt_regs)
{
	struct ring_buffer_event *event;
	struct ring_buffer *buffer;
@@ -1720,6 +1759,7 @@ function_test_events_call(unsigned long ip, unsigned long parent_ip)
static struct ftrace_ops trace_ops __initdata  =
{
	.func = function_test_events_call,
	.flags = FTRACE_OPS_FL_RECURSION_SAFE,
};

static __init void event_trace_self_test_with_function(void)

@@ -2002,7 +2002,7 @@ static int ftrace_function_set_regexp(struct ftrace_ops *ops, int filter,
static int __ftrace_function_set_filter(int filter, char *buf, int len,
					struct function_filter_data *data)
{
	int i, re_cnt, ret;
	int i, re_cnt, ret = -EINVAL;
	int *reset;
	char **re;

@@ -49,7 +49,8 @@ static void function_trace_start(struct trace_array *tr)
}

static void
function_trace_call_preempt_only(unsigned long ip, unsigned long parent_ip)
function_trace_call_preempt_only(unsigned long ip, unsigned long parent_ip,
				 struct ftrace_ops *op, struct pt_regs *pt_regs)
{
	struct trace_array *tr = func_trace;
	struct trace_array_cpu *data;
@@ -84,7 +85,9 @@ enum {
static struct tracer_flags func_flags;

static void
function_trace_call(unsigned long ip, unsigned long parent_ip)
function_trace_call(unsigned long ip, unsigned long parent_ip,
		    struct ftrace_ops *op, struct pt_regs *pt_regs)

{
	struct trace_array *tr = func_trace;
	struct trace_array_cpu *data;
@@ -121,7 +124,8 @@ function_trace_call(unsigned long ip, unsigned long parent_ip)
}

static void
function_stack_trace_call(unsigned long ip, unsigned long parent_ip)
function_stack_trace_call(unsigned long ip, unsigned long parent_ip,
			  struct ftrace_ops *op, struct pt_regs *pt_regs)
{
	struct trace_array *tr = func_trace;
	struct trace_array_cpu *data;
@@ -164,13 +168,13 @@ function_stack_trace_call(unsigned long ip, unsigned long parent_ip)
static struct ftrace_ops trace_ops __read_mostly =
{
	.func = function_trace_call,
	.flags = FTRACE_OPS_FL_GLOBAL,
	.flags = FTRACE_OPS_FL_GLOBAL | FTRACE_OPS_FL_RECURSION_SAFE,
};

static struct ftrace_ops trace_stack_ops __read_mostly =
{
	.func = function_stack_trace_call,
	.flags = FTRACE_OPS_FL_GLOBAL,
	.flags = FTRACE_OPS_FL_GLOBAL | FTRACE_OPS_FL_RECURSION_SAFE,
};

static struct tracer_opt func_opts[] = {

@@ -143,7 +143,7 @@ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret,
 		return;
 	}
 
-#ifdef CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST
+#if defined(CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST) && !defined(CC_USING_FENTRY)
 	/*
 	 * The arch may choose to record the frame pointer used
 	 * and check it here to make sure that it is what we expect it
@@ -154,6 +154,9 @@ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret,
 	 *
 	 * Currently, x86_32 with optimize for size (-Os) makes the latest
	 * gcc do the above.
+	 *
+	 * Note, -mfentry does not use frame pointers, and this test
+	 * is not needed if CC_USING_FENTRY is set.
 	 */
 	if (unlikely(current->ret_stack[index].fp != frame_pointer)) {
 		ftrace_graph_stop();
@@ -136,7 +136,8 @@ static int func_prolog_dec(struct trace_array *tr,
  * irqsoff uses its own tracer function to keep the overhead down:
  */
 static void
-irqsoff_tracer_call(unsigned long ip, unsigned long parent_ip)
+irqsoff_tracer_call(unsigned long ip, unsigned long parent_ip,
+		    struct ftrace_ops *op, struct pt_regs *pt_regs)
 {
 	struct trace_array *tr = irqsoff_trace;
 	struct trace_array_cpu *data;
@@ -153,7 +154,7 @@ irqsoff_tracer_call(unsigned long ip, unsigned long parent_ip)
 static struct ftrace_ops trace_ops __read_mostly =
 {
 	.func = irqsoff_tracer_call,
-	.flags = FTRACE_OPS_FL_GLOBAL,
+	.flags = FTRACE_OPS_FL_GLOBAL | FTRACE_OPS_FL_RECURSION_SAFE,
 };
 #endif /* CONFIG_FUNCTION_TRACER */
 
@@ -108,7 +108,8 @@ out_enable:
  * wakeup uses its own tracer function to keep the overhead down:
  */
 static void
-wakeup_tracer_call(unsigned long ip, unsigned long parent_ip)
+wakeup_tracer_call(unsigned long ip, unsigned long parent_ip,
+		   struct ftrace_ops *op, struct pt_regs *pt_regs)
 {
 	struct trace_array *tr = wakeup_trace;
 	struct trace_array_cpu *data;
@@ -129,7 +130,7 @@ wakeup_tracer_call(unsigned long ip, unsigned long parent_ip)
 static struct ftrace_ops trace_ops __read_mostly =
 {
 	.func = wakeup_tracer_call,
-	.flags = FTRACE_OPS_FL_GLOBAL,
+	.flags = FTRACE_OPS_FL_GLOBAL | FTRACE_OPS_FL_RECURSION_SAFE,
 };
 #endif /* CONFIG_FUNCTION_TRACER */
 
@@ -103,54 +103,67 @@ static inline void warn_failed_init_tracer(struct tracer *trace, int init_ret)
 
 static int trace_selftest_test_probe1_cnt;
 static void trace_selftest_test_probe1_func(unsigned long ip,
-					    unsigned long pip)
+					    unsigned long pip,
+					    struct ftrace_ops *op,
+					    struct pt_regs *pt_regs)
 {
 	trace_selftest_test_probe1_cnt++;
 }
 
 static int trace_selftest_test_probe2_cnt;
 static void trace_selftest_test_probe2_func(unsigned long ip,
-					    unsigned long pip)
+					    unsigned long pip,
+					    struct ftrace_ops *op,
+					    struct pt_regs *pt_regs)
 {
 	trace_selftest_test_probe2_cnt++;
 }
 
 static int trace_selftest_test_probe3_cnt;
 static void trace_selftest_test_probe3_func(unsigned long ip,
-					    unsigned long pip)
+					    unsigned long pip,
+					    struct ftrace_ops *op,
+					    struct pt_regs *pt_regs)
 {
 	trace_selftest_test_probe3_cnt++;
 }
 
 static int trace_selftest_test_global_cnt;
 static void trace_selftest_test_global_func(unsigned long ip,
-					    unsigned long pip)
+					    unsigned long pip,
+					    struct ftrace_ops *op,
+					    struct pt_regs *pt_regs)
 {
 	trace_selftest_test_global_cnt++;
 }
 
 static int trace_selftest_test_dyn_cnt;
 static void trace_selftest_test_dyn_func(unsigned long ip,
-					 unsigned long pip)
+					 unsigned long pip,
+					 struct ftrace_ops *op,
+					 struct pt_regs *pt_regs)
 {
 	trace_selftest_test_dyn_cnt++;
 }
 
 static struct ftrace_ops test_probe1 = {
 	.func			= trace_selftest_test_probe1_func,
+	.flags			= FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static struct ftrace_ops test_probe2 = {
 	.func			= trace_selftest_test_probe2_func,
+	.flags			= FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static struct ftrace_ops test_probe3 = {
 	.func			= trace_selftest_test_probe3_func,
+	.flags			= FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static struct ftrace_ops test_global = {
-	.func		= trace_selftest_test_global_func,
-	.flags		= FTRACE_OPS_FL_GLOBAL,
+	.func		= trace_selftest_test_global_func,
+	.flags		= FTRACE_OPS_FL_GLOBAL | FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static void print_counts(void)
@@ -393,10 +406,253 @@ int trace_selftest_startup_dynamic_tracing(struct tracer *trace,
 
 	return ret;
 }
+
+static int trace_selftest_recursion_cnt;
+static void trace_selftest_test_recursion_func(unsigned long ip,
+					       unsigned long pip,
+					       struct ftrace_ops *op,
+					       struct pt_regs *pt_regs)
+{
+	/*
+	 * This function is registered without the recursion safe flag.
+	 * The ftrace infrastructure should provide the recursion
+	 * protection. If not, this will crash the kernel!
+	 */
+	trace_selftest_recursion_cnt++;
+	DYN_FTRACE_TEST_NAME();
+}
+
+static void trace_selftest_test_recursion_safe_func(unsigned long ip,
+						    unsigned long pip,
+						    struct ftrace_ops *op,
+						    struct pt_regs *pt_regs)
+{
+	/*
+	 * We said we would provide our own recursion. By calling
+	 * this function again, we should recurse back into this function
+	 * and count again. But this only happens if the arch supports
+	 * all of ftrace features and nothing else is using the function
+	 * tracing utility.
+	 */
+	if (trace_selftest_recursion_cnt++)
+		return;
+	DYN_FTRACE_TEST_NAME();
+}
+
+static struct ftrace_ops test_rec_probe = {
+	.func			= trace_selftest_test_recursion_func,
+};
+
+static struct ftrace_ops test_recsafe_probe = {
+	.func			= trace_selftest_test_recursion_safe_func,
+	.flags			= FTRACE_OPS_FL_RECURSION_SAFE,
+};
+
+static int
+trace_selftest_function_recursion(void)
+{
+	int save_ftrace_enabled = ftrace_enabled;
+	int save_tracer_enabled = tracer_enabled;
+	char *func_name;
+	int len;
+	int ret;
+	int cnt;
+
+	/* The previous test PASSED */
+	pr_cont("PASSED\n");
+	pr_info("Testing ftrace recursion: ");
+
+	/* enable tracing, and record the filter function */
+	ftrace_enabled = 1;
+	tracer_enabled = 1;
+
+	/* Handle PPC64 '.' name */
+	func_name = "*" __stringify(DYN_FTRACE_TEST_NAME);
+	len = strlen(func_name);
+
+	ret = ftrace_set_filter(&test_rec_probe, func_name, len, 1);
+	if (ret) {
+		pr_cont("*Could not set filter* ");
+		goto out;
+	}
+
+	ret = register_ftrace_function(&test_rec_probe);
+	if (ret) {
+		pr_cont("*could not register callback* ");
+		goto out;
+	}
+
+	DYN_FTRACE_TEST_NAME();
+
+	unregister_ftrace_function(&test_rec_probe);
+
+	ret = -1;
+	if (trace_selftest_recursion_cnt != 1) {
+		pr_cont("*callback not called once (%d)* ",
+			trace_selftest_recursion_cnt);
+		goto out;
+	}
+
+	trace_selftest_recursion_cnt = 1;
+
+	pr_cont("PASSED\n");
+	pr_info("Testing ftrace recursion safe: ");
+
+	ret = ftrace_set_filter(&test_recsafe_probe, func_name, len, 1);
+	if (ret) {
+		pr_cont("*Could not set filter* ");
+		goto out;
+	}
+
+	ret = register_ftrace_function(&test_recsafe_probe);
+	if (ret) {
+		pr_cont("*could not register callback* ");
+		goto out;
+	}
+
+	DYN_FTRACE_TEST_NAME();
+
+	unregister_ftrace_function(&test_recsafe_probe);
+
+	/*
+	 * If arch supports all ftrace features, and no other task
+	 * was on the list, we should be fine.
+	 */
+	if (!ftrace_nr_registered_ops() && !FTRACE_FORCE_LIST_FUNC)
+		cnt = 2; /* Should have recursed */
+	else
+		cnt = 1;
+
+	ret = -1;
+	if (trace_selftest_recursion_cnt != cnt) {
+		pr_cont("*callback not called expected %d times (%d)* ",
+			cnt, trace_selftest_recursion_cnt);
+		goto out;
+	}
+
+	ret = 0;
+out:
+	ftrace_enabled = save_ftrace_enabled;
+	tracer_enabled = save_tracer_enabled;
+
+	return ret;
+}
 #else
 # define trace_selftest_startup_dynamic_tracing(trace, tr, func) ({ 0; })
+# define trace_selftest_function_recursion() ({ 0; })
 #endif /* CONFIG_DYNAMIC_FTRACE */
 
+static enum {
+	TRACE_SELFTEST_REGS_START,
+	TRACE_SELFTEST_REGS_FOUND,
+	TRACE_SELFTEST_REGS_NOT_FOUND,
+} trace_selftest_regs_stat;
+
+static void trace_selftest_test_regs_func(unsigned long ip,
+					  unsigned long pip,
+					  struct ftrace_ops *op,
+					  struct pt_regs *pt_regs)
+{
+	if (pt_regs)
+		trace_selftest_regs_stat = TRACE_SELFTEST_REGS_FOUND;
+	else
+		trace_selftest_regs_stat = TRACE_SELFTEST_REGS_NOT_FOUND;
+}
+
+static struct ftrace_ops test_regs_probe = {
+	.func		= trace_selftest_test_regs_func,
+	.flags		= FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_SAVE_REGS,
+};
+
+static int
+trace_selftest_function_regs(void)
+{
+	int save_ftrace_enabled = ftrace_enabled;
+	int save_tracer_enabled = tracer_enabled;
+	char *func_name;
+	int len;
+	int ret;
+	int supported = 0;
+
+#ifdef ARCH_SUPPORTS_FTRACE_SAVE_REGS
+	supported = 1;
+#endif
+
+	/* The previous test PASSED */
+	pr_cont("PASSED\n");
+	pr_info("Testing ftrace regs%s: ",
+		!supported ? "(no arch support)" : "");
+
+	/* enable tracing, and record the filter function */
+	ftrace_enabled = 1;
+	tracer_enabled = 1;
+
+	/* Handle PPC64 '.' name */
+	func_name = "*" __stringify(DYN_FTRACE_TEST_NAME);
+	len = strlen(func_name);
+
+	ret = ftrace_set_filter(&test_regs_probe, func_name, len, 1);
+	/*
+	 * If DYNAMIC_FTRACE is not set, then we just trace all functions.
+	 * This test really doesn't care.
+	 */
+	if (ret && ret != -ENODEV) {
+		pr_cont("*Could not set filter* ");
+		goto out;
+	}
+
+	ret = register_ftrace_function(&test_regs_probe);
+	/*
+	 * Now if the arch does not support passing regs, then this should
+	 * have failed.
+	 */
+	if (!supported) {
+		if (!ret) {
+			pr_cont("*registered save-regs without arch support* ");
+			goto out;
+		}
+		test_regs_probe.flags |= FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED;
+		ret = register_ftrace_function(&test_regs_probe);
+	}
+	if (ret) {
+		pr_cont("*could not register callback* ");
+		goto out;
+	}
+
+	DYN_FTRACE_TEST_NAME();
+
+	unregister_ftrace_function(&test_regs_probe);
+
+	ret = -1;
+
+	switch (trace_selftest_regs_stat) {
+	case TRACE_SELFTEST_REGS_START:
+		pr_cont("*callback never called* ");
+		goto out;
+
+	case TRACE_SELFTEST_REGS_FOUND:
+		if (supported)
+			break;
+		pr_cont("*callback received regs without arch support* ");
+		goto out;
+
+	case TRACE_SELFTEST_REGS_NOT_FOUND:
+		if (!supported)
+			break;
+		pr_cont("*callback received NULL regs* ");
+		goto out;
+	}
+
+	ret = 0;
+out:
+	ftrace_enabled = save_ftrace_enabled;
+	tracer_enabled = save_tracer_enabled;
+
+	return ret;
+}
+
 /*
  * Simple verification test of ftrace function tracer.
  * Enable ftrace, sleep 1/10 second, and then read the trace
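The selftest above registers one callback without FTRACE_OPS_FL_RECURSION_SAFE, so the ftrace core must suppress the recursion, and one with the flag, whose counter-based guard lets exactly one deliberate recursion through (as `trace_selftest_test_recursion_safe_func` does). A userspace sketch of that self-protection pattern, single-threaded and with illustrative names:

```c
/* Userspace sketch of the self-recursion guard used by
 * trace_selftest_test_recursion_safe_func() above: the first entry
 * does the work and triggers one more call; re-entry bails out. */
static int recursion_cnt;

static void traced_target(void);	/* stands in for DYN_FTRACE_TEST_NAME */

static void recursion_safe_callback(void)
{
	/* A non-zero count means we are already inside the callback. */
	if (recursion_cnt++)
		return;
	traced_target();	/* recurse exactly once */
}

static void traced_target(void)
{
	recursion_safe_callback();
}

static int run_recursion_selftest(void)
{
	recursion_cnt = 0;
	traced_target();
	/* one initial call plus one deliberate recursion */
	return recursion_cnt == 2;
}
```

This mirrors the kernel test's expectation: `cnt == 2` when recursion is allowed through, `cnt == 1` when the infrastructure (or a list function) blocks it.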
@@ -442,7 +698,14 @@ trace_selftest_startup_function(struct tracer *trace, struct trace_array *tr)
 
 	ret = trace_selftest_startup_dynamic_tracing(trace, tr,
 						     DYN_FTRACE_TEST_NAME);
+	if (ret)
+		goto out;
+
+	ret = trace_selftest_function_recursion();
+	if (ret)
+		goto out;
 
+	ret = trace_selftest_function_regs();
+ out:
 	ftrace_enabled = save_ftrace_enabled;
 	tracer_enabled = save_tracer_enabled;
@@ -778,6 +1041,8 @@ static int trace_wakeup_test_thread(void *data)
 	set_current_state(TASK_INTERRUPTIBLE);
 	schedule();
 
+	complete(x);
+
 	/* we are awake, now wait to disappear */
 	while (!kthread_should_stop()) {
 		/*
@@ -821,24 +1086,21 @@ trace_selftest_startup_wakeup(struct tracer *trace, struct trace_array *tr)
 	/* reset the max latency */
 	tracing_max_latency = 0;
 
-	/* sleep to let the RT thread sleep too */
-	msleep(100);
+	while (p->on_rq) {
+		/*
+		 * Sleep to make sure the RT thread is asleep too.
+		 * On virtual machines we can't rely on timings,
+		 * but we want to make sure this test still works.
+		 */
+		msleep(100);
+	}
 
-	/*
-	 * Yes this is slightly racy. It is possible that for some
-	 * strange reason that the RT thread we created, did not
-	 * call schedule for 100ms after doing the completion,
-	 * and we do a wakeup on a task that already is awake.
-	 * But that is extremely unlikely, and the worst thing that
-	 * happens in such a case, is that we disable tracing.
-	 * Honestly, if this race does happen something is horrible
-	 * wrong with the system.
-	 */
+	init_completion(&isrt);
 
 	wake_up_process(p);
 
-	/* give a little time to let the thread wake up */
-	msleep(100);
+	/* Wait for the task to wake up */
+	wait_for_completion(&isrt);
 
 	/* stop the tracing. */
 	tracing_stop();
@@ -111,7 +111,8 @@ static inline void check_stack(void)
 }
 
 static void
-stack_trace_call(unsigned long ip, unsigned long parent_ip)
+stack_trace_call(unsigned long ip, unsigned long parent_ip,
+		 struct ftrace_ops *op, struct pt_regs *pt_regs)
 {
 	int cpu;
 
@@ -136,6 +137,7 @@ stack_trace_call(unsigned long ip, unsigned long parent_ip)
 static struct ftrace_ops trace_ops __read_mostly =
 {
 	.func = stack_trace_call,
+	.flags = FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static ssize_t
@@ -487,7 +487,7 @@ int __init init_ftrace_syscalls(void)
 
 	return 0;
 }
-core_initcall(init_ftrace_syscalls);
+early_initcall(init_ftrace_syscalls);
 
 #ifdef CONFIG_PERF_EVENTS
 
@@ -261,11 +261,13 @@ static unsigned get_mcountsym(Elf_Sym const *const sym0,
 		&sym0[Elf_r_sym(relp)];
 	char const *symname = &str0[w(symp->st_name)];
 	char const *mcount = gpfx == '_' ? "_mcount" : "mcount";
+	char const *fentry = "__fentry__";
 
 	if (symname[0] == '.')
 		++symname;  /* ppc64 hack */
 	if (strcmp(mcount, symname) == 0 ||
-	    (altmcount && strcmp(altmcount, symname) == 0))
+	    (altmcount && strcmp(altmcount, symname) == 0) ||
+	    (strcmp(fentry, symname) == 0))
 		mcountsym = Elf_r_sym(relp);
 
 	return mcountsym;
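The recordmcount hunk above teaches the build tool to treat `__fentry__` relocations the same as `mcount`/`_mcount` call sites (needed once the kernel is compiled with -mfentry). The symbol test can be sketched as a standalone predicate; `is_mcount_sym` is an illustrative name, not the tool's actual function:

```c
#include <string.h>

/* Sketch of the symbol test in get_mcountsym() above: a relocation's
 * symbol counts as a profiling call site if it names mcount, _mcount,
 * the arch's alternative mcount name, or the new __fentry__. */
static int is_mcount_sym(const char *symname, char gpfx, const char *altmcount)
{
	const char *mcount = gpfx == '_' ? "_mcount" : "mcount";
	const char *fentry = "__fentry__";

	if (symname[0] == '.')
		++symname;	/* ppc64 hack: skip the dot prefix */

	return strcmp(mcount, symname) == 0 ||
	       (altmcount && strcmp(altmcount, symname) == 0) ||
	       strcmp(fentry, symname) == 0;
}
```

Note the ppc64 dot-prefix skip happens before any comparison, so `._mcount` matches when the arch uses the underscore prefix.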
@@ -129,7 +129,7 @@ CFLAGS ?= -g -Wall
 
 # Append required CFLAGS
 override CFLAGS += $(CONFIG_FLAGS) $(INCLUDES) $(PLUGIN_DIR_SQ)
-override CFLAGS += $(udis86-flags)
+override CFLAGS += $(udis86-flags) -D_GNU_SOURCE
 
 ifeq ($(VERBOSE),1)
   Q =
(File diff suppressed because it is too large.)
@@ -24,8 +24,8 @@
 #include <stdarg.h>
 #include <regex.h>
 
-#ifndef __unused
-#define __unused __attribute__ ((unused))
+#ifndef __maybe_unused
+#define __maybe_unused __attribute__((unused))
 #endif
 
 /* ----------------------- trace_seq ----------------------- */
@@ -49,7 +49,7 @@ struct pevent_record {
 	int			cpu;
 	int			ref_count;
 	int			locked;		/* Do not free, even if ref_count is zero */
-	void			*private;
+	void			*priv;
 #if DEBUG_RECORD
 	struct pevent_record	*prev;
 	struct pevent_record	*next;
@@ -106,7 +106,7 @@ struct plugin_option {
 	char				*plugin_alias;
 	char				*description;
 	char				*value;
-	void				*private;
+	void				*priv;
 	int				set;
 };
 
@@ -345,6 +345,35 @@ enum pevent_flag {
 	PEVENT_NSEC_OUTPUT		= 1,	/* output in NSECS */
 };
 
+#define PEVENT_ERRORS							      \
+	_PE(MEM_ALLOC_FAILED,	"failed to allocate memory"),		      \
+	_PE(PARSE_EVENT_FAILED,	"failed to parse event"),		      \
+	_PE(READ_ID_FAILED,	"failed to read event id"),		      \
+	_PE(READ_FORMAT_FAILED,	"failed to read event format"),		      \
+	_PE(READ_PRINT_FAILED,	"failed to read event print fmt"),	      \
+	_PE(OLD_FTRACE_ARG_FAILED,"failed to allocate field name for ftrace"),\
+	_PE(INVALID_ARG_TYPE,	"invalid argument type")
+
+#undef _PE
+#define _PE(__code, __str) PEVENT_ERRNO__ ## __code
+enum pevent_errno {
+	PEVENT_ERRNO__SUCCESS			= 0,
+
+	/*
+	 * Choose an arbitrary negative big number not to clash with standard
+	 * errno since SUS requires the errno has distinct positive values.
+	 * See 'Issue 6' in the link below.
+	 *
+	 * http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/errno.h.html
+	 */
+	__PEVENT_ERRNO__START			= -100000,
+
+	PEVENT_ERRORS,
+
+	__PEVENT_ERRNO__END,
+};
+#undef _PE
+
 struct cmdline;
 struct cmdline_list;
 struct func_map;
@@ -509,8 +538,11 @@ void pevent_print_event(struct pevent *pevent, struct trace_seq *s,
 int pevent_parse_header_page(struct pevent *pevent, char *buf, unsigned long size,
 			     int long_size);
 
-int pevent_parse_event(struct pevent *pevent, const char *buf,
-		       unsigned long size, const char *sys);
+enum pevent_errno pevent_parse_event(struct pevent *pevent, const char *buf,
+				     unsigned long size, const char *sys);
+enum pevent_errno pevent_parse_format(struct event_format **eventp, const char *buf,
+				      unsigned long size, const char *sys);
+void pevent_free_format(struct event_format *event);
 
 void *pevent_get_field_raw(struct trace_seq *s, struct event_format *event,
 			   const char *name, struct pevent_record *record,
@@ -561,6 +593,8 @@ int pevent_data_pid(struct pevent *pevent, struct pevent_record *rec);
 const char *pevent_data_comm_from_pid(struct pevent *pevent, int pid);
 void pevent_event_info(struct trace_seq *s, struct event_format *event,
 		       struct pevent_record *record);
+int pevent_strerror(struct pevent *pevent, enum pevent_errno errnum,
+		    char *buf, size_t buflen);
 
 struct event_format **pevent_list_events(struct pevent *pevent, enum event_sort_type);
 struct format_field **pevent_event_common_fields(struct event_format *event);
 
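The PEVENT_ERRORS hunk above uses the X-macro pattern: one `_PE()` list is expanded twice, once into enum values and once into the matching error strings, so the two can never drift apart (this is what makes `pevent_strerror` cheap to implement). A self-contained sketch of the same trick with local stand-in names:

```c
#include <stddef.h>

/* Sketch of the _PE() X-macro trick from the hunk above: one list
 * expands both into enum values and into the matching error strings.
 * All DEMO_* names here are illustrative, not libtraceevent's. */
#define DEMO_ERRORS						\
	_PE(MEM_ALLOC_FAILED, "failed to allocate memory"),	\
	_PE(PARSE_EVENT_FAILED, "failed to parse event")

#undef _PE
#define _PE(code, str) DEMO_ERRNO__ ## code	/* expand to enum names */
enum demo_errno {
	DEMO_ERRNO__START = -100000,
	DEMO_ERRORS,
	DEMO_ERRNO__END,
};
#undef _PE

#define _PE(code, str) str			/* expand to the strings */
static const char * const demo_error_str[] = {
	DEMO_ERRORS
};
#undef _PE

static const char *demo_strerror(enum demo_errno err)
{
	/* first real error is START + 1, so it maps to index 0 */
	int idx = err - DEMO_ERRNO__START - 1;
	int n = (int)(sizeof(demo_error_str) / sizeof(demo_error_str[0]));

	if (idx < 0 || idx >= n)
		return NULL;
	return demo_error_str[idx];
}
```

Because both expansions come from the same DEMO_ERRORS list, adding an error means touching exactly one line.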
@@ -39,6 +39,12 @@ void __vdie(const char *fmt, ...);
 void __vwarning(const char *fmt, ...);
 void __vpr_stat(const char *fmt, ...);
 
+#define min(x, y) ({				\
+	typeof(x) _min1 = (x);			\
+	typeof(y) _min2 = (y);			\
+	(void) (&_min1 == &_min2);		\
+	_min1 < _min2 ? _min1 : _min2; })
+
 static inline char *strim(char *string)
 {
 	char *ret;
tools/perf/.gitignore
@@ -21,3 +21,5 @@ config.mak
 config.mak.autogen
 *-bison.*
 *-flex.*
+*.pyc
+*.pyo
@@ -195,10 +195,10 @@ install-pdf: pdf
 #install-html: html
 #	'$(SHELL_PATH_SQ)' ./install-webdoc.sh $(DESTDIR)$(htmldir)
 
-../PERF-VERSION-FILE: .FORCE-PERF-VERSION-FILE
-	$(QUIET_SUBDIR0)../ $(QUIET_SUBDIR1) PERF-VERSION-FILE
+$(OUTPUT)PERF-VERSION-FILE: .FORCE-PERF-VERSION-FILE
+	$(QUIET_SUBDIR0)../ $(QUIET_SUBDIR1) $(OUTPUT)PERF-VERSION-FILE
 
--include ../PERF-VERSION-FILE
+-include $(OUTPUT)PERF-VERSION-FILE
 
 #
 # Determine "include::" file references in asciidoc files.
 
tools/perf/Documentation/jit-interface.txt (new file)
@@ -0,0 +1,15 @@
+perf supports a simple JIT interface to resolve symbols for dynamic code generated
+by a JIT.
+
+The JIT has to write a /tmp/perf-%d.map (%d = pid of process) file
+
+This is a text file.
+
+Each line has the following format, fields separated with spaces:
+
+START SIZE symbolname
+
+START and SIZE are hex numbers without 0x.
+symbolname is the rest of the line, so it could contain special characters.
+
+The ownership of the file has to match the process.
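The map-file format the new document describes is simple enough to emit with one `snprintf`. A sketch of formatting one such line; `format_map_line` is an illustrative helper, not part of perf:

```c
#include <stdio.h>

/* Sketch of emitting one line of the /tmp/perf-<pid>.map format
 * described above: "START SIZE symbolname", with START and SIZE
 * as hex numbers without a 0x prefix. */
static int format_map_line(char *buf, size_t buflen,
			   unsigned long start, unsigned long size,
			   const char *symbol)
{
	/* %lx prints lowercase hex with no 0x, matching the spec */
	return snprintf(buf, buflen, "%lx %lx %s\n", start, size, symbol);
}
```

A real JIT would append such lines to `/tmp/perf-<pid>.map` as it emits code, making sure the file is owned by the emitting process as the document requires.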
@@ -85,6 +85,9 @@ OPTIONS
 -M::
 --disassembler-style=:: Set disassembler style for objdump.
 
+--objdump=<path>::
+        Path to objdump binary.
+
 SEE ALSO
 --------
 linkperf:perf-record[1], linkperf:perf-report[1]
@@ -17,6 +17,9 @@ captured via perf record.
 
 If no parameters are passed it will assume perf.data.old and perf.data.
 
+The differential profile is displayed only for events matching both
+specified perf.data files.
+
 OPTIONS
 -------
 -M::
@@ -12,7 +12,7 @@ SYNOPSIS
 	[--guestkallsyms=<path> --guestmodules=<path> | --guestvmlinux=<path>]]
 	{top|record|report|diff|buildid-list}
 'perf kvm' [--host] [--guest] [--guestkallsyms=<path> --guestmodules=<path>
-	| --guestvmlinux=<path>] {top|record|report|diff|buildid-list}
+	| --guestvmlinux=<path>] {top|record|report|diff|buildid-list|stat}
 
 DESCRIPTION
 -----------
@@ -38,6 +38,18 @@ There are a couple of variants of perf kvm:
   so that other tools can be used to fetch packages with matching symbol tables
   for use by perf report.
 
+  'perf kvm stat <command>' to run a command and gather performance counter
+  statistics.
+  Especially, perf 'kvm stat record/report' generates a statistical analysis
+  of KVM events. Currently, vmexit, mmio and ioport events are supported.
+  'perf kvm stat record <command>' records kvm events and the events between
+  start and end <command>.
+  And this command produces a file which contains tracing results of kvm
+  events.
+
+  'perf kvm stat report' reports statistical data which includes events
+  handled time, samples, and so on.
+
 OPTIONS
 -------
 -i::
@@ -68,7 +80,21 @@ OPTIONS
 --guestvmlinux=<path>::
 	Guest os kernel vmlinux.
 
+STAT REPORT OPTIONS
+-------------------
+--vcpu=<value>::
+       analyze events which occur on this vcpu. (default: all vcpus)
+
+--events=<value>::
+       events to be analyzed. Possible values: vmexit, mmio, ioport.
+       (default: vmexit)
+-k::
+--key=<value>::
+       Sorting key. Possible values: sample (default, sort by samples
+       number), time (sort by average time).
+
 SEE ALSO
 --------
 linkperf:perf-top[1], linkperf:perf-record[1], linkperf:perf-report[1],
-linkperf:perf-diff[1], linkperf:perf-buildid-list[1]
+linkperf:perf-diff[1], linkperf:perf-buildid-list[1],
+linkperf:perf-stat[1]
@@ -15,24 +15,43 @@ DESCRIPTION
 This command displays the symbolic event types which can be selected in the
 various perf commands with the -e option.
 
+[[EVENT_MODIFIERS]]
 EVENT MODIFIERS
 ---------------
 
 Events can optionally have a modifer by appending a colon and one or
-more modifiers. Modifiers allow the user to restrict when events are
-counted with 'u' for user-space, 'k' for kernel, 'h' for hypervisor.
-Additional modifiers are 'G' for guest counting (in KVM guests) and 'H'
-for host counting (not in KVM guests).
+more modifiers. Modifiers allow the user to restrict the events to be
+counted. The following modifiers exist:
+
+ u - user-space counting
+ k - kernel counting
+ h - hypervisor counting
+ G - guest counting (in KVM guests)
+ H - host counting (not in KVM guests)
+ p - precise level
 
 The 'p' modifier can be used for specifying how precise the instruction
-address should be. The 'p' modifier is currently only implemented for
-Intel PEBS and can be specified multiple times:
-  0 - SAMPLE_IP can have arbitrary skid
-  1 - SAMPLE_IP must have constant skid
-  2 - SAMPLE_IP requested to have 0 skid
-  3 - SAMPLE_IP must have 0 skid
-
-The PEBS implementation now supports up to 2.
+address should be. The 'p' modifier can be specified multiple times:
+
+ 0 - SAMPLE_IP can have arbitrary skid
+ 1 - SAMPLE_IP must have constant skid
+ 2 - SAMPLE_IP requested to have 0 skid
+ 3 - SAMPLE_IP must have 0 skid
+
+For Intel systems precise event sampling is implemented with PEBS
+which supports up to precise-level 2.
+
+On AMD systems it is implemented using IBS (up to precise-level 2).
+The precise modifier works with event types 0x76 (cpu-cycles, CPU
+clocks not halted) and 0xC1 (micro-ops retired). Both events map to
+IBS execution sampling (IBS op) with the IBS Op Counter Control bit
+(IbsOpCntCtl) set respectively (see AMD64 Architecture Programmer's
+Manual Volume 2: System Programming, 13.3 Instruction-Based
+Sampling). Examples to use IBS:
+
+ perf record -a -e cpu-cycles:p ...    # use ibs op counting cycles
+ perf record -a -e r076:p ...          # same as -e cpu-cycles:p
+ perf record -a -e r0C1:p ...          # use ibs op counting micro-ops
 
 RAW HARDWARE EVENT DESCRIPTOR
 -----------------------------
@@ -44,6 +63,11 @@ layout of IA32_PERFEVTSELx MSRs (see [Intel® 64 and IA-32 Architectures Software Developer's Manual Volume 3B: System Programming Guide] Figure 30-1 Layout
 of IA32_PERFEVTSELx MSRs) or AMD's PerfEvtSeln (see [AMD64 Architecture Programmer's Manual Volume 2: System Programming], Page 344,
 Figure 13-7 Performance Event-Select Register (PerfEvtSeln)).
 
+Note: Only the following bit fields can be set in x86 counter
+registers: event, umask, edge, inv, cmask. Esp. guest/host only and
+OS/user mode flags must be setup using <<EVENT_MODIFIERS, EVENT
+MODIFIERS>>.
+
 Example:
 
 If the Intel docs for a QM720 Core i7 describe an event as:
@@ -91,4 +115,4 @@ SEE ALSO
 linkperf:perf-stat[1], linkperf:perf-top[1],
 linkperf:perf-record[1],
 http://www.intel.com/Assets/PDF/manual/253669.pdf[Intel® 64 and IA-32 Architectures Software Developer's Manual Volume 3B: System Programming Guide],
-http://support.amd.com/us/Processor_TechDocs/24593.pdf[AMD64 Architecture Programmer's Manual Volume 2: System Programming]
+http://support.amd.com/us/Processor_TechDocs/24593_APM_v2.pdf[AMD64 Architecture Programmer's Manual Volume 2: System Programming]
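As the modifier table above shows, each trailing 'p' in an event spec like `cpu-cycles:pp` raises the precise (skid) level by one. A sketch of how such a spec could be reduced to a level; `precise_level` is a hypothetical helper, not perf's actual parser (which also handles u/k/h/G/H in the same suffix):

```c
#include <string.h>

/* Sketch of mapping a modifier string like "cpu-cycles:pp" to a
 * precise level: each 'p' after the colon raises the skid constraint,
 * per the table above (0 = arbitrary skid ... 3 = must have 0 skid). */
static int precise_level(const char *event)
{
	const char *mod = strchr(event, ':');
	int level = 0;

	if (!mod)
		return 0;	/* no modifier suffix at all */
	for (++mod; *mod; ++mod)
		if (*mod == 'p')
			level++;
	return level;
}
```

So `cpu-cycles:p` requests constant skid, and `r0C1:pp` requests zero skid, which is the most PEBS and IBS support per the text above.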
@@ -168,6 +168,9 @@ OPTIONS
 	branch stacks and it will automatically switch to the branch view mode,
 	unless --no-branch-stack is used.
 
+--objdump=<path>::
+        Path to objdump binary.
+
 SEE ALSO
 --------
 linkperf:perf-stat[1], linkperf:perf-annotate[1]
@@ -116,8 +116,8 @@ search path and 'use'ing a few support modules (see module
 descriptions below):
 
 ----
- use lib "$ENV{'PERF_EXEC_PATH'}/scripts/perl/perf-script-Util/lib";
- use lib "./perf-script-Util/lib";
+ use lib "$ENV{'PERF_EXEC_PATH'}/scripts/perl/Perf-Trace-Util/lib";
+ use lib "./Perf-Trace-Util/lib";
 use Perf::Trace::Core;
 use Perf::Trace::Context;
 use Perf::Trace::Util;
@@ -129,7 +129,7 @@ import os
 import sys
 
 sys.path.append(os.environ['PERF_EXEC_PATH'] + \
-	'/scripts/python/perf-script-Util/lib/Perf/Trace')
+	'/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
 
 from perf_trace_context import *
 from Core import *
@@ -216,7 +216,7 @@ import os
 import sys
 
 sys.path.append(os.environ['PERF_EXEC_PATH'] + \
-	'/scripts/python/perf-script-Util/lib/Perf/Trace')
+	'/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
 
 from perf_trace_context import *
 from Core import *
@@ -279,7 +279,7 @@ import os
 import sys
 
 sys.path.append(os.environ['PERF_EXEC_PATH'] + \
-	'/scripts/python/perf-script-Util/lib/Perf/Trace')
+	'/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
 
 from perf_trace_context import *
 from Core import *
@@ -391,7 +391,7 @@ drwxr-xr-x 4 trz trz 4096 2010-01-26 22:30 .
 drwxr-xr-x 4 trz trz 4096 2010-01-26 22:29 ..
 drwxr-xr-x 2 trz trz 4096 2010-01-26 22:29 bin
 -rw-r--r-- 1 trz trz 2548 2010-01-26 22:29 check-perf-script.py
-drwxr-xr-x 3 trz trz 4096 2010-01-26 22:49 perf-script-Util
+drwxr-xr-x 3 trz trz 4096 2010-01-26 22:49 Perf-Trace-Util
 -rw-r--r-- 1 trz trz 1462 2010-01-26 22:30 syscall-counts.py
 ----
 
@@ -518,7 +518,7 @@ descriptions below):
 import sys
 
 sys.path.append(os.environ['PERF_EXEC_PATH'] + \
-	'/scripts/python/perf-script-Util/lib/Perf/Trace')
+	'/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
 
 from perf_trace_context import *
 from Core import *
tools/perf/Documentation/perf-trace.txt (new file)
@@ -0,0 +1,53 @@
+perf-trace(1)
+=============
+
+NAME
+----
+perf-trace - strace inspired tool
+
+SYNOPSIS
+--------
+[verse]
+'perf trace'
+
+DESCRIPTION
+-----------
+This command will show the events associated with the target, initially
+syscalls, but other system events like pagefaults, task lifetime events,
+scheduling events, etc.
+
+Initially this is a live mode only tool, but eventually will work with
+perf.data files like the other tools, allowing a detached 'record' from
+analysis phases.
+
+OPTIONS
+-------
+
+--all-cpus::
+        System-wide collection from all CPUs.
+
+-p::
+--pid=::
+	Record events on existing process ID (comma separated list).
+
+--tid=::
+        Record events on existing thread ID (comma separated list).
+
+--uid=::
+        Record events in threads owned by uid. Name or number.
+
+--no-inherit::
+	Child tasks do not inherit counters.
+
+--mmap-pages=::
+	Number of mmap data pages. Must be a power of two.
+
+--cpu::
+Collect samples only on the list of CPUs provided. Multiple CPUs can be provided as a
+comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2.
+In per-thread mode with inheritance mode on (default), Events are captured only when
+the thread executes on the designated CPUs. Default is to monitor all CPUs.
+
+SEE ALSO
+--------
+linkperf:perf-record[1], linkperf:perf-script[1]
@@ -10,8 +10,12 @@ include/linux/stringify.h
lib/rbtree.c
include/linux/swab.h
arch/*/include/asm/unistd*.h
arch/*/include/asm/perf_regs.h
arch/*/lib/memcpy*.S
arch/*/lib/memset*.S
include/linux/poison.h
include/linux/magic.h
include/linux/hw_breakpoint.h
arch/x86/include/asm/svm.h
arch/x86/include/asm/vmx.h
arch/x86/include/asm/kvm_host.h
@@ -37,7 +37,14 @@ include config/utilities.mak
#
# Define NO_NEWT if you do not want TUI support.
#
# Define NO_GTK2 if you do not want GTK+ GUI support.
#
# Define NO_DEMANGLE if you do not want C++ symbol demangling.
#
# Define NO_LIBELF if you do not want libelf dependency (e.g. cross-builds)
#
# Define NO_LIBUNWIND if you do not want libunwind dependency for dwarf
# backtrace post unwind.

$(OUTPUT)PERF-VERSION-FILE: .FORCE-PERF-VERSION-FILE
	@$(SHELL_PATH) util/PERF-VERSION-GEN $(OUTPUT)

@@ -50,16 +57,19 @@ ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/i386/ -e s/sun4u/sparc64/ \
				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
				  -e s/sh[234].*/sh/ )
NO_PERF_REGS := 1

CC = $(CROSS_COMPILE)gcc
AR = $(CROSS_COMPILE)ar

# Additional ARCH settings for x86
ifeq ($(ARCH),i386)
ARCH := x86
override ARCH := x86
NO_PERF_REGS := 0
LIBUNWIND_LIBS = -lunwind -lunwind-x86
endif
ifeq ($(ARCH),x86_64)
ARCH := x86
override ARCH := x86
IS_X86_64 := 0
ifeq (, $(findstring m32,$(EXTRA_CFLAGS)))
IS_X86_64 := $(shell echo __x86_64__ | ${CC} -E -xc - | tail -n 1)

@@ -69,6 +79,8 @@ ifeq ($(ARCH),x86_64)
ARCH_CFLAGS := -DARCH_X86_64
ARCH_INCLUDE = ../../arch/x86/lib/memcpy_64.S ../../arch/x86/lib/memset_64.S
endif
NO_PERF_REGS := 0
LIBUNWIND_LIBS = -lunwind -lunwind-x86_64
endif

# Treat warnings as errors unless directed not to

@@ -89,7 +101,7 @@ ifdef PARSER_DEBUG
PARSER_DEBUG_CFLAGS := -DPARSER_DEBUG
endif

CFLAGS = -fno-omit-frame-pointer -ggdb3 -Wall -Wextra -std=gnu99 $(CFLAGS_WERROR) $(CFLAGS_OPTIMIZE) $(EXTRA_WARNINGS) $(EXTRA_CFLAGS) $(PARSER_DEBUG_CFLAGS)
CFLAGS = -fno-omit-frame-pointer -ggdb3 -funwind-tables -Wall -Wextra -std=gnu99 $(CFLAGS_WERROR) $(CFLAGS_OPTIMIZE) $(EXTRA_WARNINGS) $(EXTRA_CFLAGS) $(PARSER_DEBUG_CFLAGS)
EXTLIBS = -lpthread -lrt -lelf -lm
ALL_CFLAGS = $(CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE
ALL_LDFLAGS = $(LDFLAGS)

@@ -186,10 +198,10 @@ SCRIPTS = $(patsubst %.sh,%,$(SCRIPT_SH))

TRACE_EVENT_DIR = ../lib/traceevent/

ifeq ("$(origin O)", "command line")
TE_PATH=$(OUTPUT)/
ifneq ($(OUTPUT),)
TE_PATH=$(OUTPUT)
else
TE_PATH=$(TRACE_EVENT_DIR)/
TE_PATH=$(TRACE_EVENT_DIR)
endif

LIBTRACEEVENT = $(TE_PATH)libtraceevent.a

@@ -221,13 +233,13 @@ export PERL_PATH
FLEX = flex
BISON= bison

$(OUTPUT)util/parse-events-flex.c: util/parse-events.l
$(OUTPUT)util/parse-events-flex.c: util/parse-events.l $(OUTPUT)util/parse-events-bison.c
	$(QUIET_FLEX)$(FLEX) --header-file=$(OUTPUT)util/parse-events-flex.h $(PARSER_DEBUG_FLEX) -t util/parse-events.l > $(OUTPUT)util/parse-events-flex.c

$(OUTPUT)util/parse-events-bison.c: util/parse-events.y
	$(QUIET_BISON)$(BISON) -v util/parse-events.y -d $(PARSER_DEBUG_BISON) -o $(OUTPUT)util/parse-events-bison.c

$(OUTPUT)util/pmu-flex.c: util/pmu.l
$(OUTPUT)util/pmu-flex.c: util/pmu.l $(OUTPUT)util/pmu-bison.c
	$(QUIET_FLEX)$(FLEX) --header-file=$(OUTPUT)util/pmu-flex.h -t util/pmu.l > $(OUTPUT)util/pmu-flex.c

$(OUTPUT)util/pmu-bison.c: util/pmu.y

@@ -252,6 +264,7 @@ LIB_H += util/include/linux/ctype.h
LIB_H += util/include/linux/kernel.h
LIB_H += util/include/linux/list.h
LIB_H += util/include/linux/export.h
LIB_H += util/include/linux/magic.h
LIB_H += util/include/linux/poison.h
LIB_H += util/include/linux/prefetch.h
LIB_H += util/include/linux/rbtree.h

@@ -321,6 +334,10 @@ LIB_H += $(TRACE_EVENT_DIR)event-parse.h
LIB_H += util/target.h
LIB_H += util/rblist.h
LIB_H += util/intlist.h
LIB_H += util/perf_regs.h
LIB_H += util/unwind.h
LIB_H += ui/helpline.h
LIB_H += util/vdso.h

LIB_OBJS += $(OUTPUT)util/abspath.o
LIB_OBJS += $(OUTPUT)util/alias.o

@@ -356,6 +373,7 @@ LIB_OBJS += $(OUTPUT)util/usage.o
LIB_OBJS += $(OUTPUT)util/wrapper.o
LIB_OBJS += $(OUTPUT)util/sigchain.o
LIB_OBJS += $(OUTPUT)util/symbol.o
LIB_OBJS += $(OUTPUT)util/symbol-elf.o
LIB_OBJS += $(OUTPUT)util/dso-test-data.o
LIB_OBJS += $(OUTPUT)util/color.o
LIB_OBJS += $(OUTPUT)util/pager.o

@@ -387,11 +405,15 @@ LIB_OBJS += $(OUTPUT)util/cgroup.o
LIB_OBJS += $(OUTPUT)util/target.o
LIB_OBJS += $(OUTPUT)util/rblist.o
LIB_OBJS += $(OUTPUT)util/intlist.o
LIB_OBJS += $(OUTPUT)util/vdso.o
LIB_OBJS += $(OUTPUT)util/stat.o

LIB_OBJS += $(OUTPUT)ui/helpline.o
LIB_OBJS += $(OUTPUT)ui/hist.o
LIB_OBJS += $(OUTPUT)ui/stdio/hist.o

BUILTIN_OBJS += $(OUTPUT)builtin-annotate.o

BUILTIN_OBJS += $(OUTPUT)builtin-bench.o

# Benchmark modules
BUILTIN_OBJS += $(OUTPUT)bench/sched-messaging.o
BUILTIN_OBJS += $(OUTPUT)bench/sched-pipe.o

@@ -449,13 +471,38 @@ PYRF_OBJS += $(OUTPUT)util/xyarray.o
-include config.mak.autogen
-include config.mak

ifndef NO_DWARF
FLAGS_DWARF=$(ALL_CFLAGS) -ldw -lelf $(ALL_LDFLAGS) $(EXTLIBS)
ifneq ($(call try-cc,$(SOURCE_DWARF),$(FLAGS_DWARF)),y)
msg := $(warning No libdw.h found or old libdw.h found or elfutils is older than 0.138, disables dwarf support. Please install new elfutils-devel/libdw-dev);
ifdef NO_LIBELF
NO_DWARF := 1
endif # Dwarf support
endif # NO_DWARF
NO_DEMANGLE := 1
NO_LIBUNWIND := 1
else
FLAGS_LIBELF=$(ALL_CFLAGS) $(ALL_LDFLAGS) $(EXTLIBS)
ifneq ($(call try-cc,$(SOURCE_LIBELF),$(FLAGS_LIBELF)),y)
FLAGS_GLIBC=$(ALL_CFLAGS) $(ALL_LDFLAGS)
ifneq ($(call try-cc,$(SOURCE_GLIBC),$(FLAGS_GLIBC)),y)
msg := $(error No gnu/libc-version.h found, please install glibc-dev[el]/glibc-static);
else
NO_LIBELF := 1
NO_DWARF := 1
NO_DEMANGLE := 1
endif
endif
endif # NO_LIBELF

ifndef NO_LIBUNWIND
# for linking with debug library, run like:
# make DEBUG=1 LIBUNWIND_DIR=/opt/libunwind/
ifdef LIBUNWIND_DIR
LIBUNWIND_CFLAGS := -I$(LIBUNWIND_DIR)/include
LIBUNWIND_LDFLAGS := -L$(LIBUNWIND_DIR)/lib
endif

FLAGS_UNWIND=$(LIBUNWIND_CFLAGS) $(ALL_CFLAGS) $(LIBUNWIND_LDFLAGS) $(ALL_LDFLAGS) $(EXTLIBS) $(LIBUNWIND_LIBS)
ifneq ($(call try-cc,$(SOURCE_LIBUNWIND),$(FLAGS_UNWIND)),y)
msg := $(warning No libunwind found, disabling post unwind support. Please install libunwind-dev[el] >= 0.99);
NO_LIBUNWIND := 1
endif # Libunwind support
endif # NO_LIBUNWIND

-include arch/$(ARCH)/Makefile

@@ -463,20 +510,34 @@ ifneq ($(OUTPUT),)
BASIC_CFLAGS += -I$(OUTPUT)
endif

FLAGS_LIBELF=$(ALL_CFLAGS) $(ALL_LDFLAGS) $(EXTLIBS)
ifneq ($(call try-cc,$(SOURCE_LIBELF),$(FLAGS_LIBELF)),y)
FLAGS_GLIBC=$(ALL_CFLAGS) $(ALL_LDFLAGS)
ifneq ($(call try-cc,$(SOURCE_GLIBC),$(FLAGS_GLIBC)),y)
msg := $(error No gnu/libc-version.h found, please install glibc-dev[el]/glibc-static);
else
msg := $(error No libelf.h/libelf found, please install libelf-dev/elfutils-libelf-devel);
endif
endif
ifdef NO_LIBELF
BASIC_CFLAGS += -DNO_LIBELF_SUPPORT

EXTLIBS := $(filter-out -lelf,$(EXTLIBS))

# Remove ELF/DWARF dependent codes
LIB_OBJS := $(filter-out $(OUTPUT)util/symbol-elf.o,$(LIB_OBJS))
LIB_OBJS := $(filter-out $(OUTPUT)util/dwarf-aux.o,$(LIB_OBJS))
LIB_OBJS := $(filter-out $(OUTPUT)util/probe-event.o,$(LIB_OBJS))
LIB_OBJS := $(filter-out $(OUTPUT)util/probe-finder.o,$(LIB_OBJS))

BUILTIN_OBJS := $(filter-out $(OUTPUT)builtin-probe.o,$(BUILTIN_OBJS))

# Use minimal symbol handling
LIB_OBJS += $(OUTPUT)util/symbol-minimal.o

else # NO_LIBELF

ifneq ($(call try-cc,$(SOURCE_ELF_MMAP),$(FLAGS_COMMON)),y)
BASIC_CFLAGS += -DLIBELF_NO_MMAP
endif

FLAGS_DWARF=$(ALL_CFLAGS) -ldw -lelf $(ALL_LDFLAGS) $(EXTLIBS)
ifneq ($(call try-cc,$(SOURCE_DWARF),$(FLAGS_DWARF)),y)
msg := $(warning No libdw.h found or old libdw.h found or elfutils is older than 0.138, disables dwarf support. Please install new elfutils-devel/libdw-dev);
NO_DWARF := 1
endif # Dwarf support

ifndef NO_DWARF
ifeq ($(origin PERF_HAVE_DWARF_REGS), undefined)
msg := $(warning DWARF register mappings have not been defined for architecture $(ARCH), DWARF support disabled);

@@ -487,6 +548,29 @@ else
LIB_OBJS += $(OUTPUT)util/dwarf-aux.o
endif # PERF_HAVE_DWARF_REGS
endif # NO_DWARF
endif # NO_LIBELF

ifdef NO_LIBUNWIND
BASIC_CFLAGS += -DNO_LIBUNWIND_SUPPORT
else
EXTLIBS += $(LIBUNWIND_LIBS)
BASIC_CFLAGS := $(LIBUNWIND_CFLAGS) $(BASIC_CFLAGS)
BASIC_LDFLAGS := $(LIBUNWIND_LDFLAGS) $(BASIC_LDFLAGS)
LIB_OBJS += $(OUTPUT)util/unwind.o
endif

ifdef NO_LIBAUDIT
BASIC_CFLAGS += -DNO_LIBAUDIT_SUPPORT
else
FLAGS_LIBAUDIT = $(ALL_CFLAGS) $(ALL_LDFLAGS) -laudit
ifneq ($(call try-cc,$(SOURCE_LIBAUDIT),$(FLAGS_LIBAUDIT)),y)
msg := $(warning No libaudit.h found, disables 'trace' tool, please install audit-libs-devel or libaudit-dev);
BASIC_CFLAGS += -DNO_LIBAUDIT_SUPPORT
else
BUILTIN_OBJS += $(OUTPUT)builtin-trace.o
EXTLIBS += -laudit
endif
endif

ifdef NO_NEWT
BASIC_CFLAGS += -DNO_NEWT_SUPPORT

@@ -504,14 +588,13 @@ else
LIB_OBJS += $(OUTPUT)ui/browsers/annotate.o
LIB_OBJS += $(OUTPUT)ui/browsers/hists.o
LIB_OBJS += $(OUTPUT)ui/browsers/map.o
LIB_OBJS += $(OUTPUT)ui/helpline.o
LIB_OBJS += $(OUTPUT)ui/progress.o
LIB_OBJS += $(OUTPUT)ui/util.o
LIB_OBJS += $(OUTPUT)ui/tui/setup.o
LIB_OBJS += $(OUTPUT)ui/tui/util.o
LIB_OBJS += $(OUTPUT)ui/tui/helpline.o
LIB_H += ui/browser.h
LIB_H += ui/browsers/map.h
LIB_H += ui/helpline.h
LIB_H += ui/keysyms.h
LIB_H += ui/libslang.h
LIB_H += ui/progress.h

@@ -523,7 +606,7 @@ endif
ifdef NO_GTK2
BASIC_CFLAGS += -DNO_GTK2_SUPPORT
else
FLAGS_GTK2=$(ALL_CFLAGS) $(ALL_LDFLAGS) $(EXTLIBS) $(shell pkg-config --libs --cflags gtk+-2.0)
FLAGS_GTK2=$(ALL_CFLAGS) $(ALL_LDFLAGS) $(EXTLIBS) $(shell pkg-config --libs --cflags gtk+-2.0 2>/dev/null)
ifneq ($(call try-cc,$(SOURCE_GTK2),$(FLAGS_GTK2)),y)
msg := $(warning GTK2 not found, disables GTK2 support. Please install gtk2-devel or libgtk2.0-dev);
BASIC_CFLAGS += -DNO_GTK2_SUPPORT

@@ -531,11 +614,12 @@ else
ifeq ($(call try-cc,$(SOURCE_GTK2_INFOBAR),$(FLAGS_GTK2)),y)
BASIC_CFLAGS += -DHAVE_GTK_INFO_BAR
endif
BASIC_CFLAGS += $(shell pkg-config --cflags gtk+-2.0)
EXTLIBS += $(shell pkg-config --libs gtk+-2.0)
BASIC_CFLAGS += $(shell pkg-config --cflags gtk+-2.0 2>/dev/null)
EXTLIBS += $(shell pkg-config --libs gtk+-2.0 2>/dev/null)
LIB_OBJS += $(OUTPUT)ui/gtk/browser.o
LIB_OBJS += $(OUTPUT)ui/gtk/setup.o
LIB_OBJS += $(OUTPUT)ui/gtk/util.o
LIB_OBJS += $(OUTPUT)ui/gtk/helpline.o
# Make sure that it'd be included only once.
ifneq ($(findstring -DNO_NEWT_SUPPORT,$(BASIC_CFLAGS)),)
LIB_OBJS += $(OUTPUT)ui/setup.o

@@ -644,7 +728,7 @@ else
EXTLIBS += -liberty
BASIC_CFLAGS += -DHAVE_CPLUS_DEMANGLE
else
FLAGS_BFD=$(ALL_CFLAGS) $(ALL_LDFLAGS) $(EXTLIBS) -lbfd
FLAGS_BFD=$(ALL_CFLAGS) $(ALL_LDFLAGS) $(EXTLIBS) -DPACKAGE='perf' -lbfd
has_bfd := $(call try-cc,$(SOURCE_BFD),$(FLAGS_BFD))
ifeq ($(has_bfd),y)
EXTLIBS += -lbfd

@@ -674,6 +758,13 @@ else
endif
endif

ifeq ($(NO_PERF_REGS),0)
ifeq ($(ARCH),x86)
LIB_H += arch/x86/include/perf_regs.h
endif
else
BASIC_CFLAGS += -DNO_PERF_REGS
endif

ifdef NO_STRLCPY
BASIC_CFLAGS += -DNO_STRLCPY

@@ -683,6 +774,14 @@ else
endif
endif

ifdef NO_BACKTRACE
BASIC_CFLAGS += -DNO_BACKTRACE
else
ifneq ($(call try-cc,$(SOURCE_BACKTRACE),),y)
BASIC_CFLAGS += -DNO_BACKTRACE
endif
endif
ifdef ASCIIDOC8
export ASCIIDOC8
endif

@@ -700,6 +799,7 @@ perfexecdir_SQ = $(subst ','\'',$(perfexecdir))
template_dir_SQ = $(subst ','\'',$(template_dir))
htmldir_SQ = $(subst ','\'',$(htmldir))
prefix_SQ = $(subst ','\'',$(prefix))
sysconfdir_SQ = $(subst ','\'',$(sysconfdir))

SHELL_PATH_SQ = $(subst ','\'',$(SHELL_PATH))

@@ -767,10 +867,10 @@ $(OUTPUT)perf.o perf.spec \
# over the general rule for .o

$(OUTPUT)util/%-flex.o: $(OUTPUT)util/%-flex.c $(OUTPUT)PERF-CFLAGS
	$(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) -Iutil/ -w $<
	$(QUIET_CC)$(CC) -o $@ -c -Iutil/ $(ALL_CFLAGS) -w $<

$(OUTPUT)util/%-bison.o: $(OUTPUT)util/%-bison.c $(OUTPUT)PERF-CFLAGS
	$(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) -DYYENABLE_NLS=0 -DYYLTYPE_IS_TRIVIAL=0 -Iutil/ -w $<
	$(QUIET_CC)$(CC) -o $@ -c -Iutil/ $(ALL_CFLAGS) -DYYENABLE_NLS=0 -DYYLTYPE_IS_TRIVIAL=0 -w $<

$(OUTPUT)%.o: %.c $(OUTPUT)PERF-CFLAGS
	$(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) $<

@@ -842,7 +942,10 @@ $(LIB_FILE): $(LIB_OBJS)

# libtraceevent.a
$(LIBTRACEEVENT):
	$(QUIET_SUBDIR0)$(TRACE_EVENT_DIR) $(QUIET_SUBDIR1) $(COMMAND_O) libtraceevent.a
	$(QUIET_SUBDIR0)$(TRACE_EVENT_DIR) $(QUIET_SUBDIR1) O=$(OUTPUT) libtraceevent.a

$(LIBTRACEEVENT)-clean:
	$(QUIET_SUBDIR0)$(TRACE_EVENT_DIR) $(QUIET_SUBDIR1) O=$(OUTPUT) clean

help:
	@echo 'Perf make targets:'

@@ -951,6 +1054,8 @@ install: all
	$(INSTALL) scripts/python/Perf-Trace-Util/lib/Perf/Trace/* -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/python/Perf-Trace-Util/lib/Perf/Trace'
	$(INSTALL) scripts/python/*.py -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/python'
	$(INSTALL) scripts/python/bin/* -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/python/bin'
	$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(sysconfdir_SQ)/bash_completion.d'
	$(INSTALL) bash_completion '$(DESTDIR_SQ)$(sysconfdir_SQ)/bash_completion.d/perf'

install-python_ext:
	$(PYTHON_WORD) util/setup.py --quiet install --root='/$(DESTDIR_SQ)'

@@ -981,7 +1086,7 @@ quick-install-html:

### Cleaning rules

clean:
clean: $(LIBTRACEEVENT)-clean
	$(RM) $(LIB_OBJS) $(BUILTIN_OBJS) $(LIB_FILE) $(OUTPUT)perf-archive $(OUTPUT)perf.o $(LANG_BINDINGS)
	$(RM) $(ALL_PROGRAMS) perf
	$(RM) *.spec *.pyc *.pyo */*.pyc */*.pyo $(OUTPUT)common-cmds.h TAGS tags cscope*
@@ -2,4 +2,7 @@ ifndef NO_DWARF
PERF_HAVE_DWARF_REGS := 1
LIB_OBJS += $(OUTPUT)arch/$(ARCH)/util/dwarf-regs.o
endif
ifndef NO_LIBUNWIND
LIB_OBJS += $(OUTPUT)arch/$(ARCH)/util/unwind.o
endif
LIB_OBJS += $(OUTPUT)arch/$(ARCH)/util/header.o
80
tools/perf/arch/x86/include/perf_regs.h
Normal file
@@ -0,0 +1,80 @@
#ifndef ARCH_PERF_REGS_H
#define ARCH_PERF_REGS_H

#include <stdlib.h>
#include "../../util/types.h"
#include "../../../../../arch/x86/include/asm/perf_regs.h"

#ifndef ARCH_X86_64
#define PERF_REGS_MASK ((1ULL << PERF_REG_X86_32_MAX) - 1)
#else
#define REG_NOSUPPORT ((1ULL << PERF_REG_X86_DS) | \
		       (1ULL << PERF_REG_X86_ES) | \
		       (1ULL << PERF_REG_X86_FS) | \
		       (1ULL << PERF_REG_X86_GS))
#define PERF_REGS_MASK (((1ULL << PERF_REG_X86_64_MAX) - 1) & ~REG_NOSUPPORT)
#endif
#define PERF_REG_IP PERF_REG_X86_IP
#define PERF_REG_SP PERF_REG_X86_SP

static inline const char *perf_reg_name(int id)
{
	switch (id) {
	case PERF_REG_X86_AX:
		return "AX";
	case PERF_REG_X86_BX:
		return "BX";
	case PERF_REG_X86_CX:
		return "CX";
	case PERF_REG_X86_DX:
		return "DX";
	case PERF_REG_X86_SI:
		return "SI";
	case PERF_REG_X86_DI:
		return "DI";
	case PERF_REG_X86_BP:
		return "BP";
	case PERF_REG_X86_SP:
		return "SP";
	case PERF_REG_X86_IP:
		return "IP";
	case PERF_REG_X86_FLAGS:
		return "FLAGS";
	case PERF_REG_X86_CS:
		return "CS";
	case PERF_REG_X86_SS:
		return "SS";
	case PERF_REG_X86_DS:
		return "DS";
	case PERF_REG_X86_ES:
		return "ES";
	case PERF_REG_X86_FS:
		return "FS";
	case PERF_REG_X86_GS:
		return "GS";
#ifdef ARCH_X86_64
	case PERF_REG_X86_R8:
		return "R8";
	case PERF_REG_X86_R9:
		return "R9";
	case PERF_REG_X86_R10:
		return "R10";
	case PERF_REG_X86_R11:
		return "R11";
	case PERF_REG_X86_R12:
		return "R12";
	case PERF_REG_X86_R13:
		return "R13";
	case PERF_REG_X86_R14:
		return "R14";
	case PERF_REG_X86_R15:
		return "R15";
#endif /* ARCH_X86_64 */
	default:
		return NULL;
	}

	return NULL;
}

#endif /* ARCH_PERF_REGS_H */
111
tools/perf/arch/x86/util/unwind.c
Normal file
@@ -0,0 +1,111 @@
#include <errno.h>
#include <libunwind.h>
#include "perf_regs.h"
#include "../../util/unwind.h"

#ifdef ARCH_X86_64
int unwind__arch_reg_id(int regnum)
{
	int id;

	switch (regnum) {
	case UNW_X86_64_RAX:
		id = PERF_REG_X86_AX;
		break;
	case UNW_X86_64_RDX:
		id = PERF_REG_X86_DX;
		break;
	case UNW_X86_64_RCX:
		id = PERF_REG_X86_CX;
		break;
	case UNW_X86_64_RBX:
		id = PERF_REG_X86_BX;
		break;
	case UNW_X86_64_RSI:
		id = PERF_REG_X86_SI;
		break;
	case UNW_X86_64_RDI:
		id = PERF_REG_X86_DI;
		break;
	case UNW_X86_64_RBP:
		id = PERF_REG_X86_BP;
		break;
	case UNW_X86_64_RSP:
		id = PERF_REG_X86_SP;
		break;
	case UNW_X86_64_R8:
		id = PERF_REG_X86_R8;
		break;
	case UNW_X86_64_R9:
		id = PERF_REG_X86_R9;
		break;
	case UNW_X86_64_R10:
		id = PERF_REG_X86_R10;
		break;
	case UNW_X86_64_R11:
		id = PERF_REG_X86_R11;
		break;
	case UNW_X86_64_R12:
		id = PERF_REG_X86_R12;
		break;
	case UNW_X86_64_R13:
		id = PERF_REG_X86_R13;
		break;
	case UNW_X86_64_R14:
		id = PERF_REG_X86_R14;
		break;
	case UNW_X86_64_R15:
		id = PERF_REG_X86_R15;
		break;
	case UNW_X86_64_RIP:
		id = PERF_REG_X86_IP;
		break;
	default:
		pr_err("unwind: invalid reg id %d\n", regnum);
		return -EINVAL;
	}

	return id;
}
#else
int unwind__arch_reg_id(int regnum)
{
	int id;

	switch (regnum) {
	case UNW_X86_EAX:
		id = PERF_REG_X86_AX;
		break;
	case UNW_X86_EDX:
		id = PERF_REG_X86_DX;
		break;
	case UNW_X86_ECX:
		id = PERF_REG_X86_CX;
		break;
	case UNW_X86_EBX:
		id = PERF_REG_X86_BX;
		break;
	case UNW_X86_ESI:
		id = PERF_REG_X86_SI;
		break;
	case UNW_X86_EDI:
		id = PERF_REG_X86_DI;
		break;
	case UNW_X86_EBP:
		id = PERF_REG_X86_BP;
		break;
	case UNW_X86_ESP:
		id = PERF_REG_X86_SP;
		break;
	case UNW_X86_EIP:
		id = PERF_REG_X86_IP;
		break;
	default:
		pr_err("unwind: invalid reg id %d\n", regnum);
		return -EINVAL;
	}

	return id;
}
#endif /* ARCH_X86_64 */
26
tools/perf/bash_completion
Normal file
@@ -0,0 +1,26 @@
# perf completion
have perf &&
_perf()
{
	local cur cmd

	COMPREPLY=()
	_get_comp_words_by_ref cur prev

	cmd=${COMP_WORDS[0]}

	# List perf subcommands
	if [ $COMP_CWORD -eq 1 ]; then
		cmds=$($cmd --list-cmds)
		COMPREPLY=( $( compgen -W '$cmds' -- "$cur" ) )
	# List possible events for -e option
	elif [[ $prev == "-e" && "${COMP_WORDS[1]}" == @(record|stat|top) ]]; then
		cmds=$($cmd list --raw-dump)
		COMPREPLY=( $( compgen -W '$cmds' -- "$cur" ) )
	# Fall down to list regular files
	else
		_filedir
	fi
} &&
complete -F _perf perf
@@ -3,7 +3,8 @@

extern int bench_sched_messaging(int argc, const char **argv, const char *prefix);
extern int bench_sched_pipe(int argc, const char **argv, const char *prefix);
extern int bench_mem_memcpy(int argc, const char **argv, const char *prefix __used);
extern int bench_mem_memcpy(int argc, const char **argv,
			    const char *prefix __maybe_unused);
extern int bench_mem_memset(int argc, const char **argv, const char *prefix);

#define BENCH_FORMAT_DEFAULT_STR "default"
@@ -177,7 +177,7 @@ static double do_memcpy_gettimeofday(memcpy_t fn, size_t len, bool prefault)
} while (0)

int bench_mem_memcpy(int argc, const char **argv,
		     const char *prefix __used)
		     const char *prefix __maybe_unused)
{
	int i;
	size_t len;
@@ -171,7 +171,7 @@ static double do_memset_gettimeofday(memset_t fn, size_t len, bool prefault)
} while (0)

int bench_mem_memset(int argc, const char **argv,
		     const char *prefix __used)
		     const char *prefix __maybe_unused)
{
	int i;
	size_t len;
@@ -267,7 +267,7 @@ static const char * const bench_sched_message_usage[] = {
};

int bench_sched_messaging(int argc, const char **argv,
			  const char *prefix __used)
			  const char *prefix __maybe_unused)
{
	unsigned int i, total_children;
	struct timeval start, stop, diff;
@@ -43,7 +43,7 @@ static const char * const bench_sched_pipe_usage[] = {
};

int bench_sched_pipe(int argc, const char **argv,
		     const char *prefix __used)
		     const char *prefix __maybe_unused)
{
	int pipe_1[2], pipe_2[2];
	int m = 0, i;

@@ -55,14 +55,14 @@ int bench_sched_pipe(int argc, const char **argv,
	 * discarding returned value of read(), write()
	 * causes error in building environment for perf
	 */
	int __used ret, wait_stat;
	pid_t pid, retpid;
	int __maybe_unused ret, wait_stat;
	pid_t pid, retpid __maybe_unused;

	argc = parse_options(argc, argv, options,
			     bench_sched_pipe_usage, 0);

	assert(!pipe(pipe_1));
	assert(!pipe(pipe_2));
	BUG_ON(pipe(pipe_1));
	BUG_ON(pipe(pipe_2));

	pid = fork();
	assert(pid >= 0);
@@ -239,7 +239,7 @@ static const char * const annotate_usage[] = {
	NULL
};

int cmd_annotate(int argc, const char **argv, const char *prefix __used)
int cmd_annotate(int argc, const char **argv, const char *prefix __maybe_unused)
{
	struct perf_annotate annotate = {
		.tool = {

@@ -282,6 +282,8 @@ int cmd_annotate(int argc, const char **argv, const char *prefix __used)
		    "Display raw encoding of assembly instructions (default)"),
	OPT_STRING('M', "disassembler-style", &disassembler_style, "disassembler style",
		   "Specify disassembler style (e.g. -M intel for intel syntax)"),
	OPT_STRING(0, "objdump", &objdump_path, "path",
		   "objdump binary to use for disassembly and annotations"),
	OPT_END()
	};
@@ -173,7 +173,7 @@ static void all_subsystem(void)
		all_suite(&subsystems[i]);
}

int cmd_bench(int argc, const char **argv, const char *prefix __used)
int cmd_bench(int argc, const char **argv, const char *prefix __maybe_unused)
{
	int i, j, status = 0;
@@ -43,15 +43,16 @@ static int build_id_cache__add_file(const char *filename, const char *debugdir)
	}

	build_id__sprintf(build_id, sizeof(build_id), sbuild_id);
	err = build_id_cache__add_s(sbuild_id, debugdir, filename, false);
	err = build_id_cache__add_s(sbuild_id, debugdir, filename,
				    false, false);
	if (verbose)
		pr_info("Adding %s %s: %s\n", sbuild_id, filename,
			err ? "FAIL" : "Ok");
	return err;
}

static int build_id_cache__remove_file(const char *filename __used,
				       const char *debugdir __used)
static int build_id_cache__remove_file(const char *filename __maybe_unused,
				       const char *debugdir __maybe_unused)
{
	u8 build_id[BUILD_ID_SIZE];
	char sbuild_id[BUILD_ID_SIZE * 2 + 1];

@@ -119,7 +120,8 @@ static int __cmd_buildid_cache(void)
	return 0;
}

int cmd_buildid_cache(int argc, const char **argv, const char *prefix __used)
int cmd_buildid_cache(int argc, const char **argv,
		      const char *prefix __maybe_unused)
{
	argc = parse_options(argc, argv, buildid_cache_options,
			     buildid_cache_usage, 0);
@@ -16,8 +16,6 @@
#include "util/session.h"
#include "util/symbol.h"

#include <libelf.h>

static const char *input_name;
static bool force;
static bool show_kernel;

@@ -71,7 +69,7 @@ static int perf_session__list_build_ids(void)
{
	struct perf_session *session;

	elf_version(EV_CURRENT);
	symbol__elf_init();

	session = perf_session__new(input_name, O_RDONLY, force, false,
				    &build_id__mark_dso_hit_ops);

@@ -105,7 +103,8 @@ static int __cmd_buildid_list(void)
	return perf_session__list_build_ids();
}

int cmd_buildid_list(int argc, const char **argv, const char *prefix __used)
int cmd_buildid_list(int argc, const char **argv,
		     const char *prefix __maybe_unused)
{
	argc = parse_options(argc, argv, options, buildid_list_usage, 0);
	setup_pager();
@@ -10,6 +10,7 @@
#include "util/event.h"
#include "util/hist.h"
#include "util/evsel.h"
#include "util/evlist.h"
#include "util/session.h"
#include "util/tool.h"
#include "util/sort.h"

@@ -24,11 +25,6 @@ static char diff__default_sort_order[] = "dso,symbol";
static bool force;
static bool show_displacement;

struct perf_diff {
	struct perf_tool tool;
	struct perf_session *session;
};

static int hists__add_entry(struct hists *self,
			    struct addr_location *al, u64 period)
{

@@ -37,14 +33,12 @@ static int hists__add_entry(struct hists *self,
	return -ENOMEM;
}

static int diff__process_sample_event(struct perf_tool *tool,
static int diff__process_sample_event(struct perf_tool *tool __maybe_unused,
				      union perf_event *event,
				      struct perf_sample *sample,
				      struct perf_evsel *evsel __used,
				      struct perf_evsel *evsel,
				      struct machine *machine)
{
	struct perf_diff *_diff = container_of(tool, struct perf_diff, tool);
	struct perf_session *session = _diff->session;
	struct addr_location al;

	if (perf_event__preprocess_sample(event, machine, &al, sample, NULL) < 0) {

@@ -56,26 +50,24 @@ static int diff__process_sample_event(struct perf_tool *tool,
	if (al.filtered || al.sym == NULL)
		return 0;

	if (hists__add_entry(&session->hists, &al, sample->period)) {
	if (hists__add_entry(&evsel->hists, &al, sample->period)) {
		pr_warning("problem incrementing symbol period, skipping event\n");
		return -1;
	}

	session->hists.stats.total_period += sample->period;
	evsel->hists.stats.total_period += sample->period;
	return 0;
}

static struct perf_diff diff = {
	.tool = {
		.sample	= diff__process_sample_event,
		.mmap	= perf_event__process_mmap,
		.comm	= perf_event__process_comm,
		.exit	= perf_event__process_task,
		.fork	= perf_event__process_task,
		.lost	= perf_event__process_lost,
		.ordered_samples = true,
		.ordering_requires_timestamps = true,
	},
static struct perf_tool tool = {
	.sample	= diff__process_sample_event,
	.mmap	= perf_event__process_mmap,
	.comm	= perf_event__process_comm,
	.exit	= perf_event__process_task,
	.fork	= perf_event__process_task,
	.lost	= perf_event__process_lost,
	.ordered_samples = true,
	.ordering_requires_timestamps = true,
};

static void perf_session__insert_hist_entry_by_name(struct rb_root *root,

@@ -146,34 +138,71 @@ static void hists__match(struct hists *older, struct hists *newer)
	}
}

static struct perf_evsel *evsel_match(struct perf_evsel *evsel,
				      struct perf_evlist *evlist)
{
	struct perf_evsel *e;

	list_for_each_entry(e, &evlist->entries, node)
		if (perf_evsel__match2(evsel, e))
			return e;

	return NULL;
}

static int __cmd_diff(void)
{
	int ret, i;
#define older (session[0])
#define newer (session[1])
	struct perf_session *session[2];
	struct perf_evlist *evlist_new, *evlist_old;
	struct perf_evsel *evsel;
	bool first = true;

	older = perf_session__new(input_old, O_RDONLY, force, false,
				  &diff.tool);
				  &tool);
	newer = perf_session__new(input_new, O_RDONLY, force, false,
				  &diff.tool);
				  &tool);
	if (session[0] == NULL || session[1] == NULL)
		return -ENOMEM;

	for (i = 0; i < 2; ++i) {
		diff.session = session[i];
		ret = perf_session__process_events(session[i], &diff.tool);
		ret = perf_session__process_events(session[i], &tool);
		if (ret)
			goto out_delete;
		hists__output_resort(&session[i]->hists);
	}

	if (show_displacement)
		hists__resort_entries(&older->hists);
	evlist_old = older->evlist;
	evlist_new = newer->evlist;

	list_for_each_entry(evsel, &evlist_new->entries, node)
		hists__output_resort(&evsel->hists);

	list_for_each_entry(evsel, &evlist_old->entries, node) {
		hists__output_resort(&evsel->hists);

		if (show_displacement)
			hists__resort_entries(&evsel->hists);
	}

	list_for_each_entry(evsel, &evlist_new->entries, node) {
		struct perf_evsel *evsel_old;

		evsel_old = evsel_match(evsel, evlist_old);
		if (!evsel_old)
			continue;

		fprintf(stdout, "%s# Event '%s'\n#\n", first ? "" : "\n",
			perf_evsel__name(evsel));

		first = false;

		hists__match(&evsel_old->hists, &evsel->hists);
		hists__fprintf(&evsel->hists, &evsel_old->hists,
			       show_displacement, true, 0, 0, stdout);
	}

	hists__match(&older->hists, &newer->hists);
	hists__fprintf(&newer->hists, &older->hists,
		       show_displacement, true, 0, 0, stdout);
out_delete:
	for (i = 0; i < 2; ++i)
		perf_session__delete(session[i]);

@@ -213,7 +242,7 @@ static const struct option options[] = {
	OPT_END()
};

int cmd_diff(int argc, const char **argv, const char *prefix __used)
int cmd_diff(int argc, const char **argv, const char *prefix __maybe_unused)
{
	sort_order = diff__default_sort_order;
	argc = parse_options(argc, argv, options, diff_usage, 0);

@@ -235,6 +264,7 @@ int cmd_diff(int argc, const char **argv, const char *prefix __used)
	if (symbol__init() < 0)
		return -1;

	perf_hpp__init(true, show_displacement);
|
||||
setup_sorting(diff_usage, options);
|
||||
setup_pager();
|
||||
|
||||
|
@@ -113,7 +113,7 @@ static const char * const evlist_usage[] = {
 	NULL
 };
 
-int cmd_evlist(int argc, const char **argv, const char *prefix __used)
+int cmd_evlist(int argc, const char **argv, const char *prefix __maybe_unused)
 {
 	struct perf_attr_details details = { .verbose = false, };
 	const char *input_name = NULL;
@@ -24,13 +24,14 @@ static struct man_viewer_info_list {
 } *man_viewer_info_list;
 
 enum help_format {
+	HELP_FORMAT_NONE,
 	HELP_FORMAT_MAN,
 	HELP_FORMAT_INFO,
 	HELP_FORMAT_WEB,
 };
 
 static bool show_all = false;
-static enum help_format help_format = HELP_FORMAT_MAN;
+static enum help_format help_format = HELP_FORMAT_NONE;
 static struct option builtin_help_options[] = {
 	OPT_BOOLEAN('a', "all", &show_all, "print all available commands"),
 	OPT_SET_UINT('m', "man", &help_format, "show man page", HELP_FORMAT_MAN),
@@ -54,7 +55,9 @@ static enum help_format parse_help_format(const char *format)
 		return HELP_FORMAT_INFO;
 	if (!strcmp(format, "web") || !strcmp(format, "html"))
 		return HELP_FORMAT_WEB;
-	die("unrecognized help format '%s'", format);
+
+	pr_err("unrecognized help format '%s'", format);
+	return HELP_FORMAT_NONE;
 }
 
 static const char *get_man_viewer_info(const char *name)
@@ -259,6 +262,8 @@ static int perf_help_config(const char *var, const char *value, void *cb)
 		if (!value)
 			return config_error_nonbool(var);
 		help_format = parse_help_format(value);
+		if (help_format == HELP_FORMAT_NONE)
+			return -1;
 		return 0;
 	}
 	if (!strcmp(var, "man.viewer")) {
@@ -352,7 +357,7 @@ static void exec_viewer(const char *name, const char *page)
 		warning("'%s': unknown man viewer.", name);
 }
 
-static void show_man_page(const char *perf_cmd)
+static int show_man_page(const char *perf_cmd)
 {
 	struct man_viewer_list *viewer;
 	const char *page = cmd_to_page(perf_cmd);
@@ -365,28 +370,35 @@ static void show_man_page(const char *perf_cmd)
 	if (fallback)
 		exec_viewer(fallback, page);
 	exec_viewer("man", page);
-	die("no man viewer handled the request");
+
+	pr_err("no man viewer handled the request");
+	return -1;
 }
 
-static void show_info_page(const char *perf_cmd)
+static int show_info_page(const char *perf_cmd)
 {
 	const char *page = cmd_to_page(perf_cmd);
 	setenv("INFOPATH", system_path(PERF_INFO_PATH), 1);
 	execlp("info", "info", "perfman", page, NULL);
+	return -1;
 }
 
-static void get_html_page_path(struct strbuf *page_path, const char *page)
+static int get_html_page_path(struct strbuf *page_path, const char *page)
 {
 	struct stat st;
 	const char *html_path = system_path(PERF_HTML_PATH);
 
 	/* Check that we have a perf documentation directory. */
 	if (stat(mkpath("%s/perf.html", html_path), &st)
-	    || !S_ISREG(st.st_mode))
-		die("'%s': not a documentation directory.", html_path);
+	    || !S_ISREG(st.st_mode)) {
+		pr_err("'%s': not a documentation directory.", html_path);
+		return -1;
+	}
 
 	strbuf_init(page_path, 0);
 	strbuf_addf(page_path, "%s/%s.html", html_path, page);
+
+	return 0;
 }
 
 /*
@@ -401,19 +413,23 @@ static void open_html(const char *path)
 }
 #endif
 
-static void show_html_page(const char *perf_cmd)
+static int show_html_page(const char *perf_cmd)
 {
 	const char *page = cmd_to_page(perf_cmd);
 	struct strbuf page_path; /* it leaks but we exec bellow */
 
-	get_html_page_path(&page_path, page);
+	if (get_html_page_path(&page_path, page) != 0)
+		return -1;
+
 	open_html(page_path.buf);
+
+	return 0;
 }
 
-int cmd_help(int argc, const char **argv, const char *prefix __used)
+int cmd_help(int argc, const char **argv, const char *prefix __maybe_unused)
 {
 	const char *alias;
+	int rc = 0;
 
 	load_command_list("perf-", &main_cmds, &other_cmds);
 
@@ -444,16 +460,20 @@ int cmd_help(int argc, const char **argv, const char *prefix __used)
 
 	switch (help_format) {
 	case HELP_FORMAT_MAN:
-		show_man_page(argv[0]);
+		rc = show_man_page(argv[0]);
 		break;
 	case HELP_FORMAT_INFO:
-		show_info_page(argv[0]);
+		rc = show_info_page(argv[0]);
 		break;
 	case HELP_FORMAT_WEB:
-		show_html_page(argv[0]);
+		rc = show_html_page(argv[0]);
 		break;
+	case HELP_FORMAT_NONE:
+		/* fall-through */
+	default:
+		rc = -1;
+		break;
 	}
 
-	return 0;
+	return rc;
 }
@@ -17,9 +17,9 @@
 static char const *input_name = "-";
 static bool inject_build_ids;
 
-static int perf_event__repipe_synth(struct perf_tool *tool __used,
+static int perf_event__repipe_synth(struct perf_tool *tool __maybe_unused,
 				    union perf_event *event,
-				    struct machine *machine __used)
+				    struct machine *machine __maybe_unused)
 {
 	uint32_t size;
 	void *buf = event;
@@ -40,7 +40,8 @@ static int perf_event__repipe_synth(struct perf_tool *tool __used,
 
 static int perf_event__repipe_op2_synth(struct perf_tool *tool,
 					union perf_event *event,
-					struct perf_session *session __used)
+					struct perf_session *session
+					__maybe_unused)
 {
 	return perf_event__repipe_synth(tool, event, NULL);
 }
@@ -52,13 +53,14 @@ static int perf_event__repipe_event_type_synth(struct perf_tool *tool,
 }
 
 static int perf_event__repipe_tracing_data_synth(union perf_event *event,
-						 struct perf_session *session __used)
+						 struct perf_session *session
+						 __maybe_unused)
 {
 	return perf_event__repipe_synth(NULL, event, NULL);
 }
 
 static int perf_event__repipe_attr(union perf_event *event,
-				   struct perf_evlist **pevlist __used)
+				   struct perf_evlist **pevlist __maybe_unused)
 {
 	int ret;
 	ret = perf_event__process_attr(event, pevlist);
@@ -70,7 +72,7 @@ static int perf_event__repipe_attr(union perf_event *event,
 
 static int perf_event__repipe(struct perf_tool *tool,
 			      union perf_event *event,
-			      struct perf_sample *sample __used,
+			      struct perf_sample *sample __maybe_unused,
 			      struct machine *machine)
 {
 	return perf_event__repipe_synth(tool, event, machine);
@@ -78,8 +80,8 @@ static int perf_event__repipe(struct perf_tool *tool,
 
 static int perf_event__repipe_sample(struct perf_tool *tool,
 				     union perf_event *event,
-				     struct perf_sample *sample __used,
-				     struct perf_evsel *evsel __used,
+				     struct perf_sample *sample __maybe_unused,
+				     struct perf_evsel *evsel __maybe_unused,
 				     struct machine *machine)
 {
 	return perf_event__repipe_synth(tool, event, machine);
@@ -163,7 +165,7 @@ static int dso__inject_build_id(struct dso *self, struct perf_tool *tool,
 static int perf_event__inject_buildid(struct perf_tool *tool,
 				      union perf_event *event,
 				      struct perf_sample *sample,
-				      struct perf_evsel *evsel __used,
+				      struct perf_evsel *evsel __maybe_unused,
 				      struct machine *machine)
 {
 	struct addr_location al;
@@ -191,10 +193,13 @@ static int perf_event__inject_buildid(struct perf_tool *tool,
 			 * If this fails, too bad, let the other side
 			 * account this as unresolved.
 			 */
-		} else
+		} else {
+#ifndef NO_LIBELF_SUPPORT
 			pr_warning("no symbols found in %s, maybe "
 				   "install a debug package?\n",
 				   al.map->dso->long_name);
+#endif
+		}
 		}
 	}
 }
@@ -221,7 +226,7 @@ struct perf_tool perf_inject = {
 
 extern volatile int session_done;
 
-static void sig_handler(int sig __attribute__((__unused__)))
+static void sig_handler(int sig __maybe_unused)
 {
 	session_done = 1;
 }
@@ -264,7 +269,7 @@ static const struct option options[] = {
 	OPT_END()
 };
 
-int cmd_inject(int argc, const char **argv, const char *prefix __used)
+int cmd_inject(int argc, const char **argv, const char *prefix __maybe_unused)
 {
 	argc = parse_options(argc, argv, options, report_usage, 0);
 
@@ -1,6 +1,8 @@
 #include "builtin.h"
 #include "perf.h"
 
+#include "util/evlist.h"
+#include "util/evsel.h"
 #include "util/util.h"
 #include "util/cache.h"
 #include "util/symbol.h"
@@ -57,46 +59,52 @@ static unsigned long nr_allocs, nr_cross_allocs;
 
 #define PATH_SYS_NODE	"/sys/devices/system/node"
 
-struct perf_kmem {
-	struct perf_tool    tool;
-	struct perf_session *session;
-};
-
-static void init_cpunode_map(void)
+static int init_cpunode_map(void)
 {
 	FILE *fp;
-	int i;
+	int i, err = -1;
 
 	fp = fopen("/sys/devices/system/cpu/kernel_max", "r");
 	if (!fp) {
 		max_cpu_num = 4096;
-		return;
+		return 0;
 	}
 
+	if (fscanf(fp, "%d", &max_cpu_num) < 1) {
+		pr_err("Failed to read 'kernel_max' from sysfs");
+		goto out_close;
+	}
+
-	if (fscanf(fp, "%d", &max_cpu_num) < 1)
-		die("Failed to read 'kernel_max' from sysfs");
 	max_cpu_num++;
 
 	cpunode_map = calloc(max_cpu_num, sizeof(int));
-	if (!cpunode_map)
-		die("calloc");
+	if (!cpunode_map) {
+		pr_err("%s: calloc failed\n", __func__);
+		goto out_close;
+	}
+
 	for (i = 0; i < max_cpu_num; i++)
 		cpunode_map[i] = -1;
+
+	err = 0;
+out_close:
 	fclose(fp);
+	return err;
 }
 
-static void setup_cpunode_map(void)
+static int setup_cpunode_map(void)
 {
 	struct dirent *dent1, *dent2;
 	DIR *dir1, *dir2;
 	unsigned int cpu, mem;
 	char buf[PATH_MAX];
 
-	init_cpunode_map();
+	if (init_cpunode_map())
+		return -1;
+
 	dir1 = opendir(PATH_SYS_NODE);
 	if (!dir1)
-		return;
+		return -1;
 
 	while ((dent1 = readdir(dir1)) != NULL) {
 		if (dent1->d_type != DT_DIR ||
@@ -116,10 +124,11 @@ static void setup_cpunode_map(void)
 		closedir(dir2);
 	}
 	closedir(dir1);
+	return 0;
 }
 
-static void insert_alloc_stat(unsigned long call_site, unsigned long ptr,
-			      int bytes_req, int bytes_alloc, int cpu)
+static int insert_alloc_stat(unsigned long call_site, unsigned long ptr,
+			     int bytes_req, int bytes_alloc, int cpu)
 {
 	struct rb_node **node = &root_alloc_stat.rb_node;
 	struct rb_node *parent = NULL;
@@ -143,8 +152,10 @@ static void insert_alloc_stat(unsigned long call_site, unsigned long ptr,
 		data->bytes_alloc += bytes_alloc;
 	} else {
 		data = malloc(sizeof(*data));
-		if (!data)
-			die("malloc");
+		if (!data) {
+			pr_err("%s: malloc failed\n", __func__);
+			return -1;
+		}
 		data->ptr = ptr;
 		data->pingpong = 0;
 		data->hit = 1;
@@ -156,9 +167,10 @@ static void insert_alloc_stat(unsigned long call_site, unsigned long ptr,
 	}
 	data->call_site = call_site;
 	data->alloc_cpu = cpu;
+	return 0;
 }
 
-static void insert_caller_stat(unsigned long call_site,
+static int insert_caller_stat(unsigned long call_site,
 			      int bytes_req, int bytes_alloc)
 {
 	struct rb_node **node = &root_caller_stat.rb_node;
@@ -183,8 +195,10 @@ static void insert_caller_stat(unsigned long call_site,
 		data->bytes_alloc += bytes_alloc;
 	} else {
 		data = malloc(sizeof(*data));
-		if (!data)
-			die("malloc");
+		if (!data) {
+			pr_err("%s: malloc failed\n", __func__);
+			return -1;
+		}
 		data->call_site = call_site;
 		data->pingpong = 0;
 		data->hit = 1;
@@ -194,39 +208,43 @@ static void insert_caller_stat(unsigned long call_site,
 		rb_link_node(&data->node, parent, node);
 		rb_insert_color(&data->node, &root_caller_stat);
 	}
+
+	return 0;
 }
 
-static void process_alloc_event(void *data,
-				struct event_format *event,
-				int cpu,
-				u64 timestamp __used,
-				struct thread *thread __used,
-				int node)
+static int perf_evsel__process_alloc_event(struct perf_evsel *evsel,
+					   struct perf_sample *sample)
 {
-	unsigned long call_site;
-	unsigned long ptr;
-	int bytes_req;
-	int bytes_alloc;
-	int node1, node2;
+	unsigned long ptr = perf_evsel__intval(evsel, sample, "ptr"),
+		      call_site = perf_evsel__intval(evsel, sample, "call_site");
+	int bytes_req = perf_evsel__intval(evsel, sample, "bytes_req"),
+	    bytes_alloc = perf_evsel__intval(evsel, sample, "bytes_alloc");
 
-	ptr = raw_field_value(event, "ptr", data);
-	call_site = raw_field_value(event, "call_site", data);
-	bytes_req = raw_field_value(event, "bytes_req", data);
-	bytes_alloc = raw_field_value(event, "bytes_alloc", data);
-
-	insert_alloc_stat(call_site, ptr, bytes_req, bytes_alloc, cpu);
-	insert_caller_stat(call_site, bytes_req, bytes_alloc);
+	if (insert_alloc_stat(call_site, ptr, bytes_req, bytes_alloc, sample->cpu) ||
+	    insert_caller_stat(call_site, bytes_req, bytes_alloc))
+		return -1;
 
 	total_requested += bytes_req;
 	total_allocated += bytes_alloc;
 
-	if (node) {
-		node1 = cpunode_map[cpu];
-		node2 = raw_field_value(event, "node", data);
+	nr_allocs++;
+	return 0;
+}
+
+static int perf_evsel__process_alloc_node_event(struct perf_evsel *evsel,
+						struct perf_sample *sample)
+{
+	int ret = perf_evsel__process_alloc_event(evsel, sample);
+
+	if (!ret) {
+		int node1 = cpunode_map[sample->cpu],
+		    node2 = perf_evsel__intval(evsel, sample, "node");
 
 		if (node1 != node2)
 			nr_cross_allocs++;
 	}
-	nr_allocs++;
+
+	return ret;
 }
 
 static int ptr_cmp(struct alloc_stat *, struct alloc_stat *);
@@ -257,66 +275,37 @@ static struct alloc_stat *search_alloc_stat(unsigned long ptr,
 	return NULL;
 }
 
-static void process_free_event(void *data,
-			       struct event_format *event,
-			       int cpu,
-			       u64 timestamp __used,
-			       struct thread *thread __used)
+static int perf_evsel__process_free_event(struct perf_evsel *evsel,
+					  struct perf_sample *sample)
 {
-	unsigned long ptr;
+	unsigned long ptr = perf_evsel__intval(evsel, sample, "ptr");
 	struct alloc_stat *s_alloc, *s_caller;
 
-	ptr = raw_field_value(event, "ptr", data);
-
 	s_alloc = search_alloc_stat(ptr, 0, &root_alloc_stat, ptr_cmp);
 	if (!s_alloc)
-		return;
+		return 0;
 
-	if (cpu != s_alloc->alloc_cpu) {
+	if ((short)sample->cpu != s_alloc->alloc_cpu) {
 		s_alloc->pingpong++;
 
 		s_caller = search_alloc_stat(0, s_alloc->call_site,
 					     &root_caller_stat, callsite_cmp);
-		assert(s_caller);
+		if (!s_caller)
+			return -1;
 		s_caller->pingpong++;
 	}
 	s_alloc->alloc_cpu = -1;
+
+	return 0;
 }
 
-static void process_raw_event(struct perf_tool *tool,
-			      union perf_event *raw_event __used, void *data,
-			      int cpu, u64 timestamp, struct thread *thread)
-{
-	struct perf_kmem *kmem = container_of(tool, struct perf_kmem, tool);
-	struct event_format *event;
-	int type;
+typedef int (*tracepoint_handler)(struct perf_evsel *evsel,
+				  struct perf_sample *sample);
 
-	type = trace_parse_common_type(kmem->session->pevent, data);
-	event = pevent_find_event(kmem->session->pevent, type);
-
-	if (!strcmp(event->name, "kmalloc") ||
-	    !strcmp(event->name, "kmem_cache_alloc")) {
-		process_alloc_event(data, event, cpu, timestamp, thread, 0);
-		return;
-	}
-
-	if (!strcmp(event->name, "kmalloc_node") ||
-	    !strcmp(event->name, "kmem_cache_alloc_node")) {
-		process_alloc_event(data, event, cpu, timestamp, thread, 1);
-		return;
-	}
-
-	if (!strcmp(event->name, "kfree") ||
-	    !strcmp(event->name, "kmem_cache_free")) {
-		process_free_event(data, event, cpu, timestamp, thread);
-		return;
-	}
-}
-
-static int process_sample_event(struct perf_tool *tool,
+static int process_sample_event(struct perf_tool *tool __maybe_unused,
 				union perf_event *event,
 				struct perf_sample *sample,
-				struct perf_evsel *evsel __used,
+				struct perf_evsel *evsel,
 				struct machine *machine)
 {
 	struct thread *thread = machine__findnew_thread(machine, event->ip.pid);
@@ -329,18 +318,18 @@ static int process_sample_event(struct perf_tool *tool,
 
 	dump_printf(" ... thread: %s:%d\n", thread->comm, thread->pid);
 
-	process_raw_event(tool, event, sample->raw_data, sample->cpu,
-			  sample->time, thread);
+	if (evsel->handler.func != NULL) {
+		tracepoint_handler f = evsel->handler.func;
+		return f(evsel, sample);
+	}
 
 	return 0;
 }
 
-static struct perf_kmem perf_kmem = {
-	.tool = {
-		.sample		 = process_sample_event,
-		.comm		 = perf_event__process_comm,
-		.ordered_samples = true,
-	},
+static struct perf_tool perf_kmem = {
+	.sample		 = process_sample_event,
+	.comm		 = perf_event__process_comm,
+	.ordered_samples = true,
 };
 
 static double fragmentation(unsigned long n_req, unsigned long n_alloc)
@@ -496,22 +485,32 @@ static int __cmd_kmem(void)
 {
 	int err = -EINVAL;
 	struct perf_session *session;
+	const struct perf_evsel_str_handler kmem_tracepoints[] = {
+		{ "kmem:kmalloc",		perf_evsel__process_alloc_event, },
+		{ "kmem:kmem_cache_alloc",	perf_evsel__process_alloc_event, },
+		{ "kmem:kmalloc_node",		perf_evsel__process_alloc_node_event, },
+		{ "kmem:kmem_cache_alloc_node", perf_evsel__process_alloc_node_event, },
+		{ "kmem:kfree",			perf_evsel__process_free_event, },
+		{ "kmem:kmem_cache_free",	perf_evsel__process_free_event, },
+	};
 
-	session = perf_session__new(input_name, O_RDONLY, 0, false,
-				    &perf_kmem.tool);
+	session = perf_session__new(input_name, O_RDONLY, 0, false, &perf_kmem);
 	if (session == NULL)
 		return -ENOMEM;
 
-	perf_kmem.session = session;
-
 	if (perf_session__create_kernel_maps(session) < 0)
 		goto out_delete;
 
 	if (!perf_session__has_traces(session, "kmem record"))
 		goto out_delete;
 
+	if (perf_session__set_tracepoints_handlers(session, kmem_tracepoints)) {
+		pr_err("Initializing perf session tracepoint handlers failed\n");
+		return -1;
+	}
+
 	setup_pager();
-	err = perf_session__process_events(session, &perf_kmem.tool);
+	err = perf_session__process_events(session, &perf_kmem);
 	if (err != 0)
 		goto out_delete;
 	sort_result();
@@ -635,8 +634,10 @@ static int sort_dimension__add(const char *tok, struct list_head *list)
 	for (i = 0; i < NUM_AVAIL_SORTS; i++) {
 		if (!strcmp(avail_sorts[i]->name, tok)) {
 			sort = malloc(sizeof(*sort));
-			if (!sort)
-				die("malloc");
+			if (!sort) {
+				pr_err("%s: malloc failed\n", __func__);
+				return -1;
+			}
 			memcpy(sort, avail_sorts[i], sizeof(*sort));
 			list_add_tail(&sort->list, list);
 			return 0;
@@ -651,8 +652,10 @@ static int setup_sorting(struct list_head *sort_list, const char *arg)
 	char *tok;
 	char *str = strdup(arg);
 
-	if (!str)
-		die("strdup");
+	if (!str) {
+		pr_err("%s: strdup failed\n", __func__);
+		return -1;
+	}
 
 	while (true) {
 		tok = strsep(&str, ",");
@@ -669,8 +672,8 @@ static int setup_sorting(struct list_head *sort_list, const char *arg)
 	return 0;
 }
 
-static int parse_sort_opt(const struct option *opt __used,
-			  const char *arg, int unset __used)
+static int parse_sort_opt(const struct option *opt __maybe_unused,
+			  const char *arg, int unset __maybe_unused)
 {
 	if (!arg)
 		return -1;
@@ -683,22 +686,24 @@ static int parse_sort_opt(const struct option *opt __used,
 	return 0;
 }
 
-static int parse_caller_opt(const struct option *opt __used,
-			    const char *arg __used, int unset __used)
+static int parse_caller_opt(const struct option *opt __maybe_unused,
+			    const char *arg __maybe_unused,
+			    int unset __maybe_unused)
 {
 	caller_flag = (alloc_flag + 1);
 	return 0;
 }
 
-static int parse_alloc_opt(const struct option *opt __used,
-			   const char *arg __used, int unset __used)
+static int parse_alloc_opt(const struct option *opt __maybe_unused,
+			   const char *arg __maybe_unused,
+			   int unset __maybe_unused)
 {
 	alloc_flag = (caller_flag + 1);
 	return 0;
}
 
-static int parse_line_opt(const struct option *opt __used,
-			  const char *arg, int unset __used)
+static int parse_line_opt(const struct option *opt __maybe_unused,
+			  const char *arg, int unset __maybe_unused)
 {
 	int lines;
 
@@ -768,7 +773,7 @@ static int __cmd_record(int argc, const char **argv)
 	return cmd_record(i, rec_argv, NULL);
 }
 
-int cmd_kmem(int argc, const char **argv, const char *prefix __used)
+int cmd_kmem(int argc, const char **argv, const char *prefix __maybe_unused)
 {
 	argc = parse_options(argc, argv, kmem_options, kmem_usage, 0);
 
@@ -780,7 +785,8 @@ int cmd_kmem(int argc, const char **argv, const char *prefix __used)
 	if (!strncmp(argv[0], "rec", 3)) {
 		return __cmd_record(argc, argv);
 	} else if (!strcmp(argv[0], "stat")) {
-		setup_cpunode_map();
+		if (setup_cpunode_map())
+			return -1;
 
 		if (list_empty(&caller_sort))
 			setup_sorting(&caller_sort, default_sort_order);
@ -1,6 +1,7 @@
|
||||
#include "builtin.h"
|
||||
#include "perf.h"
|
||||
|
||||
#include "util/evsel.h"
|
||||
#include "util/util.h"
|
||||
#include "util/cache.h"
|
||||
#include "util/symbol.h"
|
||||
@ -10,8 +11,10 @@
|
||||
|
||||
#include "util/parse-options.h"
|
||||
#include "util/trace-event.h"
|
||||
|
||||
#include "util/debug.h"
|
||||
#include "util/debugfs.h"
|
||||
#include "util/tool.h"
|
||||
#include "util/stat.h"
|
||||
|
||||
#include <sys/prctl.h>
|
||||
|
||||
@ -19,11 +22,836 @@
|
||||
#include <pthread.h>
|
||||
#include <math.h>
|
||||
|
||||
static const char *file_name;
|
||||
#include "../../arch/x86/include/asm/svm.h"
|
||||
#include "../../arch/x86/include/asm/vmx.h"
|
||||
#include "../../arch/x86/include/asm/kvm.h"
|
||||
|
||||
struct event_key {
|
||||
#define INVALID_KEY (~0ULL)
|
||||
u64 key;
|
||||
int info;
|
||||
};
|
||||
|
||||
struct kvm_events_ops {
|
||||
bool (*is_begin_event)(struct perf_evsel *evsel,
|
||||
struct perf_sample *sample,
|
||||
struct event_key *key);
|
||||
bool (*is_end_event)(struct perf_evsel *evsel,
|
||||
struct perf_sample *sample, struct event_key *key);
|
||||
void (*decode_key)(struct event_key *key, char decode[20]);
|
||||
const char *name;
|
||||
};
|
||||
|
||||
static void exit_event_get_key(struct perf_evsel *evsel,
|
||||
struct perf_sample *sample,
|
||||
struct event_key *key)
|
||||
{
|
||||
key->info = 0;
|
||||
key->key = perf_evsel__intval(evsel, sample, "exit_reason");
|
||||
}
|
||||
|
||||
static bool kvm_exit_event(struct perf_evsel *evsel)
|
||||
{
|
||||
return !strcmp(evsel->name, "kvm:kvm_exit");
|
||||
}
|
||||
|
||||
static bool exit_event_begin(struct perf_evsel *evsel,
|
||||
struct perf_sample *sample, struct event_key *key)
|
||||
{
|
||||
if (kvm_exit_event(evsel)) {
|
||||
exit_event_get_key(evsel, sample, key);
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
static bool kvm_entry_event(struct perf_evsel *evsel)
|
||||
{
|
||||
return !strcmp(evsel->name, "kvm:kvm_entry");
|
||||
}
|
||||
|
||||
static bool exit_event_end(struct perf_evsel *evsel,
|
||||
struct perf_sample *sample __maybe_unused,
|
||||
struct event_key *key __maybe_unused)
|
||||
{
|
||||
return kvm_entry_event(evsel);
|
||||
}
|
||||
|
||||
struct exit_reasons_table {
|
||||
unsigned long exit_code;
|
||||
const char *reason;
|
||||
};
|
||||
|
||||
struct exit_reasons_table vmx_exit_reasons[] = {
|
||||
VMX_EXIT_REASONS
|
||||
};
|
||||
|
||||
struct exit_reasons_table svm_exit_reasons[] = {
|
||||
SVM_EXIT_REASONS
|
||||
};
|
||||
|
||||
static int cpu_isa;
|
||||
|
||||
static const char *get_exit_reason(u64 exit_code)
|
||||
{
|
||||
int table_size = ARRAY_SIZE(svm_exit_reasons);
|
||||
struct exit_reasons_table *table = svm_exit_reasons;
|
||||
|
||||
if (cpu_isa == 1) {
|
||||
table = vmx_exit_reasons;
|
||||
table_size = ARRAY_SIZE(vmx_exit_reasons);
|
||||
}
|
||||
|
||||
while (table_size--) {
|
||||
if (table->exit_code == exit_code)
|
||||
return table->reason;
|
||||
table++;
|
||||
}
|
||||
|
||||
pr_err("unknown kvm exit code:%lld on %s\n",
|
||||
(unsigned long long)exit_code, cpu_isa ? "VMX" : "SVM");
|
||||
return "UNKNOWN";
|
||||
}
|
||||
|
||||
static void exit_event_decode_key(struct event_key *key, char decode[20])
|
||||
{
|
||||
const char *exit_reason = get_exit_reason(key->key);
|
||||
|
||||
scnprintf(decode, 20, "%s", exit_reason);
|
||||
}
|
||||
|
||||
static struct kvm_events_ops exit_events = {
|
||||
.is_begin_event = exit_event_begin,
|
||||
.is_end_event = exit_event_end,
|
||||
.decode_key = exit_event_decode_key,
|
||||
.name = "VM-EXIT"
|
||||
};
|
||||
|
||||
/*
|
||||
* For the mmio events, we treat:
|
||||
* the time of MMIO write: kvm_mmio(KVM_TRACE_MMIO_WRITE...) -> kvm_entry
|
||||
* the time of MMIO read: kvm_exit -> kvm_mmio(KVM_TRACE_MMIO_READ...).
|
||||
*/
|
||||
static void mmio_event_get_key(struct perf_evsel *evsel, struct perf_sample *sample,
|
||||
struct event_key *key)
|
||||
{
|
||||
key->key = perf_evsel__intval(evsel, sample, "gpa");
|
||||
key->info = perf_evsel__intval(evsel, sample, "type");
|
||||
}
|
||||
|
||||
#define KVM_TRACE_MMIO_READ_UNSATISFIED 0
|
||||
#define KVM_TRACE_MMIO_READ 1
|
||||
#define KVM_TRACE_MMIO_WRITE 2
|
||||
|
||||
static bool mmio_event_begin(struct perf_evsel *evsel,
|
||||
struct perf_sample *sample, struct event_key *key)
|
||||
{
|
||||
/* MMIO read begin event in kernel. */
|
||||
if (kvm_exit_event(evsel))
|
||||
return true;
|
||||
|
||||
/* MMIO write begin event in kernel. */
|
||||
if (!strcmp(evsel->name, "kvm:kvm_mmio") &&
|
||||
perf_evsel__intval(evsel, sample, "type") == KVM_TRACE_MMIO_WRITE) {
|
||||
mmio_event_get_key(evsel, sample, key);
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
static bool mmio_event_end(struct perf_evsel *evsel, struct perf_sample *sample,
|
||||
struct event_key *key)
|
||||
{
|
||||
/* MMIO write end event in kernel. */
|
||||
if (kvm_entry_event(evsel))
|
||||
return true;
|
||||
|
||||
/* MMIO read end event in kernel.*/
|
||||
if (!strcmp(evsel->name, "kvm:kvm_mmio") &&
|
||||
perf_evsel__intval(evsel, sample, "type") == KVM_TRACE_MMIO_READ) {
|
||||
mmio_event_get_key(evsel, sample, key);
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
static void mmio_event_decode_key(struct event_key *key, char decode[20])
|
||||
{
|
||||
scnprintf(decode, 20, "%#lx:%s", (unsigned long)key->key,
|
||||
key->info == KVM_TRACE_MMIO_WRITE ? "W" : "R");
|
||||
}
|
||||
|
||||
static struct kvm_events_ops mmio_events = {
|
||||
.is_begin_event = mmio_event_begin,
|
||||
.is_end_event = mmio_event_end,
|
||||
.decode_key = mmio_event_decode_key,
|
||||
.name = "MMIO Access"
|
||||
};
|
||||
|
||||
/* The time of emulation pio access is from kvm_pio to kvm_entry. */
static void ioport_event_get_key(struct perf_evsel *evsel,
				 struct perf_sample *sample,
				 struct event_key *key)
{
	key->key = perf_evsel__intval(evsel, sample, "port");
	key->info = perf_evsel__intval(evsel, sample, "rw");
}

static bool ioport_event_begin(struct perf_evsel *evsel,
			       struct perf_sample *sample,
			       struct event_key *key)
{
	if (!strcmp(evsel->name, "kvm:kvm_pio")) {
		ioport_event_get_key(evsel, sample, key);
		return true;
	}

	return false;
}

static bool ioport_event_end(struct perf_evsel *evsel,
			     struct perf_sample *sample __maybe_unused,
			     struct event_key *key __maybe_unused)
{
	return kvm_entry_event(evsel);
}

static void ioport_event_decode_key(struct event_key *key, char decode[20])
{
	scnprintf(decode, 20, "%#llx:%s", (unsigned long long)key->key,
		  key->info ? "POUT" : "PIN");
}

static struct kvm_events_ops ioport_events = {
	.is_begin_event = ioport_event_begin,
	.is_end_event = ioport_event_end,
	.decode_key = ioport_event_decode_key,
	.name = "IO Port Access"
};

static const char *report_event = "vmexit";
struct kvm_events_ops *events_ops;

static bool register_kvm_events_ops(void)
{
	bool ret = true;

	if (!strcmp(report_event, "vmexit"))
		events_ops = &exit_events;
	else if (!strcmp(report_event, "mmio"))
		events_ops = &mmio_events;
	else if (!strcmp(report_event, "ioport"))
		events_ops = &ioport_events;
	else {
		pr_err("Unknown report event:%s\n", report_event);
		ret = false;
	}

	return ret;
}

struct kvm_event_stats {
	u64 time;
	struct stats stats;
};

struct kvm_event {
	struct list_head hash_entry;
	struct rb_node rb;

	struct event_key key;

	struct kvm_event_stats total;

#define DEFAULT_VCPU_NUM 8
	int max_vcpu;
	struct kvm_event_stats *vcpu;
};

struct vcpu_event_record {
	int vcpu_id;
	u64 start_time;
	struct kvm_event *last_event;
};

#define EVENTS_BITS		12
#define EVENTS_CACHE_SIZE	(1UL << EVENTS_BITS)

static u64 total_time;
static u64 total_count;
static struct list_head kvm_events_cache[EVENTS_CACHE_SIZE];

static void init_kvm_event_record(void)
{
	int i;

	for (i = 0; i < (int)EVENTS_CACHE_SIZE; i++)
		INIT_LIST_HEAD(&kvm_events_cache[i]);
}

static int kvm_events_hash_fn(u64 key)
{
	return key & (EVENTS_CACHE_SIZE - 1);
}
static bool kvm_event_expand(struct kvm_event *event, int vcpu_id)
{
	int old_max_vcpu = event->max_vcpu;

	if (vcpu_id < event->max_vcpu)
		return true;

	while (event->max_vcpu <= vcpu_id)
		event->max_vcpu += DEFAULT_VCPU_NUM;

	event->vcpu = realloc(event->vcpu,
			      event->max_vcpu * sizeof(*event->vcpu));
	if (!event->vcpu) {
		pr_err("Not enough memory\n");
		return false;
	}

	memset(event->vcpu + old_max_vcpu, 0,
	       (event->max_vcpu - old_max_vcpu) * sizeof(*event->vcpu));
	return true;
}

static struct kvm_event *kvm_alloc_init_event(struct event_key *key)
{
	struct kvm_event *event;

	event = zalloc(sizeof(*event));
	if (!event) {
		pr_err("Not enough memory\n");
		return NULL;
	}

	event->key = *key;
	return event;
}

static struct kvm_event *find_create_kvm_event(struct event_key *key)
{
	struct kvm_event *event;
	struct list_head *head;

	BUG_ON(key->key == INVALID_KEY);

	head = &kvm_events_cache[kvm_events_hash_fn(key->key)];
	list_for_each_entry(event, head, hash_entry)
		if (event->key.key == key->key && event->key.info == key->info)
			return event;

	event = kvm_alloc_init_event(key);
	if (!event)
		return NULL;

	list_add(&event->hash_entry, head);
	return event;
}

static bool handle_begin_event(struct vcpu_event_record *vcpu_record,
			       struct event_key *key, u64 timestamp)
{
	struct kvm_event *event = NULL;

	if (key->key != INVALID_KEY)
		event = find_create_kvm_event(key);

	vcpu_record->last_event = event;
	vcpu_record->start_time = timestamp;
	return true;
}

static void
kvm_update_event_stats(struct kvm_event_stats *kvm_stats, u64 time_diff)
{
	kvm_stats->time += time_diff;
	update_stats(&kvm_stats->stats, time_diff);
}

static double kvm_event_rel_stddev(int vcpu_id, struct kvm_event *event)
{
	struct kvm_event_stats *kvm_stats = &event->total;

	if (vcpu_id != -1)
		kvm_stats = &event->vcpu[vcpu_id];

	return rel_stddev_stats(stddev_stats(&kvm_stats->stats),
				avg_stats(&kvm_stats->stats));
}
static bool update_kvm_event(struct kvm_event *event, int vcpu_id,
			     u64 time_diff)
{
	kvm_update_event_stats(&event->total, time_diff);

	if (!kvm_event_expand(event, vcpu_id))
		return false;

	kvm_update_event_stats(&event->vcpu[vcpu_id], time_diff);
	return true;
}

static bool handle_end_event(struct vcpu_event_record *vcpu_record,
			     struct event_key *key, u64 timestamp)
{
	struct kvm_event *event;
	u64 time_begin, time_diff;

	event = vcpu_record->last_event;
	time_begin = vcpu_record->start_time;

	/* The begin event is not caught. */
	if (!time_begin)
		return true;

	/*
	 * In some case, the 'begin event' only records the start timestamp,
	 * the actual event is recognized in the 'end event' (e.g. mmio-event).
	 */

	/* Both begin and end events did not get the key. */
	if (!event && key->key == INVALID_KEY)
		return true;

	if (!event)
		event = find_create_kvm_event(key);

	if (!event)
		return false;

	vcpu_record->last_event = NULL;
	vcpu_record->start_time = 0;

	BUG_ON(timestamp < time_begin);

	time_diff = timestamp - time_begin;
	return update_kvm_event(event, vcpu_record->vcpu_id, time_diff);
}

static
struct vcpu_event_record *per_vcpu_record(struct thread *thread,
					  struct perf_evsel *evsel,
					  struct perf_sample *sample)
{
	/* Only kvm_entry records vcpu id. */
	if (!thread->priv && kvm_entry_event(evsel)) {
		struct vcpu_event_record *vcpu_record;

		vcpu_record = zalloc(sizeof(*vcpu_record));
		if (!vcpu_record) {
			pr_err("%s: Not enough memory\n", __func__);
			return NULL;
		}

		vcpu_record->vcpu_id = perf_evsel__intval(evsel, sample, "vcpu_id");
		thread->priv = vcpu_record;
	}

	return thread->priv;
}

static bool handle_kvm_event(struct thread *thread, struct perf_evsel *evsel,
			     struct perf_sample *sample)
{
	struct vcpu_event_record *vcpu_record;
	struct event_key key = { .key = INVALID_KEY };

	vcpu_record = per_vcpu_record(thread, evsel, sample);
	if (!vcpu_record)
		return true;

	if (events_ops->is_begin_event(evsel, sample, &key))
		return handle_begin_event(vcpu_record, &key, sample->time);

	if (events_ops->is_end_event(evsel, sample, &key))
		return handle_end_event(vcpu_record, &key, sample->time);

	return true;
}

typedef int (*key_cmp_fun)(struct kvm_event *, struct kvm_event *, int);
struct kvm_event_key {
	const char *name;
	key_cmp_fun key;
};
static int trace_vcpu = -1;
#define GET_EVENT_KEY(func, field)					\
static u64 get_event_ ##func(struct kvm_event *event, int vcpu)		\
{									\
	if (vcpu == -1)							\
		return event->total.field;				\
									\
	if (vcpu >= event->max_vcpu)					\
		return 0;						\
									\
	return event->vcpu[vcpu].field;					\
}

#define COMPARE_EVENT_KEY(func, field)					\
GET_EVENT_KEY(func, field)						\
static int compare_kvm_event_ ## func(struct kvm_event *one,		\
				      struct kvm_event *two, int vcpu)	\
{									\
	return get_event_ ##func(one, vcpu) >				\
	       get_event_ ##func(two, vcpu);				\
}

GET_EVENT_KEY(time, time);
COMPARE_EVENT_KEY(count, stats.n);
COMPARE_EVENT_KEY(mean, stats.mean);

#define DEF_SORT_NAME_KEY(name, compare_key)				\
	{ #name, compare_kvm_event_ ## compare_key }

static struct kvm_event_key keys[] = {
	DEF_SORT_NAME_KEY(sample, count),
	DEF_SORT_NAME_KEY(time, mean),
	{ NULL, NULL }
};

static const char *sort_key = "sample";
static key_cmp_fun compare;

static bool select_key(void)
{
	int i;

	for (i = 0; keys[i].name; i++) {
		if (!strcmp(keys[i].name, sort_key)) {
			compare = keys[i].key;
			return true;
		}
	}

	pr_err("Unknown compare key:%s\n", sort_key);
	return false;
}
static struct rb_root result;
static void insert_to_result(struct kvm_event *event, key_cmp_fun bigger,
			     int vcpu)
{
	struct rb_node **rb = &result.rb_node;
	struct rb_node *parent = NULL;
	struct kvm_event *p;

	while (*rb) {
		p = container_of(*rb, struct kvm_event, rb);
		parent = *rb;

		if (bigger(event, p, vcpu))
			rb = &(*rb)->rb_left;
		else
			rb = &(*rb)->rb_right;
	}

	rb_link_node(&event->rb, parent, rb);
	rb_insert_color(&event->rb, &result);
}

static void update_total_count(struct kvm_event *event, int vcpu)
{
	total_count += get_event_count(event, vcpu);
	total_time += get_event_time(event, vcpu);
}

static bool event_is_valid(struct kvm_event *event, int vcpu)
{
	return !!get_event_count(event, vcpu);
}

static void sort_result(int vcpu)
{
	unsigned int i;
	struct kvm_event *event;

	for (i = 0; i < EVENTS_CACHE_SIZE; i++)
		list_for_each_entry(event, &kvm_events_cache[i], hash_entry)
			if (event_is_valid(event, vcpu)) {
				update_total_count(event, vcpu);
				insert_to_result(event, compare, vcpu);
			}
}

/* returns left most element of result, and erase it */
static struct kvm_event *pop_from_result(void)
{
	struct rb_node *node = rb_first(&result);

	if (!node)
		return NULL;

	rb_erase(node, &result);
	return container_of(node, struct kvm_event, rb);
}

static void print_vcpu_info(int vcpu)
{
	pr_info("Analyze events for ");

	if (vcpu == -1)
		pr_info("all VCPUs:\n\n");
	else
		pr_info("VCPU %d:\n\n", vcpu);
}

static void print_result(int vcpu)
{
	char decode[20];
	struct kvm_event *event;

	pr_info("\n\n");
	print_vcpu_info(vcpu);
	pr_info("%20s ", events_ops->name);
	pr_info("%10s ", "Samples");
	pr_info("%9s ", "Samples%");

	pr_info("%9s ", "Time%");
	pr_info("%16s ", "Avg time");
	pr_info("\n\n");

	while ((event = pop_from_result())) {
		u64 ecount, etime;

		ecount = get_event_count(event, vcpu);
		etime = get_event_time(event, vcpu);

		events_ops->decode_key(&event->key, decode);
		pr_info("%20s ", decode);
		pr_info("%10llu ", (unsigned long long)ecount);
		pr_info("%8.2f%% ", (double)ecount / total_count * 100);
		pr_info("%8.2f%% ", (double)etime / total_time * 100);
		pr_info("%9.2fus ( +-%7.2f%% )", (double)etime / ecount / 1e3,
			kvm_event_rel_stddev(vcpu, event));
		pr_info("\n");
	}

	pr_info("\nTotal Samples:%lld, Total events handled time:%.2fus.\n\n",
		(unsigned long long)total_count, total_time / 1e3);
}
static int process_sample_event(struct perf_tool *tool __maybe_unused,
				union perf_event *event,
				struct perf_sample *sample,
				struct perf_evsel *evsel,
				struct machine *machine)
{
	struct thread *thread = machine__findnew_thread(machine, sample->tid);

	if (thread == NULL) {
		pr_debug("problem processing %d event, skipping it.\n",
			 event->header.type);
		return -1;
	}

	if (!handle_kvm_event(thread, evsel, sample))
		return -1;

	return 0;
}

static struct perf_tool eops = {
	.sample			= process_sample_event,
	.comm			= perf_event__process_comm,
	.ordered_samples	= true,
};

static int get_cpu_isa(struct perf_session *session)
{
	char *cpuid = session->header.env.cpuid;
	int isa;

	if (strstr(cpuid, "Intel"))
		isa = 1;
	else if (strstr(cpuid, "AMD"))
		isa = 0;
	else {
		pr_err("CPU %s is not supported.\n", cpuid);
		isa = -ENOTSUP;
	}

	return isa;
}

static const char *file_name;

static int read_events(void)
{
	struct perf_session *kvm_session;
	int ret;

	kvm_session = perf_session__new(file_name, O_RDONLY, 0, false, &eops);
	if (!kvm_session) {
		pr_err("Initializing perf session failed\n");
		return -EINVAL;
	}

	if (!perf_session__has_traces(kvm_session, "kvm record"))
		return -EINVAL;

	/*
	 * Do not use 'isa' recorded in kvm_exit tracepoint since it is not
	 * traced in the old kernel.
	 */
	ret = get_cpu_isa(kvm_session);

	if (ret < 0)
		return ret;

	cpu_isa = ret;

	return perf_session__process_events(kvm_session, &eops);
}
static bool verify_vcpu(int vcpu)
{
	if (vcpu != -1 && vcpu < 0) {
		pr_err("Invalid vcpu:%d.\n", vcpu);
		return false;
	}

	return true;
}

static int kvm_events_report_vcpu(int vcpu)
{
	int ret = -EINVAL;

	if (!verify_vcpu(vcpu))
		goto exit;

	if (!select_key())
		goto exit;

	if (!register_kvm_events_ops())
		goto exit;

	init_kvm_event_record();
	setup_pager();

	ret = read_events();
	if (ret)
		goto exit;

	sort_result(vcpu);
	print_result(vcpu);
exit:
	return ret;
}

static const char * const record_args[] = {
	"record",
	"-R",
	"-f",
	"-m", "1024",
	"-c", "1",
	"-e", "kvm:kvm_entry",
	"-e", "kvm:kvm_exit",
	"-e", "kvm:kvm_mmio",
	"-e", "kvm:kvm_pio",
};

#define STRDUP_FAIL_EXIT(s)		\
	({	char *_p;		\
		_p = strdup(s);		\
		if (!_p)		\
			return -ENOMEM;	\
		_p;			\
	})

static int kvm_events_record(int argc, const char **argv)
{
	unsigned int rec_argc, i, j;
	const char **rec_argv;

	rec_argc = ARRAY_SIZE(record_args) + argc + 2;
	rec_argv = calloc(rec_argc + 1, sizeof(char *));

	if (rec_argv == NULL)
		return -ENOMEM;

	for (i = 0; i < ARRAY_SIZE(record_args); i++)
		rec_argv[i] = STRDUP_FAIL_EXIT(record_args[i]);

	rec_argv[i++] = STRDUP_FAIL_EXIT("-o");
	rec_argv[i++] = STRDUP_FAIL_EXIT(file_name);

	for (j = 1; j < (unsigned int)argc; j++, i++)
		rec_argv[i] = argv[j];

	return cmd_record(i, rec_argv, NULL);
}
static const char * const kvm_events_report_usage[] = {
	"perf kvm stat report [<options>]",
	NULL
};

static const struct option kvm_events_report_options[] = {
	OPT_STRING(0, "event", &report_event, "report event",
		   "event for reporting: vmexit, mmio, ioport"),
	OPT_INTEGER(0, "vcpu", &trace_vcpu,
		    "vcpu id to report"),
	OPT_STRING('k', "key", &sort_key, "sort-key",
		   "key for sorting: sample(sort by samples number)"
		   " time (sort by avg time)"),
	OPT_END()
};

static int kvm_events_report(int argc, const char **argv)
{
	symbol__init();

	if (argc) {
		argc = parse_options(argc, argv,
				     kvm_events_report_options,
				     kvm_events_report_usage, 0);
		if (argc)
			usage_with_options(kvm_events_report_usage,
					   kvm_events_report_options);
	}

	return kvm_events_report_vcpu(trace_vcpu);
}

static void print_kvm_stat_usage(void)
{
	printf("Usage: perf kvm stat <command>\n\n");

	printf("# Available commands:\n");
	printf("\trecord: record kvm events\n");
	printf("\treport: report statistical data of kvm events\n");

	printf("\nOtherwise, it is the alias of 'perf stat':\n");
}

static int kvm_cmd_stat(int argc, const char **argv)
{
	if (argc == 1) {
		print_kvm_stat_usage();
		goto perf_stat;
	}

	if (!strncmp(argv[1], "rec", 3))
		return kvm_events_record(argc - 1, argv + 1);

	if (!strncmp(argv[1], "rep", 3))
		return kvm_events_report(argc - 1, argv + 1);

perf_stat:
	return cmd_stat(argc, argv, NULL);
}

static char name_buffer[256];

static const char * const kvm_usage[] = {
-	"perf kvm [<options>] {top|record|report|diff|buildid-list}",
+	"perf kvm [<options>] {top|record|report|diff|buildid-list|stat}",
	NULL
};
@@ -102,7 +930,7 @@ static int __cmd_buildid_list(int argc, const char **argv)
	return cmd_buildid_list(i, rec_argv, NULL);
}

-int cmd_kvm(int argc, const char **argv, const char *prefix __used)
+int cmd_kvm(int argc, const char **argv, const char *prefix __maybe_unused)
{
	perf_host = 0;
	perf_guest = 1;
@@ -135,6 +963,8 @@ int cmd_kvm(int argc, const char **argv, const char *prefix __used)
		return cmd_top(argc, argv, NULL);
	else if (!strncmp(argv[0], "buildid-list", 12))
		return __cmd_buildid_list(argc, argv);
+	else if (!strncmp(argv[0], "stat", 4))
+		return kvm_cmd_stat(argc, argv);
	else
		usage_with_options(kvm_usage, kvm_options);

@@ -14,20 +14,20 @@
 #include "util/parse-events.h"
 #include "util/cache.h"

-int cmd_list(int argc, const char **argv, const char *prefix __used)
+int cmd_list(int argc, const char **argv, const char *prefix __maybe_unused)
{
	setup_pager();

	if (argc == 1)
-		print_events(NULL);
+		print_events(NULL, false);
	else {
		int i;

		for (i = 1; i < argc; ++i) {
-			if (i > 1)
+			if (i > 2)
				putchar('\n');
			if (strncmp(argv[i], "tracepoint", 10) == 0)
-				print_tracepoint_events(NULL, NULL);
+				print_tracepoint_events(NULL, NULL, false);
			else if (strcmp(argv[i], "hw") == 0 ||
				 strcmp(argv[i], "hardware") == 0)
				print_events_type(PERF_TYPE_HARDWARE);
@@ -36,13 +36,15 @@ int cmd_list(int argc, const char **argv, const char *prefix __used)
				print_events_type(PERF_TYPE_SOFTWARE);
			else if (strcmp(argv[i], "cache") == 0 ||
				 strcmp(argv[i], "hwcache") == 0)
-				print_hwcache_events(NULL);
+				print_hwcache_events(NULL, false);
+			else if (strcmp(argv[i], "--raw-dump") == 0)
+				print_events(NULL, true);
			else {
				char *sep = strchr(argv[i], ':'), *s;
				int sep_idx;

				if (sep == NULL) {
-					print_events(argv[i]);
+					print_events(argv[i], false);
					continue;
				}
				sep_idx = sep - argv[i];
@@ -51,7 +53,7 @@ int cmd_list(int argc, const char **argv, const char *prefix __used)
					return -1;

				s[sep_idx] = '\0';
-				print_tracepoint_events(s, s + sep_idx + 1);
+				print_tracepoint_events(s, s + sep_idx + 1, false);
				free(s);
			}
		}
@@ -1,6 +1,8 @@
 #include "builtin.h"
 #include "perf.h"

+#include "util/evlist.h"
+#include "util/evsel.h"
 #include "util/util.h"
 #include "util/cache.h"
 #include "util/symbol.h"
@@ -40,7 +42,7 @@ struct lock_stat {
	struct rb_node		rb;		/* used for sorting */

	/*
-	 * FIXME: raw_field_value() returns unsigned long long,
+	 * FIXME: perf_evsel__intval() returns u64,
	 * so address of lockdep_map should be dealed as 64bit.
	 * Is there more better solution?
	 */
@@ -160,8 +162,10 @@ static struct thread_stat *thread_stat_findnew_after_first(u32 tid)
		return st;

	st = zalloc(sizeof(struct thread_stat));
-	if (!st)
-		die("memory allocation failed\n");
+	if (!st) {
+		pr_err("memory allocation failed\n");
+		return NULL;
+	}

	st->tid = tid;
	INIT_LIST_HEAD(&st->seq_list);
@@ -180,8 +184,10 @@ static struct thread_stat *thread_stat_findnew_first(u32 tid)
	struct thread_stat *st;

	st = zalloc(sizeof(struct thread_stat));
-	if (!st)
-		die("memory allocation failed\n");
+	if (!st) {
+		pr_err("memory allocation failed\n");
+		return NULL;
+	}
	st->tid = tid;
	INIT_LIST_HEAD(&st->seq_list);

@@ -247,18 +253,20 @@ struct lock_key keys[] = {
	{ NULL, NULL }
};

-static void select_key(void)
+static int select_key(void)
{
	int i;

	for (i = 0; keys[i].name; i++) {
		if (!strcmp(keys[i].name, sort_key)) {
			compare = keys[i].key;
-			return;
+			return 0;
		}
	}

-	die("Unknown compare key:%s\n", sort_key);
+	pr_err("Unknown compare key: %s\n", sort_key);
+
+	return -1;
}

static void insert_to_result(struct lock_stat *st,
@@ -323,61 +331,24 @@ static struct lock_stat *lock_stat_findnew(void *addr, const char *name)
	return new;

alloc_failed:
-	die("memory allocation failed\n");
+	pr_err("memory allocation failed\n");
+	return NULL;
}

-static const char *input_name;
-
-struct raw_event_sample {
-	u32 size;
-	char data[0];
-};
-
-struct trace_acquire_event {
-	void *addr;
-	const char *name;
-	int flag;
-};
-
-struct trace_acquired_event {
-	void *addr;
-	const char *name;
-};
-
-struct trace_contended_event {
-	void *addr;
-	const char *name;
-};
-
-struct trace_release_event {
-	void *addr;
-	const char *name;
-};
-
 struct trace_lock_handler {
-	void (*acquire_event)(struct trace_acquire_event *,
-			      struct event_format *,
-			      int cpu,
-			      u64 timestamp,
-			      struct thread *thread);
+	int (*acquire_event)(struct perf_evsel *evsel,
+			     struct perf_sample *sample);

-	void (*acquired_event)(struct trace_acquired_event *,
-			       struct event_format *,
-			       int cpu,
-			       u64 timestamp,
-			       struct thread *thread);
+	int (*acquired_event)(struct perf_evsel *evsel,
+			      struct perf_sample *sample);

-	void (*contended_event)(struct trace_contended_event *,
-				struct event_format *,
-				int cpu,
-				u64 timestamp,
-				struct thread *thread);
+	int (*contended_event)(struct perf_evsel *evsel,
+			       struct perf_sample *sample);

-	void (*release_event)(struct trace_release_event *,
-			      struct event_format *,
-			      int cpu,
-			      u64 timestamp,
-			      struct thread *thread);
+	int (*release_event)(struct perf_evsel *evsel,
+			     struct perf_sample *sample);
 };

@@ -390,8 +361,10 @@ static struct lock_seq_stat *get_seq(struct thread_stat *ts, void *addr)
	}

	seq = zalloc(sizeof(struct lock_seq_stat));
-	if (!seq)
-		die("Not enough memory\n");
+	if (!seq) {
+		pr_err("memory allocation failed\n");
+		return NULL;
+	}
	seq->state = SEQ_STATE_UNINITIALIZED;
	seq->addr = addr;

@@ -414,33 +387,42 @@ enum acquire_flags {
	READ_LOCK = 2,
};

-static void
-report_lock_acquire_event(struct trace_acquire_event *acquire_event,
-			  struct event_format *__event __used,
-			  int cpu __used,
-			  u64 timestamp __used,
-			  struct thread *thread __used)
+static int report_lock_acquire_event(struct perf_evsel *evsel,
+				     struct perf_sample *sample)
{
+	void *addr;
	struct lock_stat *ls;
	struct thread_stat *ts;
	struct lock_seq_stat *seq;
+	const char *name = perf_evsel__strval(evsel, sample, "name");
+	u64 tmp = perf_evsel__intval(evsel, sample, "lockdep_addr");
+	int flag = perf_evsel__intval(evsel, sample, "flag");

-	ls = lock_stat_findnew(acquire_event->addr, acquire_event->name);
+	memcpy(&addr, &tmp, sizeof(void *));
+
+	ls = lock_stat_findnew(addr, name);
+	if (!ls)
+		return -1;
	if (ls->discard)
-		return;
+		return 0;

-	ts = thread_stat_findnew(thread->pid);
-	seq = get_seq(ts, acquire_event->addr);
+	ts = thread_stat_findnew(sample->tid);
+	if (!ts)
+		return -1;
+
+	seq = get_seq(ts, addr);
+	if (!seq)
+		return -1;

	switch (seq->state) {
	case SEQ_STATE_UNINITIALIZED:
	case SEQ_STATE_RELEASED:
-		if (!acquire_event->flag) {
+		if (!flag) {
			seq->state = SEQ_STATE_ACQUIRING;
		} else {
-			if (acquire_event->flag & TRY_LOCK)
+			if (flag & TRY_LOCK)
				ls->nr_trylock++;
-			if (acquire_event->flag & READ_LOCK)
+			if (flag & READ_LOCK)
				ls->nr_readlock++;
			seq->state = SEQ_STATE_READ_ACQUIRED;
			seq->read_count = 1;
@@ -448,7 +430,7 @@ report_lock_acquire_event(struct trace_acquire_event *acquire_event,
		}
		break;
	case SEQ_STATE_READ_ACQUIRED:
-		if (acquire_event->flag & READ_LOCK) {
+		if (flag & READ_LOCK) {
			seq->read_count++;
			ls->nr_acquired++;
			goto end;
@@ -473,38 +455,46 @@ broken:
	}

	ls->nr_acquire++;
-	seq->prev_event_time = timestamp;
+	seq->prev_event_time = sample->time;
end:
-	return;
+	return 0;
}
-static void
-report_lock_acquired_event(struct trace_acquired_event *acquired_event,
-			   struct event_format *__event __used,
-			   int cpu __used,
-			   u64 timestamp __used,
-			   struct thread *thread __used)
+static int report_lock_acquired_event(struct perf_evsel *evsel,
+				      struct perf_sample *sample)
{
+	void *addr;
	struct lock_stat *ls;
	struct thread_stat *ts;
	struct lock_seq_stat *seq;
	u64 contended_term;
+	const char *name = perf_evsel__strval(evsel, sample, "name");
+	u64 tmp = perf_evsel__intval(evsel, sample, "lockdep_addr");

-	ls = lock_stat_findnew(acquired_event->addr, acquired_event->name);
+	memcpy(&addr, &tmp, sizeof(void *));
+
+	ls = lock_stat_findnew(addr, name);
+	if (!ls)
+		return -1;
	if (ls->discard)
-		return;
+		return 0;

-	ts = thread_stat_findnew(thread->pid);
-	seq = get_seq(ts, acquired_event->addr);
+	ts = thread_stat_findnew(sample->tid);
+	if (!ts)
+		return -1;
+
+	seq = get_seq(ts, addr);
+	if (!seq)
+		return -1;

	switch (seq->state) {
	case SEQ_STATE_UNINITIALIZED:
		/* orphan event, do nothing */
-		return;
+		return 0;
	case SEQ_STATE_ACQUIRING:
		break;
	case SEQ_STATE_CONTENDED:
-		contended_term = timestamp - seq->prev_event_time;
+		contended_term = sample->time - seq->prev_event_time;
		ls->wait_time_total += contended_term;
		if (contended_term < ls->wait_time_min)
			ls->wait_time_min = contended_term;
@@ -529,33 +519,41 @@ report_lock_acquired_event(struct trace_acquired_event *acquired_event,

	seq->state = SEQ_STATE_ACQUIRED;
	ls->nr_acquired++;
-	seq->prev_event_time = timestamp;
+	seq->prev_event_time = sample->time;
end:
-	return;
+	return 0;
}
-static void
-report_lock_contended_event(struct trace_contended_event *contended_event,
-			    struct event_format *__event __used,
-			    int cpu __used,
-			    u64 timestamp __used,
-			    struct thread *thread __used)
+static int report_lock_contended_event(struct perf_evsel *evsel,
+				       struct perf_sample *sample)
{
+	void *addr;
	struct lock_stat *ls;
	struct thread_stat *ts;
	struct lock_seq_stat *seq;
+	const char *name = perf_evsel__strval(evsel, sample, "name");
+	u64 tmp = perf_evsel__intval(evsel, sample, "lockdep_addr");

-	ls = lock_stat_findnew(contended_event->addr, contended_event->name);
+	memcpy(&addr, &tmp, sizeof(void *));
+
+	ls = lock_stat_findnew(addr, name);
+	if (!ls)
+		return -1;
	if (ls->discard)
-		return;
+		return 0;

-	ts = thread_stat_findnew(thread->pid);
-	seq = get_seq(ts, contended_event->addr);
+	ts = thread_stat_findnew(sample->tid);
+	if (!ts)
+		return -1;
+
+	seq = get_seq(ts, addr);
+	if (!seq)
+		return -1;

	switch (seq->state) {
	case SEQ_STATE_UNINITIALIZED:
		/* orphan event, do nothing */
-		return;
+		return 0;
	case SEQ_STATE_ACQUIRING:
		break;
	case SEQ_STATE_RELEASED:
@@ -576,28 +574,36 @@ report_lock_contended_event(struct trace_contended_event *contended_event,

	seq->state = SEQ_STATE_CONTENDED;
	ls->nr_contended++;
-	seq->prev_event_time = timestamp;
+	seq->prev_event_time = sample->time;
end:
-	return;
+	return 0;
}
static void
|
||||
report_lock_release_event(struct trace_release_event *release_event,
|
||||
struct event_format *__event __used,
|
||||
int cpu __used,
|
||||
u64 timestamp __used,
|
||||
struct thread *thread __used)
|
||||
static int report_lock_release_event(struct perf_evsel *evsel,
|
||||
struct perf_sample *sample)
|
||||
{
|
||||
void *addr;
|
||||
struct lock_stat *ls;
|
||||
struct thread_stat *ts;
|
||||
struct lock_seq_stat *seq;
|
||||
const char *name = perf_evsel__strval(evsel, sample, "name");
|
||||
u64 tmp = perf_evsel__intval(evsel, sample, "lockdep_addr");
|
||||
|
||||
ls = lock_stat_findnew(release_event->addr, release_event->name);
|
||||
memcpy(&addr, &tmp, sizeof(void *));
|
||||
|
||||
ls = lock_stat_findnew(addr, name);
|
||||
if (!ls)
|
||||
return -1;
|
||||
if (ls->discard)
|
||||
return;
|
||||
return 0;
|
||||
|
||||
ts = thread_stat_findnew(thread->pid);
|
||||
seq = get_seq(ts, release_event->addr);
|
||||
ts = thread_stat_findnew(sample->tid);
|
||||
if (!ts)
|
||||
return -1;
|
||||
|
||||
seq = get_seq(ts, addr);
|
||||
if (!seq)
|
||||
return -1;
|
||||
|
||||
switch (seq->state) {
|
||||
case SEQ_STATE_UNINITIALIZED:
|
||||
@ -631,7 +637,7 @@ free_seq:
|
||||
list_del(&seq->list);
|
||||
free(seq);
|
||||
end:
|
||||
return;
|
||||
return 0;
|
||||
}

/* lock oriented handlers */
@@ -645,96 +651,36 @@ static struct trace_lock_handler report_lock_ops = {

static struct trace_lock_handler *trace_handler;

static void
process_lock_acquire_event(void *data,
struct event_format *event __used,
int cpu __used,
u64 timestamp __used,
struct thread *thread __used)
static int perf_evsel__process_lock_acquire(struct perf_evsel *evsel,
struct perf_sample *sample)
{
struct trace_acquire_event acquire_event;
u64 tmp; /* this is required for casting... */

tmp = raw_field_value(event, "lockdep_addr", data);
memcpy(&acquire_event.addr, &tmp, sizeof(void *));
acquire_event.name = (char *)raw_field_ptr(event, "name", data);
acquire_event.flag = (int)raw_field_value(event, "flag", data);

if (trace_handler->acquire_event)
trace_handler->acquire_event(&acquire_event, event, cpu, timestamp, thread);
return trace_handler->acquire_event(evsel, sample);
return 0;
}

static void
process_lock_acquired_event(void *data,
struct event_format *event __used,
int cpu __used,
u64 timestamp __used,
struct thread *thread __used)
static int perf_evsel__process_lock_acquired(struct perf_evsel *evsel,
struct perf_sample *sample)
{
struct trace_acquired_event acquired_event;
u64 tmp; /* this is required for casting... */

tmp = raw_field_value(event, "lockdep_addr", data);
memcpy(&acquired_event.addr, &tmp, sizeof(void *));
acquired_event.name = (char *)raw_field_ptr(event, "name", data);

if (trace_handler->acquire_event)
trace_handler->acquired_event(&acquired_event, event, cpu, timestamp, thread);
if (trace_handler->acquired_event)
return trace_handler->acquired_event(evsel, sample);
return 0;
}

static void
process_lock_contended_event(void *data,
struct event_format *event __used,
int cpu __used,
u64 timestamp __used,
struct thread *thread __used)
static int perf_evsel__process_lock_contended(struct perf_evsel *evsel,
struct perf_sample *sample)
{
struct trace_contended_event contended_event;
u64 tmp; /* this is required for casting... */

tmp = raw_field_value(event, "lockdep_addr", data);
memcpy(&contended_event.addr, &tmp, sizeof(void *));
contended_event.name = (char *)raw_field_ptr(event, "name", data);

if (trace_handler->acquire_event)
trace_handler->contended_event(&contended_event, event, cpu, timestamp, thread);
if (trace_handler->contended_event)
return trace_handler->contended_event(evsel, sample);
return 0;
}

static void
process_lock_release_event(void *data,
struct event_format *event __used,
int cpu __used,
u64 timestamp __used,
struct thread *thread __used)
static int perf_evsel__process_lock_release(struct perf_evsel *evsel,
struct perf_sample *sample)
{
struct trace_release_event release_event;
u64 tmp; /* this is required for casting... */

tmp = raw_field_value(event, "lockdep_addr", data);
memcpy(&release_event.addr, &tmp, sizeof(void *));
release_event.name = (char *)raw_field_ptr(event, "name", data);

if (trace_handler->acquire_event)
trace_handler->release_event(&release_event, event, cpu, timestamp, thread);
}

static void
process_raw_event(void *data, int cpu, u64 timestamp, struct thread *thread)
{
struct event_format *event;
int type;

type = trace_parse_common_type(session->pevent, data);
event = pevent_find_event(session->pevent, type);

if (!strcmp(event->name, "lock_acquire"))
process_lock_acquire_event(data, event, cpu, timestamp, thread);
if (!strcmp(event->name, "lock_acquired"))
process_lock_acquired_event(data, event, cpu, timestamp, thread);
if (!strcmp(event->name, "lock_contended"))
process_lock_contended_event(data, event, cpu, timestamp, thread);
if (!strcmp(event->name, "lock_release"))
process_lock_release_event(data, event, cpu, timestamp, thread);
if (trace_handler->release_event)
return trace_handler->release_event(evsel, sample);
return 0;
}

static void print_bad_events(int bad, int total)
@@ -836,20 +782,29 @@ static void dump_map(void)
}
}

static void dump_info(void)
static int dump_info(void)
{
int rc = 0;

if (info_threads)
dump_threads();
else if (info_map)
dump_map();
else
die("Unknown type of information\n");
else {
rc = -1;
pr_err("Unknown type of information\n");
}

return rc;
}

static int process_sample_event(struct perf_tool *tool __used,
typedef int (*tracepoint_handler)(struct perf_evsel *evsel,
struct perf_sample *sample);

static int process_sample_event(struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct perf_sample *sample,
struct perf_evsel *evsel __used,
struct perf_evsel *evsel,
struct machine *machine)
{
struct thread *thread = machine__findnew_thread(machine, sample->tid);
@@ -860,7 +815,10 @@ static int process_sample_event(struct perf_tool *tool __used,
return -1;
}

process_raw_event(sample->raw_data, sample->cpu, sample->time, thread);
if (evsel->handler.func != NULL) {
tracepoint_handler f = evsel->handler.func;
return f(evsel, sample);
}

return 0;
}
@@ -871,11 +829,25 @@ static struct perf_tool eops = {
.ordered_samples = true,
};

static const struct perf_evsel_str_handler lock_tracepoints[] = {
{ "lock:lock_acquire", perf_evsel__process_lock_acquire, }, /* CONFIG_LOCKDEP */
{ "lock:lock_acquired", perf_evsel__process_lock_acquired, }, /* CONFIG_LOCKDEP, CONFIG_LOCK_STAT */
{ "lock:lock_contended", perf_evsel__process_lock_contended, }, /* CONFIG_LOCKDEP, CONFIG_LOCK_STAT */
{ "lock:lock_release", perf_evsel__process_lock_release, }, /* CONFIG_LOCKDEP */
};

static int read_events(void)
{
session = perf_session__new(input_name, O_RDONLY, 0, false, &eops);
if (!session)
die("Initializing perf session failed\n");
if (!session) {
pr_err("Initializing perf session failed\n");
return -1;
}

if (perf_session__set_tracepoints_handlers(session, lock_tracepoints)) {
pr_err("Initializing perf session tracepoint handlers failed\n");
return -1;
}

return perf_session__process_events(session, &eops);
}
@@ -892,13 +864,18 @@ static void sort_result(void)
}
}

static void __cmd_report(void)
static int __cmd_report(void)
{
setup_pager();
select_key();
read_events();

if ((select_key() != 0) ||
(read_events() != 0))
return -1;

sort_result();
print_result();

return 0;
}

static const char * const report_usage[] = {
@@ -944,10 +921,6 @@ static const char *record_args[] = {
"-f",
"-m", "1024",
"-c", "1",
"-e", "lock:lock_acquire",
"-e", "lock:lock_acquired",
"-e", "lock:lock_contended",
"-e", "lock:lock_release",
};

static int __cmd_record(int argc, const char **argv)
@@ -955,15 +928,31 @@ static int __cmd_record(int argc, const char **argv)
unsigned int rec_argc, i, j;
const char **rec_argv;

rec_argc = ARRAY_SIZE(record_args) + argc - 1;
rec_argv = calloc(rec_argc + 1, sizeof(char *));
for (i = 0; i < ARRAY_SIZE(lock_tracepoints); i++) {
if (!is_valid_tracepoint(lock_tracepoints[i].name)) {
pr_err("tracepoint %s is not enabled. "
"Are CONFIG_LOCKDEP and CONFIG_LOCK_STAT enabled?\n",
lock_tracepoints[i].name);
return 1;
}
}

rec_argc = ARRAY_SIZE(record_args) + argc - 1;
/* factor of 2 is for -e in front of each tracepoint */
rec_argc += 2 * ARRAY_SIZE(lock_tracepoints);

rec_argv = calloc(rec_argc + 1, sizeof(char *));
if (rec_argv == NULL)
return -ENOMEM;

for (i = 0; i < ARRAY_SIZE(record_args); i++)
rec_argv[i] = strdup(record_args[i]);

for (j = 0; j < ARRAY_SIZE(lock_tracepoints); j++) {
rec_argv[i++] = "-e";
rec_argv[i++] = strdup(lock_tracepoints[j].name);
}

for (j = 1; j < (unsigned int)argc; j++, i++)
rec_argv[i] = argv[j];

@@ -972,9 +961,10 @@ static int __cmd_record(int argc, const char **argv)
return cmd_record(i, rec_argv, NULL);
}

int cmd_lock(int argc, const char **argv, const char *prefix __used)
int cmd_lock(int argc, const char **argv, const char *prefix __maybe_unused)
{
unsigned int i;
int rc = 0;

symbol__init();
for (i = 0; i < LOCKHASH_SIZE; i++)
@@ -1009,11 +999,13 @@ int cmd_lock(int argc, const char **argv, const char *prefix __used)
/* recycling report_lock_ops */
trace_handler = &report_lock_ops;
setup_pager();
read_events();
dump_info();
if (read_events() != 0)
rc = -1;
else
rc = dump_info();
} else {
usage_with_options(lock_usage, lock_options);
}

return 0;
return rc;
}

@@ -143,8 +143,8 @@ static int parse_probe_event_argv(int argc, const char **argv)
return ret;
}

static int opt_add_probe_event(const struct option *opt __used,
const char *str, int unset __used)
static int opt_add_probe_event(const struct option *opt __maybe_unused,
const char *str, int unset __maybe_unused)
{
if (str) {
params.mod_events = true;
@@ -153,8 +153,8 @@ static int opt_add_probe_event(const struct option *opt __used,
return 0;
}

static int opt_del_probe_event(const struct option *opt __used,
const char *str, int unset __used)
static int opt_del_probe_event(const struct option *opt __maybe_unused,
const char *str, int unset __maybe_unused)
{
if (str) {
params.mod_events = true;
@@ -166,7 +166,7 @@ static int opt_del_probe_event(const struct option *opt __used,
}

static int opt_set_target(const struct option *opt, const char *str,
int unset __used)
int unset __maybe_unused)
{
int ret = -ENOENT;

@@ -188,8 +188,8 @@ static int opt_set_target(const struct option *opt, const char *str,
}

#ifdef DWARF_SUPPORT
static int opt_show_lines(const struct option *opt __used,
const char *str, int unset __used)
static int opt_show_lines(const struct option *opt __maybe_unused,
const char *str, int unset __maybe_unused)
{
int ret = 0;

@@ -209,8 +209,8 @@ static int opt_show_lines(const struct option *opt __used,
return ret;
}

static int opt_show_vars(const struct option *opt __used,
const char *str, int unset __used)
static int opt_show_vars(const struct option *opt __maybe_unused,
const char *str, int unset __maybe_unused)
{
struct perf_probe_event *pev = &params.events[params.nevents];
int ret;
@@ -229,8 +229,8 @@ static int opt_show_vars(const struct option *opt __used,
}
#endif

static int opt_set_filter(const struct option *opt __used,
const char *str, int unset __used)
static int opt_set_filter(const struct option *opt __maybe_unused,
const char *str, int unset __maybe_unused)
{
const char *err;

@@ -327,7 +327,7 @@ static const struct option options[] = {
OPT_END()
};

int cmd_probe(int argc, const char **argv, const char *prefix __used)
int cmd_probe(int argc, const char **argv, const char *prefix __maybe_unused)
{
int ret;

@@ -31,6 +31,15 @@
#include <sched.h>
#include <sys/mman.h>

#define CALLCHAIN_HELP "do call-graph (stack chain/backtrace) recording: "

#ifdef NO_LIBUNWIND_SUPPORT
static char callchain_help[] = CALLCHAIN_HELP "[fp]";
#else
static unsigned long default_stack_dump_size = 8192;
static char callchain_help[] = CALLCHAIN_HELP "[fp] dwarf";
#endif

enum write_mode_t {
WRITE_FORCE,
WRITE_APPEND
@@ -62,32 +71,38 @@ static void advance_output(struct perf_record *rec, size_t size)
rec->bytes_written += size;
}

static void write_output(struct perf_record *rec, void *buf, size_t size)
static int write_output(struct perf_record *rec, void *buf, size_t size)
{
while (size) {
int ret = write(rec->output, buf, size);

if (ret < 0)
die("failed to write");
if (ret < 0) {
pr_err("failed to write\n");
return -1;
}

size -= ret;
buf += ret;

rec->bytes_written += ret;
}

return 0;
}

static int process_synthesized_event(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample __used,
struct machine *machine __used)
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
{
struct perf_record *rec = container_of(tool, struct perf_record, tool);
write_output(rec, event, event->header.size);
if (write_output(rec, event, event->header.size) < 0)
return -1;

return 0;
}

static void perf_record__mmap_read(struct perf_record *rec,
static int perf_record__mmap_read(struct perf_record *rec,
struct perf_mmap *md)
{
unsigned int head = perf_mmap__read_head(md);
@@ -95,9 +110,10 @@ static void perf_record__mmap_read(struct perf_record *rec,
unsigned char *data = md->base + rec->page_size;
unsigned long size;
void *buf;
int rc = 0;

if (old == head)
return;
return 0;

rec->samples++;

@@ -108,17 +124,26 @@ static void perf_record__mmap_read(struct perf_record *rec,
size = md->mask + 1 - (old & md->mask);
old += size;

write_output(rec, buf, size);
if (write_output(rec, buf, size) < 0) {
rc = -1;
goto out;
}
}

buf = &data[old & md->mask];
size = head - old;
old += size;

write_output(rec, buf, size);
if (write_output(rec, buf, size) < 0) {
rc = -1;
goto out;
}

md->prev = old;
perf_mmap__write_tail(md, old);

out:
return rc;
}

static volatile int done = 0;
@@ -134,7 +159,7 @@ static void sig_handler(int sig)
signr = sig;
}

static void perf_record__sig_exit(int exit_status __used, void *arg)
static void perf_record__sig_exit(int exit_status __maybe_unused, void *arg)
{
struct perf_record *rec = arg;
int status;
@@ -163,31 +188,32 @@ static bool perf_evlist__equal(struct perf_evlist *evlist,
if (evlist->nr_entries != other->nr_entries)
return false;

pair = list_entry(other->entries.next, struct perf_evsel, node);
pair = perf_evlist__first(other);

list_for_each_entry(pos, &evlist->entries, node) {
if (memcmp(&pos->attr, &pair->attr, sizeof(pos->attr) != 0))
return false;
pair = list_entry(pair->node.next, struct perf_evsel, node);
pair = perf_evsel__next(pair);
}

return true;
}

static void perf_record__open(struct perf_record *rec)
static int perf_record__open(struct perf_record *rec)
{
struct perf_evsel *pos, *first;
struct perf_evsel *pos;
struct perf_evlist *evlist = rec->evlist;
struct perf_session *session = rec->session;
struct perf_record_opts *opts = &rec->opts;

first = list_entry(evlist->entries.next, struct perf_evsel, node);
int rc = 0;

perf_evlist__config_attrs(evlist, opts);

if (opts->group)
perf_evlist__set_leader(evlist);

list_for_each_entry(pos, &evlist->entries, node) {
struct perf_event_attr *attr = &pos->attr;
struct xyarray *group_fd = NULL;
/*
* Check if parse_single_tracepoint_event has already asked for
* PERF_SAMPLE_TIME.
@@ -202,24 +228,24 @@ static void perf_record__open(struct perf_record *rec)
*/
bool time_needed = attr->sample_type & PERF_SAMPLE_TIME;

if (opts->group && pos != first)
group_fd = first->fd;
fallback_missing_features:
if (opts->exclude_guest_missing)
attr->exclude_guest = attr->exclude_host = 0;
retry_sample_id:
attr->sample_id_all = opts->sample_id_all_missing ? 0 : 1;
try_again:
if (perf_evsel__open(pos, evlist->cpus, evlist->threads,
opts->group, group_fd) < 0) {
if (perf_evsel__open(pos, evlist->cpus, evlist->threads) < 0) {
int err = errno;

if (err == EPERM || err == EACCES) {
ui__error_paranoid();
exit(EXIT_FAILURE);
rc = -err;
goto out;
} else if (err == ENODEV && opts->target.cpu_list) {
die("No such device - did you specify"
" an out-of-range profile CPU?\n");
pr_err("No such device - did you specify"
" an out-of-range profile CPU?\n");
rc = -err;
goto out;
} else if (err == EINVAL) {
if (!opts->exclude_guest_missing &&
(attr->exclude_guest || attr->exclude_host)) {
@@ -266,42 +292,57 @@ try_again:
if (err == ENOENT) {
ui__error("The %s event is not supported.\n",
perf_evsel__name(pos));
exit(EXIT_FAILURE);
rc = -err;
goto out;
}

printf("\n");
error("sys_perf_event_open() syscall returned with %d (%s). /bin/dmesg may provide additional information.\n",
err, strerror(err));
error("sys_perf_event_open() syscall returned with %d "
"(%s) for event %s. /bin/dmesg may provide "
"additional information.\n",
err, strerror(err), perf_evsel__name(pos));

#if defined(__i386__) || defined(__x86_64__)
if (attr->type == PERF_TYPE_HARDWARE && err == EOPNOTSUPP)
die("No hardware sampling interrupt available."
" No APIC? If so then you can boot the kernel"
" with the \"lapic\" boot parameter to"
" force-enable it.\n");
if (attr->type == PERF_TYPE_HARDWARE &&
err == EOPNOTSUPP) {
pr_err("No hardware sampling interrupt available."
" No APIC? If so then you can boot the kernel"
" with the \"lapic\" boot parameter to"
" force-enable it.\n");
rc = -err;
goto out;
}
#endif

die("No CONFIG_PERF_EVENTS=y kernel support configured?\n");
pr_err("No CONFIG_PERF_EVENTS=y kernel support configured?\n");
rc = -err;
goto out;
}
}

if (perf_evlist__set_filters(evlist)) {
if (perf_evlist__apply_filters(evlist)) {
error("failed to set filter with %d (%s)\n", errno,
strerror(errno));
exit(-1);
rc = -1;
goto out;
}

if (perf_evlist__mmap(evlist, opts->mmap_pages, false) < 0) {
if (errno == EPERM)
die("Permission error mapping pages.\n"
"Consider increasing "
"/proc/sys/kernel/perf_event_mlock_kb,\n"
"or try again with a smaller value of -m/--mmap_pages.\n"
"(current value: %d)\n", opts->mmap_pages);
else if (!is_power_of_2(opts->mmap_pages))
die("--mmap_pages/-m value must be a power of two.");

die("failed to mmap with %d (%s)\n", errno, strerror(errno));
if (errno == EPERM) {
pr_err("Permission error mapping pages.\n"
"Consider increasing "
"/proc/sys/kernel/perf_event_mlock_kb,\n"
"or try again with a smaller value of -m/--mmap_pages.\n"
"(current value: %d)\n", opts->mmap_pages);
rc = -errno;
} else if (!is_power_of_2(opts->mmap_pages)) {
pr_err("--mmap_pages/-m value must be a power of two.");
rc = -EINVAL;
} else {
pr_err("failed to mmap with %d (%s)\n", errno, strerror(errno));
rc = -errno;
}
goto out;
}

if (rec->file_new)
@@ -309,11 +350,14 @@ try_again:
else {
if (!perf_evlist__equal(session->evlist, evlist)) {
fprintf(stderr, "incompatible append\n");
exit(-1);
rc = -1;
goto out;
}
}

perf_session__set_id_hdr_size(session);
out:
return rc;
}

static int process_buildids(struct perf_record *rec)
@@ -329,10 +373,13 @@ static int process_buildids(struct perf_record *rec)
size, &build_id__mark_dso_hit_ops);
}

static void perf_record__exit(int status __used, void *arg)
static void perf_record__exit(int status, void *arg)
{
struct perf_record *rec = arg;

if (status != 0)
return;

if (!rec->opts.pipe_output) {
rec->session->header.data_size += rec->bytes_written;

@@ -387,17 +434,26 @@ static struct perf_event_header finished_round_event = {
.type = PERF_RECORD_FINISHED_ROUND,
};

static void perf_record__mmap_read_all(struct perf_record *rec)
static int perf_record__mmap_read_all(struct perf_record *rec)
{
int i;
int rc = 0;

for (i = 0; i < rec->evlist->nr_mmaps; i++) {
if (rec->evlist->mmap[i].base)
perf_record__mmap_read(rec, &rec->evlist->mmap[i]);
if (rec->evlist->mmap[i].base) {
if (perf_record__mmap_read(rec, &rec->evlist->mmap[i]) != 0) {
rc = -1;
goto out;
}
}
}

if (perf_header__has_feat(&rec->session->header, HEADER_TRACING_DATA))
write_output(rec, &finished_round_event, sizeof(finished_round_event));
rc = write_output(rec, &finished_round_event,
sizeof(finished_round_event));

out:
return rc;
}

static int __cmd_record(struct perf_record *rec, int argc, const char **argv)
@@ -457,7 +513,7 @@ static int __cmd_record(struct perf_record *rec, int argc, const char **argv)
output = open(output_name, flags, S_IRUSR | S_IWUSR);
if (output < 0) {
perror("failed to create output file");
exit(-1);
return -1;
}

rec->output = output;
@@ -497,7 +553,10 @@ static int __cmd_record(struct perf_record *rec, int argc, const char **argv)
}
}

perf_record__open(rec);
if (perf_record__open(rec) != 0) {
err = -1;
goto out_delete_session;
}

/*
* perf_session__delete(session) will be called at perf_record__exit()
@@ -507,19 +566,20 @@ static int __cmd_record(struct perf_record *rec, int argc, const char **argv)
if (opts->pipe_output) {
err = perf_header__write_pipe(output);
if (err < 0)
return err;
goto out_delete_session;
} else if (rec->file_new) {
err = perf_session__write_header(session, evsel_list,
output, false);
if (err < 0)
return err;
goto out_delete_session;
}

if (!rec->no_buildid
&& !perf_header__has_feat(&session->header, HEADER_BUILD_ID)) {
pr_err("Couldn't generate buildids. "
"Use --no-buildid to profile anyway.\n");
return -1;
err = -1;
goto out_delete_session;
}

rec->post_processing_offset = lseek(output, 0, SEEK_CUR);
@@ -527,7 +587,8 @@ static int __cmd_record(struct perf_record *rec, int argc, const char **argv)
machine = perf_session__find_host_machine(session);
if (!machine) {
pr_err("Couldn't find native kernel information.\n");
return -1;
err = -1;
goto out_delete_session;
}

if (opts->pipe_output) {
@@ -535,14 +596,14 @@ static int __cmd_record(struct perf_record *rec, int argc, const char **argv)
process_synthesized_event);
if (err < 0) {
pr_err("Couldn't synthesize attrs.\n");
return err;
goto out_delete_session;
}

err = perf_event__synthesize_event_types(tool, process_synthesized_event,
machine);
if (err < 0) {
pr_err("Couldn't synthesize event_types.\n");
return err;
goto out_delete_session;
}

if (have_tracepoints(&evsel_list->entries)) {
@@ -558,7 +619,7 @@ static int __cmd_record(struct perf_record *rec, int argc, const char **argv)
process_synthesized_event);
if (err <= 0) {
pr_err("Couldn't record tracing data.\n");
return err;
goto out_delete_session;
}
advance_output(rec, err);
}
@@ -586,20 +647,24 @@ static int __cmd_record(struct perf_record *rec, int argc, const char **argv)
perf_event__synthesize_guest_os);

if (!opts->target.system_wide)
perf_event__synthesize_thread_map(tool, evsel_list->threads,
err = perf_event__synthesize_thread_map(tool, evsel_list->threads,
process_synthesized_event,
machine);
else
perf_event__synthesize_threads(tool, process_synthesized_event,
err = perf_event__synthesize_threads(tool, process_synthesized_event,
machine);

if (err != 0)
goto out_delete_session;

if (rec->realtime_prio) {
struct sched_param param;

param.sched_priority = rec->realtime_prio;
if (sched_setscheduler(0, SCHED_FIFO, &param)) {
pr_err("Could not set realtime priority.\n");
exit(-1);
err = -1;
goto out_delete_session;
}
}

@@ -614,7 +679,10 @@ static int __cmd_record(struct perf_record *rec, int argc, const char **argv)
for (;;) {
int hits = rec->samples;

perf_record__mmap_read_all(rec);
if (perf_record__mmap_read_all(rec) < 0) {
err = -1;
goto out_delete_session;
}

if (hits == rec->samples) {
if (done)
@@ -732,6 +800,106 @@ error:
return ret;
}

#ifndef NO_LIBUNWIND_SUPPORT
static int get_stack_size(char *str, unsigned long *_size)
{
char *endptr;
unsigned long size;
unsigned long max_size = round_down(USHRT_MAX, sizeof(u64));

size = strtoul(str, &endptr, 0);

do {
if (*endptr)
break;

size = round_up(size, sizeof(u64));
if (!size || size > max_size)
break;

*_size = size;
return 0;

} while (0);

pr_err("callchain: Incorrect stack dump size (max %ld): %s\n",
max_size, str);
return -1;
}
#endif /* !NO_LIBUNWIND_SUPPORT */

static int
parse_callchain_opt(const struct option *opt __maybe_unused, const char *arg,
int unset)
{
struct perf_record *rec = (struct perf_record *)opt->value;
char *tok, *name, *saveptr = NULL;
char *buf;
int ret = -1;

/* --no-call-graph */
if (unset)
return 0;

/* We specified default option if none is provided. */
BUG_ON(!arg);

/* We need buffer that we know we can write to. */
buf = malloc(strlen(arg) + 1);
if (!buf)
return -ENOMEM;

strcpy(buf, arg);

tok = strtok_r((char *)buf, ",", &saveptr);
name = tok ? : (char *)buf;

do {
/* Framepointer style */
if (!strncmp(name, "fp", sizeof("fp"))) {
if (!strtok_r(NULL, ",", &saveptr)) {
rec->opts.call_graph = CALLCHAIN_FP;
ret = 0;
} else
pr_err("callchain: No more arguments "
"needed for -g fp\n");
break;

#ifndef NO_LIBUNWIND_SUPPORT
/* Dwarf style */
} else if (!strncmp(name, "dwarf", sizeof("dwarf"))) {
ret = 0;
rec->opts.call_graph = CALLCHAIN_DWARF;
rec->opts.stack_dump_size = default_stack_dump_size;

tok = strtok_r(NULL, ",", &saveptr);
if (tok) {
unsigned long size = 0;

ret = get_stack_size(tok, &size);
rec->opts.stack_dump_size = size;
}

if (!ret)
pr_debug("callchain: stack dump size %d\n",
rec->opts.stack_dump_size);
#endif /* !NO_LIBUNWIND_SUPPORT */
} else {
pr_err("callchain: Unknown -g option "
"value: %s\n", arg);
break;
}

} while (0);

free(buf);

if (!ret)
pr_debug("callchain: type %d\n", rec->opts.call_graph);

return ret;
}

static const char * const record_usage[] = {
"perf record [<options>] [<command>]",
"perf record [<options>] -- <command> [<options>]",
@@ -803,8 +971,9 @@ const struct option record_options[] = {
"number of mmap data pages"),
OPT_BOOLEAN(0, "group", &record.opts.group,
"put the counters into a counter group"),
OPT_BOOLEAN('g', "call-graph", &record.opts.call_graph,
"do call-graph (stack chain/backtrace) recording"),
OPT_CALLBACK_DEFAULT('g', "call-graph", &record, "mode[,dump_size]",
callchain_help, &parse_callchain_opt,
"fp"),
OPT_INCR('v', "verbose", &verbose,
"be more verbose (show counter open errors, etc)"),
OPT_BOOLEAN('q', "quiet", &quiet, "don't print any message"),
@@ -836,7 +1005,7 @@ const struct option record_options[] = {
OPT_END()
};

int cmd_record(int argc, const char **argv, const char *prefix __used)
int cmd_record(int argc, const char **argv, const char *prefix __maybe_unused)
{
int err = -ENOMEM;
struct perf_evsel *pos;
|
||||
|
@@ -69,8 +69,8 @@ static int perf_report__add_branch_hist_entry(struct perf_tool *tool,
 
 	if ((sort__has_parent || symbol_conf.use_callchain)
 	    && sample->callchain) {
-		err = machine__resolve_callchain(machine, al->thread,
-						 sample->callchain, &parent);
+		err = machine__resolve_callchain(machine, evsel, al->thread,
+						 sample, &parent);
 		if (err)
 			return err;
 	}
@@ -93,7 +93,7 @@ static int perf_report__add_branch_hist_entry(struct perf_tool *tool,
 	struct annotation *notes;
 	err = -ENOMEM;
 	bx = he->branch_info;
-	if (bx->from.sym && use_browser > 0) {
+	if (bx->from.sym && use_browser == 1 && sort__has_sym) {
 		notes = symbol__annotation(bx->from.sym);
 		if (!notes->src
 		    && symbol__alloc_hist(bx->from.sym) < 0)
@@ -107,7 +107,7 @@ static int perf_report__add_branch_hist_entry(struct perf_tool *tool,
 		goto out;
 	}
 
-	if (bx->to.sym && use_browser > 0) {
+	if (bx->to.sym && use_browser == 1 && sort__has_sym) {
 		notes = symbol__annotation(bx->to.sym);
 		if (!notes->src
 		    && symbol__alloc_hist(bx->to.sym) < 0)
@@ -140,8 +140,8 @@ static int perf_evsel__add_hist_entry(struct perf_evsel *evsel,
 	struct hist_entry *he;
 
 	if ((sort__has_parent || symbol_conf.use_callchain) && sample->callchain) {
-		err = machine__resolve_callchain(machine, al->thread,
-						 sample->callchain, &parent);
+		err = machine__resolve_callchain(machine, evsel, al->thread,
+						 sample, &parent);
 		if (err)
 			return err;
 	}
@@ -162,7 +162,7 @@ static int perf_evsel__add_hist_entry(struct perf_evsel *evsel,
 	 * so we don't allocated the extra space needed because the stdio
 	 * code will not use it.
 	 */
-	if (he->ms.sym != NULL && use_browser > 0) {
+	if (he->ms.sym != NULL && use_browser == 1 && sort__has_sym) {
 		struct annotation *notes = symbol__annotation(he->ms.sym);
 
 		assert(evsel != NULL);
@@ -223,9 +223,9 @@ static int process_sample_event(struct perf_tool *tool,
 
 static int process_read_event(struct perf_tool *tool,
 			      union perf_event *event,
-			      struct perf_sample *sample __used,
+			      struct perf_sample *sample __maybe_unused,
 			      struct perf_evsel *evsel,
-			      struct machine *machine __used)
+			      struct machine *machine __maybe_unused)
 {
 	struct perf_report *rep = container_of(tool, struct perf_report, tool);
 
@@ -287,7 +287,7 @@ static int perf_report__setup_sample_type(struct perf_report *rep)
 
 extern volatile int session_done;
 
-static void sig_handler(int sig __used)
+static void sig_handler(int sig __maybe_unused)
 {
 	session_done = 1;
 }
@@ -397,17 +397,17 @@ static int __cmd_report(struct perf_report *rep)
 			    desc);
 	}
 
+	if (dump_trace) {
+		perf_session__fprintf_nr_events(session, stdout);
+		goto out_delete;
+	}
+
 	if (verbose > 3)
 		perf_session__fprintf(session, stdout);
 
 	if (verbose > 2)
 		perf_session__fprintf_dsos(session, stdout);
 
-	if (dump_trace) {
-		perf_session__fprintf_nr_events(session, stdout);
-		goto out_delete;
-	}
-
 	nr_samples = 0;
 	list_for_each_entry(pos, &session->evlist->entries, node) {
 		struct hists *hists = &pos->hists;
@@ -533,13 +533,14 @@ setup:
 }
 
 static int
-parse_branch_mode(const struct option *opt __used, const char *str __used, int unset)
+parse_branch_mode(const struct option *opt __maybe_unused,
+		  const char *str __maybe_unused, int unset)
 {
 	sort__branch_mode = !unset;
 	return 0;
 }
 
-int cmd_report(int argc, const char **argv, const char *prefix __used)
+int cmd_report(int argc, const char **argv, const char *prefix __maybe_unused)
 {
 	struct perf_session *session;
 	struct stat st;
@@ -638,6 +639,8 @@ int cmd_report(int argc, const char **argv, const char *prefix __used)
 		    "Show a column with the sum of periods"),
 	OPT_CALLBACK_NOOPT('b', "branch-stack", &sort__branch_mode, "",
 		    "use branch records for histogram filling", parse_branch_mode),
+	OPT_STRING(0, "objdump", &objdump_path, "path",
+		   "objdump binary to use for disassembly and annotations"),
 	OPT_END()
 	};
 
@@ -686,15 +689,19 @@ int cmd_report(int argc, const char **argv, const char *prefix __used)
 
 	if (strcmp(report.input_name, "-") != 0)
 		setup_browser(true);
-	else
+	else {
 		use_browser = 0;
+		perf_hpp__init(false, false);
+	}
+
+	setup_sorting(report_usage, options);
 
 	/*
 	 * Only in the newt browser we are doing integrated annotation,
 	 * so don't allocate extra space that won't be used in the stdio
 	 * implementation.
 	 */
-	if (use_browser > 0) {
+	if (use_browser == 1 && sort__has_sym) {
 		symbol_conf.priv_size = sizeof(struct annotation);
 		report.annotate_init = symbol__annotate_init;
 		/*
@@ -717,8 +724,6 @@ int cmd_report(int argc, const char **argv, const char *prefix __used)
 	if (symbol__init() < 0)
 		goto error;
 
-	setup_sorting(report_usage, options);
-
 	if (parent_pattern != default_parent_pattern) {
 		if (sort_dimension__add("parent") < 0)
 			goto error;
(File diff suppressed because it is too large.)
@@ -14,6 +14,7 @@
 #include "util/util.h"
 #include "util/evlist.h"
 #include "util/evsel.h"
+#include "util/sort.h"
 #include <linux/bitmap.h>
 
 static char const		*script_name;
@@ -28,11 +29,6 @@ static bool system_wide;
 static const char	*cpu_list;
 static DECLARE_BITMAP(cpu_bitmap, MAX_NR_CPUS);
 
-struct perf_script {
-	struct perf_tool	tool;
-	struct perf_session	*session;
-};
-
 enum perf_output_field {
 	PERF_OUTPUT_COMM            = 1U << 0,
 	PERF_OUTPUT_TID             = 1U << 1,
@@ -262,14 +258,11 @@ static int perf_session__check_output_opt(struct perf_session *session)
 	return 0;
 }
 
-static void print_sample_start(struct pevent *pevent,
-			       struct perf_sample *sample,
+static void print_sample_start(struct perf_sample *sample,
 			       struct thread *thread,
 			       struct perf_evsel *evsel)
 {
-	int type;
 	struct perf_event_attr *attr = &evsel->attr;
-	struct event_format *event;
 	const char *evname = NULL;
 	unsigned long secs;
 	unsigned long usecs;
@@ -307,20 +300,7 @@ static void print_sample_start(struct pevent *pevent,
 	}
 
 	if (PRINT_FIELD(EVNAME)) {
-		if (attr->type == PERF_TYPE_TRACEPOINT) {
-			/*
-			 * XXX Do we really need this here?
-			 * perf_evlist__set_tracepoint_names should have done
-			 * this already
-			 */
-			type = trace_parse_common_type(pevent,
-						       sample->raw_data);
-			event = pevent_find_event(pevent, type);
-			if (event)
-				evname = event->name;
-		} else
-			evname = perf_evsel__name(evsel);
-
+		evname = perf_evsel__name(evsel);
 		printf("%s: ", evname ? evname : "[unknown]");
 	}
 }
@@ -401,7 +381,7 @@ static void print_sample_bts(union perf_event *event,
 			printf(" ");
 		else
 			printf("\n");
-		perf_event__print_ip(event, sample, machine,
+		perf_evsel__print_ip(evsel, event, sample, machine,
 				     PRINT_FIELD(SYM), PRINT_FIELD(DSO),
 				     PRINT_FIELD(SYMOFFSET));
 	}
@@ -415,19 +395,17 @@ static void print_sample_bts(union perf_event *event,
 	printf("\n");
 }
 
-static void process_event(union perf_event *event __unused,
-			  struct pevent *pevent,
-			  struct perf_sample *sample,
-			  struct perf_evsel *evsel,
-			  struct machine *machine,
-			  struct thread *thread)
+static void process_event(union perf_event *event, struct perf_sample *sample,
+			  struct perf_evsel *evsel, struct machine *machine,
+			  struct addr_location *al)
 {
 	struct perf_event_attr *attr = &evsel->attr;
+	struct thread *thread = al->thread;
 
 	if (output[attr->type].fields == 0)
 		return;
 
-	print_sample_start(pevent, sample, thread, evsel);
+	print_sample_start(sample, thread, evsel);
 
 	if (is_bts_event(attr)) {
 		print_sample_bts(event, sample, evsel, machine, thread);
@@ -435,9 +413,8 @@ static void process_event(union perf_event *event __unused,
 	}
 
 	if (PRINT_FIELD(TRACE))
-		print_trace_event(pevent, sample->cpu, sample->raw_data,
-				  sample->raw_size);
-
+		event_format__print(evsel->tp_format, sample->cpu,
+				    sample->raw_data, sample->raw_size);
 	if (PRINT_FIELD(ADDR))
 		print_sample_addr(event, sample, machine, thread, attr);
 
@@ -446,7 +423,7 @@ static void process_event(union perf_event *event __unused,
 			printf(" ");
 		else
 			printf("\n");
-		perf_event__print_ip(event, sample, machine,
+		perf_evsel__print_ip(evsel, event, sample, machine,
 				     PRINT_FIELD(SYM), PRINT_FIELD(DSO),
 				     PRINT_FIELD(SYMOFFSET));
 	}
@@ -454,9 +431,9 @@ static void process_event(union perf_event *event __unused,
 	printf("\n");
 }
 
-static int default_start_script(const char *script __unused,
-				int argc __unused,
-				const char **argv __unused)
+static int default_start_script(const char *script __maybe_unused,
+				int argc __maybe_unused,
+				const char **argv __maybe_unused)
 {
 	return 0;
 }
@@ -466,8 +443,8 @@ static int default_stop_script(void)
 	return 0;
 }
 
-static int default_generate_script(struct pevent *pevent __unused,
-				   const char *outfile __unused)
+static int default_generate_script(struct pevent *pevent __maybe_unused,
+				   const char *outfile __maybe_unused)
 {
 	return 0;
 }
@@ -498,14 +475,13 @@ static int cleanup_scripting(void)
 
 static const char *input_name;
 
-static int process_sample_event(struct perf_tool *tool __used,
+static int process_sample_event(struct perf_tool *tool __maybe_unused,
 				union perf_event *event,
 				struct perf_sample *sample,
 				struct perf_evsel *evsel,
 				struct machine *machine)
 {
 	struct addr_location al;
-	struct perf_script *scr = container_of(tool, struct perf_script, tool);
 	struct thread *thread = machine__findnew_thread(machine, event->ip.tid);
 
 	if (thread == NULL) {
@@ -537,32 +513,29 @@ static int process_sample_event(struct perf_tool *tool __used,
 	if (cpu_list && !test_bit(sample->cpu, cpu_bitmap))
 		return 0;
 
-	scripting_ops->process_event(event, scr->session->pevent,
-				     sample, evsel, machine, thread);
+	scripting_ops->process_event(event, sample, evsel, machine, &al);
 
 	evsel->hists.stats.total_period += sample->period;
 	return 0;
 }
 
-static struct perf_script perf_script = {
-	.tool = {
-		.sample		 = process_sample_event,
-		.mmap		 = perf_event__process_mmap,
-		.comm		 = perf_event__process_comm,
-		.exit		 = perf_event__process_task,
-		.fork		 = perf_event__process_task,
-		.attr		 = perf_event__process_attr,
-		.event_type	 = perf_event__process_event_type,
-		.tracing_data	 = perf_event__process_tracing_data,
-		.build_id	 = perf_event__process_build_id,
-		.ordered_samples = true,
-		.ordering_requires_timestamps = true,
-	},
+static struct perf_tool perf_script = {
+	.sample		 = process_sample_event,
+	.mmap		 = perf_event__process_mmap,
+	.comm		 = perf_event__process_comm,
+	.exit		 = perf_event__process_task,
+	.fork		 = perf_event__process_task,
+	.attr		 = perf_event__process_attr,
+	.event_type	 = perf_event__process_event_type,
+	.tracing_data	 = perf_event__process_tracing_data,
+	.build_id	 = perf_event__process_build_id,
+	.ordered_samples = true,
+	.ordering_requires_timestamps = true,
 };
 
 extern volatile int session_done;
 
-static void sig_handler(int sig __unused)
+static void sig_handler(int sig __maybe_unused)
 {
 	session_done = 1;
 }
@@ -573,7 +546,7 @@ static int __cmd_script(struct perf_session *session)
 
 	signal(SIGINT, sig_handler);
 
-	ret = perf_session__process_events(session, &perf_script.tool);
+	ret = perf_session__process_events(session, &perf_script);
 
 	if (debug_mode)
 		pr_err("Misordered timestamps: %" PRIu64 "\n", nr_unordered);
@@ -672,8 +645,8 @@ static void list_available_languages(void)
 	fprintf(stderr, "\n");
 }
 
-static int parse_scriptname(const struct option *opt __used,
-			    const char *str, int unset __used)
+static int parse_scriptname(const struct option *opt __maybe_unused,
+			    const char *str, int unset __maybe_unused)
 {
 	char spec[PATH_MAX];
 	const char *script, *ext;
@@ -718,8 +691,8 @@ static int parse_scriptname(const struct option *opt __used,
 	return 0;
 }
 
-static int parse_output_fields(const struct option *opt __used,
-			       const char *arg, int unset __used)
+static int parse_output_fields(const struct option *opt __maybe_unused,
+			       const char *arg, int unset __maybe_unused)
 {
 	char *tok;
 	int i, imax = sizeof(all_output_options) / sizeof(struct output_option);
@@ -1010,8 +983,9 @@ static char *get_script_root(struct dirent *script_dirent, const char *suffix)
 	return script_root;
 }
 
-static int list_available_scripts(const struct option *opt __used,
-				  const char *s __used, int unset __used)
+static int list_available_scripts(const struct option *opt __maybe_unused,
+				  const char *s __maybe_unused,
+				  int unset __maybe_unused)
 {
 	struct dirent *script_next, *lang_next, script_dirent, lang_dirent;
 	char scripts_path[MAXPATHLEN];
@@ -1058,6 +1032,61 @@ static int list_available_scripts(const struct option *opt __used,
 	exit(0);
 }
 
+/*
+ * Return -1 if none is found, otherwise the actual scripts number.
+ *
+ * Currently the only user of this function is the script browser, which
+ * will list all statically runnable scripts, select one, execute it and
+ * show the output in a perf browser.
+ */
+int find_scripts(char **scripts_array, char **scripts_path_array)
+{
+	struct dirent *script_next, *lang_next, script_dirent, lang_dirent;
+	char scripts_path[MAXPATHLEN];
+	DIR *scripts_dir, *lang_dir;
+	char lang_path[MAXPATHLEN];
+	char *temp;
+	int i = 0;
+
+	snprintf(scripts_path, MAXPATHLEN, "%s/scripts", perf_exec_path());
+
+	scripts_dir = opendir(scripts_path);
+	if (!scripts_dir)
+		return -1;
+
+	for_each_lang(scripts_path, scripts_dir, lang_dirent, lang_next) {
+		snprintf(lang_path, MAXPATHLEN, "%s/%s", scripts_path,
+			 lang_dirent.d_name);
+#ifdef NO_LIBPERL
+		if (strstr(lang_path, "perl"))
+			continue;
+#endif
+#ifdef NO_LIBPYTHON
+		if (strstr(lang_path, "python"))
+			continue;
+#endif
+
+		lang_dir = opendir(lang_path);
+		if (!lang_dir)
+			continue;
+
+		for_each_script(lang_path, lang_dir, script_dirent, script_next) {
+			/* Skip those real time scripts: xxxtop.p[yl] */
+			if (strstr(script_dirent.d_name, "top."))
+				continue;
+			sprintf(scripts_path_array[i], "%s/%s", lang_path,
+				script_dirent.d_name);
+			temp = strchr(script_dirent.d_name, '.');
+			snprintf(scripts_array[i],
+				 (temp - script_dirent.d_name) + 1,
+				 "%s", script_dirent.d_name);
+			i++;
+		}
+	}
+
+	return i;
+}
+
 static char *get_script_path(const char *script_root, const char *suffix)
 {
 	struct dirent *script_next, *lang_next, script_dirent, lang_dirent;
@@ -1170,6 +1199,8 @@ static const struct option options[] = {
 		     parse_output_fields),
 	OPT_BOOLEAN('a', "all-cpus", &system_wide,
 		    "system-wide collection from all CPUs"),
+	OPT_STRING('S', "symbols", &symbol_conf.sym_list_str, "symbol[,symbol...]",
+		   "only consider these symbols"),
 	OPT_STRING('C', "cpu", &cpu_list, "cpu", "list of cpus to profile"),
 	OPT_STRING('c', "comms", &symbol_conf.comm_list_str, "comm[,comm...]",
 		    "only display events for these comms"),
@@ -1181,21 +1212,26 @@ static const struct option options[] = {
 	OPT_END()
 };
 
-static bool have_cmd(int argc, const char **argv)
+static int have_cmd(int argc, const char **argv)
 {
 	char **__argv = malloc(sizeof(const char *) * argc);
 
-	if (!__argv)
-		die("malloc");
+	if (!__argv) {
+		pr_err("malloc failed\n");
+		return -1;
+	}
 
 	memcpy(__argv, argv, sizeof(const char *) * argc);
 	argc = parse_options(argc, (const char **)__argv, record_options,
 			     NULL, PARSE_OPT_STOP_AT_NON_OPTION);
 	free(__argv);
 
-	return argc != 0;
+	system_wide = (argc == 0);
+
+	return 0;
 }
 
-int cmd_script(int argc, const char **argv, const char *prefix __used)
+int cmd_script(int argc, const char **argv, const char *prefix __maybe_unused)
 {
 	char *rec_script_path = NULL;
 	char *rep_script_path = NULL;
@@ -1259,13 +1295,13 @@ int cmd_script(int argc, const char **argv, const char *prefix __used)
 
 		if (pipe(live_pipe) < 0) {
 			perror("failed to create pipe");
-			exit(-1);
+			return -1;
 		}
 
 		pid = fork();
 		if (pid < 0) {
 			perror("failed to fork");
-			exit(-1);
+			return -1;
 		}
 
 		if (!pid) {
@@ -1277,13 +1313,18 @@ int cmd_script(int argc, const char **argv, const char *prefix __used)
 		if (is_top_script(argv[0])) {
 			system_wide = true;
 		} else if (!system_wide) {
-			system_wide = !have_cmd(argc - rep_args,
-						&argv[rep_args]);
+			if (have_cmd(argc - rep_args, &argv[rep_args]) != 0) {
+				err = -1;
+				goto out;
+			}
 		}
 
 		__argv = malloc((argc + 6) * sizeof(const char *));
-		if (!__argv)
-			die("malloc");
+		if (!__argv) {
+			pr_err("malloc failed\n");
+			err = -ENOMEM;
+			goto out;
+		}
 
 		__argv[j++] = "/bin/sh";
 		__argv[j++] = rec_script_path;
@@ -1305,8 +1346,12 @@ int cmd_script(int argc, const char **argv, const char *prefix __used)
 		close(live_pipe[1]);
 
 		__argv = malloc((argc + 4) * sizeof(const char *));
-		if (!__argv)
-			die("malloc");
+		if (!__argv) {
+			pr_err("malloc failed\n");
+			err = -ENOMEM;
+			goto out;
+		}
 
 		j = 0;
 		__argv[j++] = "/bin/sh";
 		__argv[j++] = rep_script_path;
@@ -1331,12 +1376,20 @@ int cmd_script(int argc, const char **argv, const char *prefix __used)
 
 		if (!rec_script_path)
 			system_wide = false;
-		else if (!system_wide)
-			system_wide = !have_cmd(argc - 1, &argv[1]);
+		else if (!system_wide) {
+			if (have_cmd(argc - 1, &argv[1]) != 0) {
+				err = -1;
+				goto out;
+			}
+		}
 
 		__argv = malloc((argc + 2) * sizeof(const char *));
-		if (!__argv)
-			die("malloc");
+		if (!__argv) {
+			pr_err("malloc failed\n");
+			err = -ENOMEM;
+			goto out;
+		}
 
 		__argv[j++] = "/bin/sh";
 		__argv[j++] = script_path;
 		if (system_wide)
@@ -1356,12 +1409,10 @@ int cmd_script(int argc, const char **argv, const char *prefix __used)
 	setup_pager();
 
 	session = perf_session__new(input_name, O_RDONLY, 0, false,
-				    &perf_script.tool);
+				    &perf_script);
 	if (session == NULL)
 		return -ENOMEM;
 
-	perf_script.session = session;
-
 	if (cpu_list) {
 		if (perf_session__cpu_bitmap(session, cpu_list, cpu_bitmap))
 			return -1;
@@ -1387,18 +1438,18 @@ int cmd_script(int argc, const char **argv, const char *prefix __used)
 		input = open(session->filename, O_RDONLY);	/* input_name */
 		if (input < 0) {
 			perror("failed to open file");
-			exit(-1);
+			return -1;
 		}
 
 		err = fstat(input, &perf_stat);
 		if (err < 0) {
 			perror("failed to stat file");
-			exit(-1);
+			return -1;
 		}
 
 		if (!perf_stat.st_size) {
 			fprintf(stderr, "zero-sized file, nothing to do!\n");
-			exit(0);
+			return 0;
 		}
 
 		scripting_ops = script_spec__lookup(generate_script_lang);
(Some files were not shown because too many files have changed in this diff.)