ftrace: Rewrite of function graph tracer


Merge tag 'ftrace-v6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull ftrace updates from Steven Rostedt:
 "Rewrite of function graph tracer to allow multiple users

  Up until now, the function graph tracer could only have a single user
  attached to it. If another user tried to attach while one was already
  attached, it would fail. Allowing the function graph tracer to have
  more than one user has been asked for since 2009, but it required a
  rewrite of the logic to pull it off, so it never happened. Until now!

  There are three systems that trace the return of a function:
  kretprobes, the function graph tracer, and BPF. kretprobes and
  function graph tracing do it similarly. The difference is that
  kretprobes uses a shadow stack per callback, while the function graph
  tracer creates a shadow stack for all tasks. The function graph
  tracer's method makes it possible to trace the return of all
  functions. As kretprobes now needs that feature too, it had to be
  allowed to use the function graph tracer. BPF also wants to trace the
  return of many probes, and its method doesn't scale either; having it
  use the function graph tracer would improve that.

  Allowing the function graph tracer to have multiple users lets both
  kretprobes and BPF use it in these cases. This will allow the
  kretprobes code to be removed in the future, as its version will no
  longer be needed.

  Note, the function graph tracer is limited to 16 simultaneous users,
  due to shadow stack size and allocated slots"
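The 16-user cap and the shared per-task dispatch can be illustrated with a small userspace model (a sketch only; `fgraph_register`, `fgraph_return`, and the array layout here are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

#define FGRAPH_ARRAY_SIZE 16	/* the kernel caps simultaneous users at 16 */

typedef void (*retfunc_t)(unsigned long retval);

/* One slot per potential user; NULL means the slot is free. */
static retfunc_t fgraph_array[FGRAPH_ARRAY_SIZE];
static int ncalls;		/* how many times any callback fired */

static void count_cb(unsigned long retval)
{
	(void)retval;
	ncalls++;
}

/* Claim a free slot; return its index, or -1 if all slots are taken. */
static int fgraph_register(retfunc_t fn)
{
	for (int i = 0; i < FGRAPH_ARRAY_SIZE; i++) {
		if (!fgraph_array[i]) {
			fgraph_array[i] = fn;
			return i;
		}
	}
	return -1;
}

static void fgraph_unregister(int idx)
{
	if (idx >= 0 && idx < FGRAPH_ARRAY_SIZE)
		fgraph_array[idx] = NULL;
}

/* On a traced function's return, every registered user is called. */
static int fgraph_return(unsigned long retval)
{
	int called = 0;

	for (int i = 0; i < FGRAPH_ARRAY_SIZE; i++) {
		if (fgraph_array[i]) {
			fgraph_array[i](retval);
			called++;
		}
	}
	return called;
}
```

With the old single-user code, the second registration in this model would have had to fail; after the rewrite, every attached user's callback fires on each traced return.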

* tag 'ftrace-v6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (49 commits)
  fgraph: Use str_plural() in test_graph_storage_single()
  function_graph: Add READ_ONCE() when accessing fgraph_array[]
  ftrace: Add missing kerneldoc parameters to unregister_ftrace_direct()
  function_graph: Everyone uses HAVE_FUNCTION_GRAPH_RET_ADDR_PTR, remove it
  function_graph: Fix up ftrace_graph_ret_addr()
  function_graph: Make fgraph_update_pid_func() a stub for !DYNAMIC_FTRACE
  function_graph: Rename BYTE_NUMBER to CHAR_NUMBER in selftests
  fgraph: Remove some unused functions
  ftrace: Hide one more entry in stack trace when ftrace_pid is enabled
  function_graph: Do not update pid func if CONFIG_DYNAMIC_FTRACE not enabled
  function_graph: Make fgraph_do_direct static key static
  ftrace: Fix prototypes for ftrace_startup/shutdown_subops()
  ftrace: Assign RCU list variable with rcu_assign_ptr()
  ftrace: Assign ftrace_list_end to ftrace_ops_list type cast to RCU
  ftrace: Declare function_trace_op in header to quiet sparse warning
  ftrace: Add comments to ftrace_hash_move() and friends
  ftrace: Convert "inc" parameter to bool in ftrace_hash_rec_update_modify()
  ftrace: Add comments to ftrace_hash_rec_disable/enable()
  ftrace: Remove "filter_hash" parameter from __ftrace_hash_rec_update()
  ftrace: Rename dup_hash() and comment it
  ...
Linus Torvalds 2024-07-18 13:36:33 -07:00
commit 70045bfc4c
22 changed files with 2058 additions and 444 deletions


@ -217,18 +217,6 @@ along to ftrace_push_return_trace() instead of a stub value of 0.
Similarly, when you call ftrace_return_to_handler(), pass it the frame pointer.
HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
--------------------------------
An arch may pass in a pointer to the return address on the stack. This
prevents potential stack unwinding issues where the unwinder gets out of
sync with ret_stack and the wrong addresses are reported by
ftrace_graph_ret_addr().
Adding support for it is easy: just define the macro in asm/ftrace.h and
pass the return address pointer as the 'retp' argument to
ftrace_push_return_trace().
HAVE_SYSCALL_TRACEPOINTS
------------------------


@ -12,17 +12,6 @@
#define HAVE_FUNCTION_GRAPH_FP_TEST
/*
* HAVE_FUNCTION_GRAPH_RET_ADDR_PTR means that the architecture can provide a
* "return address pointer" which can be used to uniquely identify a return
* address which has been overwritten.
*
* On arm64 we use the address of the caller's frame record, which remains the
* same for the lifetime of the instrumented function, unlike the return
* address in the LR.
*/
#define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
#define ARCH_SUPPORTS_FTRACE_OPS 1
#else


@ -7,8 +7,6 @@
#define HAVE_FUNCTION_GRAPH_FP_TEST
#define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
#define ARCH_SUPPORTS_FTRACE_OPS 1
#define MCOUNT_ADDR ((unsigned long)_mcount)


@ -28,7 +28,6 @@ struct dyn_ftrace;
struct dyn_arch_ftrace { };
#define ARCH_SUPPORTS_FTRACE_OPS 1
#define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
#define ftrace_init_nop ftrace_init_nop
int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);


@ -8,8 +8,6 @@
#define MCOUNT_ADDR ((unsigned long)(_mcount))
#define MCOUNT_INSN_SIZE 4 /* sizeof mcount call */
#define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
/* Ignore unused weak functions which will have larger offsets */
#if defined(CONFIG_MPROFILE_KERNEL) || defined(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY)
#define FTRACE_MCOUNT_MAX_OFFSET 16


@ -11,7 +11,6 @@
#if defined(CONFIG_FUNCTION_GRAPH_TRACER) && defined(CONFIG_FRAME_POINTER)
#define HAVE_FUNCTION_GRAPH_FP_TEST
#endif
#define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
#define ARCH_SUPPORTS_FTRACE_OPS 1
#ifndef __ASSEMBLY__


@ -2,7 +2,6 @@
#ifndef _ASM_S390_FTRACE_H
#define _ASM_S390_FTRACE_H
#define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
#define ARCH_SUPPORTS_FTRACE_OPS 1
#define MCOUNT_INSN_SIZE 6


@ -20,8 +20,6 @@
#define ARCH_SUPPORTS_FTRACE_OPS 1
#endif
#define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
#ifndef __ASSEMBLY__
extern void __fentry__(void);


@ -227,6 +227,7 @@ ftrace_func_t ftrace_ops_get_func(struct ftrace_ops *ops);
* ftrace_enabled.
* DIRECT - Used by the direct ftrace_ops helper for direct functions
* (internal ftrace only, should not be used by others)
* SUBOP - Is controlled by another op in field managed.
*/
enum {
FTRACE_OPS_FL_ENABLED = BIT(0),
@ -247,6 +248,7 @@ enum {
FTRACE_OPS_FL_TRACE_ARRAY = BIT(15),
FTRACE_OPS_FL_PERMANENT = BIT(16),
FTRACE_OPS_FL_DIRECT = BIT(17),
FTRACE_OPS_FL_SUBOP = BIT(18),
};
#ifndef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
@ -334,7 +336,9 @@ struct ftrace_ops {
unsigned long trampoline;
unsigned long trampoline_size;
struct list_head list;
struct list_head subop_list;
ftrace_ops_func_t ops_func;
struct ftrace_ops *managed;
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
unsigned long direct_call;
#endif
@ -509,6 +513,15 @@ static inline void stack_tracer_disable(void) { }
static inline void stack_tracer_enable(void) { }
#endif
enum {
FTRACE_UPDATE_CALLS = (1 << 0),
FTRACE_DISABLE_CALLS = (1 << 1),
FTRACE_UPDATE_TRACE_FUNC = (1 << 2),
FTRACE_START_FUNC_RET = (1 << 3),
FTRACE_STOP_FUNC_RET = (1 << 4),
FTRACE_MAY_SLEEP = (1 << 5),
};
#ifdef CONFIG_DYNAMIC_FTRACE
void ftrace_arch_code_modify_prepare(void);
@ -603,15 +616,6 @@ void ftrace_set_global_notrace(unsigned char *buf, int len, int reset);
void ftrace_free_filter(struct ftrace_ops *ops);
void ftrace_ops_set_global_filter(struct ftrace_ops *ops);
enum {
FTRACE_UPDATE_CALLS = (1 << 0),
FTRACE_DISABLE_CALLS = (1 << 1),
FTRACE_UPDATE_TRACE_FUNC = (1 << 2),
FTRACE_START_FUNC_RET = (1 << 3),
FTRACE_STOP_FUNC_RET = (1 << 4),
FTRACE_MAY_SLEEP = (1 << 5),
};
/*
* The FTRACE_UPDATE_* enum is used to pass information back
* from the ftrace_update_record() and ftrace_test_record()
@ -1027,19 +1031,31 @@ struct ftrace_graph_ret {
unsigned long long rettime;
} __packed;
/* Type of the callback handlers for tracing function graph*/
typedef void (*trace_func_graph_ret_t)(struct ftrace_graph_ret *); /* return */
typedef int (*trace_func_graph_ent_t)(struct ftrace_graph_ent *); /* entry */
struct fgraph_ops;
extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace);
/* Type of the callback handlers for tracing function graph*/
typedef void (*trace_func_graph_ret_t)(struct ftrace_graph_ret *,
struct fgraph_ops *); /* return */
typedef int (*trace_func_graph_ent_t)(struct ftrace_graph_ent *,
struct fgraph_ops *); /* entry */
extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, struct fgraph_ops *gops);
bool ftrace_pids_enabled(struct ftrace_ops *ops);
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
struct fgraph_ops {
trace_func_graph_ent_t entryfunc;
trace_func_graph_ret_t retfunc;
struct ftrace_ops ops; /* for the hash lists */
void *private;
trace_func_graph_ent_t saved_func;
int idx;
};
void *fgraph_reserve_data(int idx, int size_bytes);
void *fgraph_retrieve_data(int idx, int *size_bytes);
/*
* Stack of return addresses for functions
* of a thread.
@ -1055,9 +1071,7 @@ struct ftrace_ret_stack {
#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
unsigned long fp;
#endif
#ifdef HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
unsigned long *retp;
#endif
};
/*
@ -1072,10 +1086,11 @@ function_graph_enter(unsigned long ret, unsigned long func,
unsigned long frame_pointer, unsigned long *retp);
struct ftrace_ret_stack *
ftrace_graph_get_ret_stack(struct task_struct *task, int idx);
ftrace_graph_get_ret_stack(struct task_struct *task, int skip);
unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx,
unsigned long ret, unsigned long *retp);
unsigned long *fgraph_get_task_var(struct fgraph_ops *gops);
/*
* Sometimes we don't want to trace a function with the function
@ -1114,6 +1129,9 @@ extern void ftrace_graph_init_task(struct task_struct *t);
extern void ftrace_graph_exit_task(struct task_struct *t);
extern void ftrace_graph_init_idle_task(struct task_struct *t, int cpu);
/* Used by assembly, but to quiet sparse warnings */
extern struct ftrace_ops *function_trace_op;
static inline void pause_graph_tracing(void)
{
atomic_inc(&current->tracing_graph_pause);


@ -1413,7 +1413,7 @@ struct task_struct {
int curr_ret_depth;
/* Stack of return addresses for return function tracing: */
struct ftrace_ret_stack *ret_stack;
unsigned long *ret_stack;
/* Timestamp for last schedule: */
unsigned long long ftrace_timestamp;


@ -44,35 +44,6 @@ enum {
*/
TRACE_IRQ_BIT,
/* Set if the function is in the set_graph_function file */
TRACE_GRAPH_BIT,
/*
* In the very unlikely case that an interrupt came in
* at a start of graph tracing, and we want to trace
* the function in that interrupt, the depth can be greater
* than zero, because of the preempted start of a previous
* trace. In an even more unlikely case, depth could be 2
* if a softirq interrupted the start of graph tracing,
* followed by an interrupt preempting a start of graph
* tracing in the softirq, and depth can even be 3
* if an NMI came in at the start of an interrupt function
* that preempted a softirq start of a function that
* preempted normal context!!!! Luckily, it can't be
* greater than 3, so the next two bits are a mask
* of what the depth is when we set TRACE_GRAPH_BIT
*/
TRACE_GRAPH_DEPTH_START_BIT,
TRACE_GRAPH_DEPTH_END_BIT,
/*
* To implement set_graph_notrace, if this bit is set, we ignore
* function graph tracing of called functions, until the return
* function is called to clear it.
*/
TRACE_GRAPH_NOTRACE_BIT,
/* Used to prevent recursion recording from recursing. */
TRACE_RECORD_RECURSION_BIT,
};
@ -81,16 +52,6 @@ enum {
#define trace_recursion_clear(bit) do { (current)->trace_recursion &= ~(1<<(bit)); } while (0)
#define trace_recursion_test(bit) ((current)->trace_recursion & (1<<(bit)))
#define trace_recursion_depth() \
(((current)->trace_recursion >> TRACE_GRAPH_DEPTH_START_BIT) & 3)
#define trace_recursion_set_depth(depth) \
do { \
current->trace_recursion &= \
~(3 << TRACE_GRAPH_DEPTH_START_BIT); \
current->trace_recursion |= \
((depth) & 3) << TRACE_GRAPH_DEPTH_START_BIT; \
} while (0)
#define TRACE_CONTEXT_BITS 4
#define TRACE_FTRACE_START TRACE_FTRACE_BIT

File diff suppressed because it is too large


@ -74,7 +74,8 @@
#ifdef CONFIG_DYNAMIC_FTRACE
#define INIT_OPS_HASH(opsname) \
.func_hash = &opsname.local_hash, \
.local_hash.regex_lock = __MUTEX_INITIALIZER(opsname.local_hash.regex_lock),
.local_hash.regex_lock = __MUTEX_INITIALIZER(opsname.local_hash.regex_lock), \
.subop_list = LIST_HEAD_INIT(opsname.subop_list),
#else
#define INIT_OPS_HASH(opsname)
#endif
@ -99,7 +100,7 @@ struct ftrace_ops *function_trace_op __read_mostly = &ftrace_list_end;
/* What to set function_trace_op to */
static struct ftrace_ops *set_function_trace_op;
static bool ftrace_pids_enabled(struct ftrace_ops *ops)
bool ftrace_pids_enabled(struct ftrace_ops *ops)
{
struct trace_array *tr;
@ -121,7 +122,7 @@ static int ftrace_disabled __read_mostly;
DEFINE_MUTEX(ftrace_lock);
struct ftrace_ops __rcu *ftrace_ops_list __read_mostly = &ftrace_list_end;
struct ftrace_ops __rcu *ftrace_ops_list __read_mostly = (struct ftrace_ops __rcu *)&ftrace_list_end;
ftrace_func_t ftrace_trace_function __read_mostly = ftrace_stub;
struct ftrace_ops global_ops;
@ -161,12 +162,14 @@ static inline void ftrace_ops_init(struct ftrace_ops *ops)
#ifdef CONFIG_DYNAMIC_FTRACE
if (!(ops->flags & FTRACE_OPS_FL_INITIALIZED)) {
mutex_init(&ops->local_hash.regex_lock);
INIT_LIST_HEAD(&ops->subop_list);
ops->func_hash = &ops->local_hash;
ops->flags |= FTRACE_OPS_FL_INITIALIZED;
}
#endif
}
/* Call this function for when a callback filters on set_ftrace_pid */
static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
@ -235,8 +238,6 @@ static void update_ftrace_function(void)
func = ftrace_ops_list_func;
}
update_function_graph_func();
/* If there's no change, then do nothing more here */
if (ftrace_trace_function == func)
return;
@ -310,7 +311,7 @@ static int remove_ftrace_ops(struct ftrace_ops __rcu **list,
lockdep_is_held(&ftrace_lock)) == ops &&
rcu_dereference_protected(ops->next,
lockdep_is_held(&ftrace_lock)) == &ftrace_list_end) {
*list = &ftrace_list_end;
rcu_assign_pointer(*list, &ftrace_list_end);
return 0;
}
@ -406,6 +407,8 @@ static void ftrace_update_pid_func(void)
}
} while_for_each_ftrace_op(op);
fgraph_update_pid_func();
update_ftrace_function();
}
@ -817,7 +820,8 @@ void ftrace_graph_graph_time_control(bool enable)
fgraph_graph_time = enable;
}
static int profile_graph_entry(struct ftrace_graph_ent *trace)
static int profile_graph_entry(struct ftrace_graph_ent *trace,
struct fgraph_ops *gops)
{
struct ftrace_ret_stack *ret_stack;
@ -834,7 +838,8 @@ static int profile_graph_entry(struct ftrace_graph_ent *trace)
return 1;
}
static void profile_graph_return(struct ftrace_graph_ret *trace)
static void profile_graph_return(struct ftrace_graph_ret *trace,
struct fgraph_ops *gops)
{
struct ftrace_ret_stack *ret_stack;
struct ftrace_profile_stat *stat;
@ -1314,7 +1319,7 @@ static struct ftrace_hash *alloc_ftrace_hash(int size_bits)
return hash;
}
/* Used to save filters on functions for modules not loaded yet */
static int ftrace_add_mod(struct trace_array *tr,
const char *func, const char *module,
int enable)
@ -1380,15 +1385,17 @@ alloc_and_copy_ftrace_hash(int size_bits, struct ftrace_hash *hash)
return NULL;
}
static void
ftrace_hash_rec_disable_modify(struct ftrace_ops *ops, int filter_hash);
static void
ftrace_hash_rec_enable_modify(struct ftrace_ops *ops, int filter_hash);
static void ftrace_hash_rec_disable_modify(struct ftrace_ops *ops);
static void ftrace_hash_rec_enable_modify(struct ftrace_ops *ops);
static int ftrace_hash_ipmodify_update(struct ftrace_ops *ops,
struct ftrace_hash *new_hash);
static struct ftrace_hash *dup_hash(struct ftrace_hash *src, int size)
/*
* Allocate a new hash and remove entries from @src and move them to the new hash.
* On success, the @src hash will be empty and should be freed.
*/
static struct ftrace_hash *__move_hash(struct ftrace_hash *src, int size)
{
struct ftrace_func_entry *entry;
struct ftrace_hash *new_hash;
@ -1424,6 +1431,7 @@ static struct ftrace_hash *dup_hash(struct ftrace_hash *src, int size)
return new_hash;
}
/* Move the @src entries to a newly allocated hash */
static struct ftrace_hash *
__ftrace_hash_move(struct ftrace_hash *src)
{
@ -1435,9 +1443,29 @@ __ftrace_hash_move(struct ftrace_hash *src)
if (ftrace_hash_empty(src))
return EMPTY_HASH;
return dup_hash(src, size);
return __move_hash(src, size);
}
/**
* ftrace_hash_move - move a new hash to a filter and do updates
* @ops: The ops with the hash that @dst points to
* @enable: True if for the filter hash, false for the notrace hash
* @dst: Points to the @ops hash that should be updated
* @src: The hash to update @dst with
*
* This is called when an ftrace_ops hash is being updated and the
* kernel needs to reflect this. Note, this only updates the kernel
* function callbacks if the @ops is enabled (not to be confused with
* @enable above). If the @ops is enabled, its hash determines what
* callbacks get called. This function gets called when the @ops hash
* is updated and it requires new callbacks.
*
* On success the elements of @src are moved to @dst, and @dst is updated
* properly, as well as the functions determined by the @ops hashes
* are now calling the @ops callback function.
*
* Regardless of the return value, @src should be freed with free_ftrace_hash().
*/
static int
ftrace_hash_move(struct ftrace_ops *ops, int enable,
struct ftrace_hash **dst, struct ftrace_hash *src)
@ -1467,11 +1495,11 @@ ftrace_hash_move(struct ftrace_ops *ops, int enable,
* Remove the current set, update the hash and add
* them back.
*/
ftrace_hash_rec_disable_modify(ops, enable);
ftrace_hash_rec_disable_modify(ops);
rcu_assign_pointer(*dst, new_hash);
ftrace_hash_rec_enable_modify(ops, enable);
ftrace_hash_rec_enable_modify(ops);
return 0;
}
@ -1694,12 +1722,21 @@ static bool skip_record(struct dyn_ftrace *rec)
!(rec->flags & FTRACE_FL_ENABLED);
}
/*
* This is the main engine for the ftrace updates to the dyn_ftrace records.
*
* It will iterate through all the available ftrace functions
* (the ones that ftrace can have callbacks to) and set the flags
* in the associated dyn_ftrace records.
*
* @inc: If true, the functions associated with @ops are added to
* the dyn_ftrace records, otherwise they are removed.
*/
static bool __ftrace_hash_rec_update(struct ftrace_ops *ops,
int filter_hash,
bool inc)
{
struct ftrace_hash *hash;
struct ftrace_hash *other_hash;
struct ftrace_hash *notrace_hash;
struct ftrace_page *pg;
struct dyn_ftrace *rec;
bool update = false;
@ -1711,35 +1748,16 @@ static bool __ftrace_hash_rec_update(struct ftrace_ops *ops,
return false;
/*
* In the filter_hash case:
* If the count is zero, we update all records.
* Otherwise we just update the items in the hash.
*
* In the notrace_hash case:
* We enable the update in the hash.
* As disabling notrace means enabling the tracing,
* and enabling notrace means disabling, the inc variable
* gets inversed.
*/
if (filter_hash) {
hash = ops->func_hash->filter_hash;
other_hash = ops->func_hash->notrace_hash;
if (ftrace_hash_empty(hash))
all = true;
} else {
inc = !inc;
hash = ops->func_hash->notrace_hash;
other_hash = ops->func_hash->filter_hash;
/*
* If the notrace hash has no items,
* then there's nothing to do.
*/
if (ftrace_hash_empty(hash))
return false;
}
hash = ops->func_hash->filter_hash;
notrace_hash = ops->func_hash->notrace_hash;
if (ftrace_hash_empty(hash))
all = true;
do_for_each_ftrace_rec(pg, rec) {
int in_other_hash = 0;
int in_notrace_hash = 0;
int in_hash = 0;
int match = 0;
@ -1751,26 +1769,17 @@ static bool __ftrace_hash_rec_update(struct ftrace_ops *ops,
* Only the filter_hash affects all records.
* Update if the record is not in the notrace hash.
*/
if (!other_hash || !ftrace_lookup_ip(other_hash, rec->ip))
if (!notrace_hash || !ftrace_lookup_ip(notrace_hash, rec->ip))
match = 1;
} else {
in_hash = !!ftrace_lookup_ip(hash, rec->ip);
in_other_hash = !!ftrace_lookup_ip(other_hash, rec->ip);
in_notrace_hash = !!ftrace_lookup_ip(notrace_hash, rec->ip);
/*
* If filter_hash is set, we want to match all functions
* that are in the hash but not in the other hash.
*
* If filter_hash is not set, then we are decrementing.
* That means we match anything that is in the hash
* and also in the other_hash. That is, we need to turn
* off functions in the other hash because they are disabled
* by this hash.
* We want to match all functions that are in the hash but
* not in the other hash.
*/
if (filter_hash && in_hash && !in_other_hash)
match = 1;
else if (!filter_hash && in_hash &&
(in_other_hash || ftrace_hash_empty(other_hash)))
if (in_hash && !in_notrace_hash)
match = 1;
}
if (!match)
@ -1876,24 +1885,48 @@ static bool __ftrace_hash_rec_update(struct ftrace_ops *ops,
return update;
}
static bool ftrace_hash_rec_disable(struct ftrace_ops *ops,
int filter_hash)
/*
* This is called when an ops is removed from tracing. It will decrement
* the counters of the dyn_ftrace records for all the functions that
* the @ops is attached to.
*/
static bool ftrace_hash_rec_disable(struct ftrace_ops *ops)
{
return __ftrace_hash_rec_update(ops, filter_hash, 0);
return __ftrace_hash_rec_update(ops, false);
}
static bool ftrace_hash_rec_enable(struct ftrace_ops *ops,
int filter_hash)
/*
* This is called when an ops is added to tracing. It will increment
* the counters of the dyn_ftrace records for all the functions that
* the @ops is attached to.
*/
static bool ftrace_hash_rec_enable(struct ftrace_ops *ops)
{
return __ftrace_hash_rec_update(ops, filter_hash, 1);
return __ftrace_hash_rec_update(ops, true);
}
static void ftrace_hash_rec_update_modify(struct ftrace_ops *ops,
int filter_hash, int inc)
/*
* This function will update what functions @ops traces when its filter
* changes.
*
* The @inc states if the @ops callbacks are going to be added or removed.
* When one of the @ops hashes is updated to a "new_hash", the dyn_ftrace
* records are updated via:
*
* ftrace_hash_rec_disable_modify(ops);
* ops->hash = new_hash
* ftrace_hash_rec_enable_modify(ops);
*
* Where the @ops is removed from all the records it is tracing using
* its old hash. The @ops hash is updated to the new hash, and then
* the @ops is added back to the records so that it is tracing all
* the new functions.
*/
static void ftrace_hash_rec_update_modify(struct ftrace_ops *ops, bool inc)
{
struct ftrace_ops *op;
__ftrace_hash_rec_update(ops, filter_hash, inc);
__ftrace_hash_rec_update(ops, inc);
if (ops->func_hash != &global_ops.local_hash)
return;
@ -1907,20 +1940,18 @@ static void ftrace_hash_rec_update_modify(struct ftrace_ops *ops,
if (op == ops)
continue;
if (op->func_hash == &global_ops.local_hash)
__ftrace_hash_rec_update(op, filter_hash, inc);
__ftrace_hash_rec_update(op, inc);
} while_for_each_ftrace_op(op);
}
static void ftrace_hash_rec_disable_modify(struct ftrace_ops *ops,
int filter_hash)
static void ftrace_hash_rec_disable_modify(struct ftrace_ops *ops)
{
ftrace_hash_rec_update_modify(ops, filter_hash, 0);
ftrace_hash_rec_update_modify(ops, false);
}
static void ftrace_hash_rec_enable_modify(struct ftrace_ops *ops,
int filter_hash)
static void ftrace_hash_rec_enable_modify(struct ftrace_ops *ops)
{
ftrace_hash_rec_update_modify(ops, filter_hash, 1);
ftrace_hash_rec_update_modify(ops, true);
}
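The disable/update/enable sequence described in the comment above can be modeled with simple reference counts (a userspace sketch with made-up names; "hashes" are reduced to bitmasks of traced functions):

```c
#include <assert.h>

#define NFUNCS 4

/* Per-function count of ops currently tracing it (models dyn_ftrace). */
static int rec_count[NFUNCS];

/* Add (@inc true) or remove (@inc false) an ops' functions. */
static void hash_rec_update(unsigned int hash, int inc)
{
	for (int i = 0; i < NFUNCS; i++)
		if (hash & (1u << i))
			rec_count[i] += inc ? 1 : -1;
}

/* The filter-change pattern: drop the old hash, swap, add the new one. */
static unsigned int update_ops_hash(unsigned int old_hash, unsigned int new_hash)
{
	hash_rec_update(old_hash, 0);	/* ftrace_hash_rec_disable_modify() */
	/* ops->hash = new_hash */
	hash_rec_update(new_hash, 1);	/* ftrace_hash_rec_enable_modify() */
	return new_hash;
}
```

The point of the ordering is that a record's count only stays nonzero while some ops still traces it, so dropping the old set before adding the new one keeps the counts consistent.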
/*
@ -3043,7 +3074,7 @@ int ftrace_startup(struct ftrace_ops *ops, int command)
return ret;
}
if (ftrace_hash_rec_enable(ops, 1))
if (ftrace_hash_rec_enable(ops))
command |= FTRACE_UPDATE_CALLS;
ftrace_startup_enable(command);
@ -3085,7 +3116,7 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
/* Disabling ipmodify never fails */
ftrace_hash_ipmodify_disable(ops);
if (ftrace_hash_rec_disable(ops, 1))
if (ftrace_hash_rec_disable(ops))
command |= FTRACE_UPDATE_CALLS;
ops->flags &= ~FTRACE_OPS_FL_ENABLED;
@ -3164,6 +3195,474 @@ out:
return 0;
}
/* Simply make a copy of @src and return it */
static struct ftrace_hash *copy_hash(struct ftrace_hash *src)
{
if (ftrace_hash_empty(src))
return EMPTY_HASH;
return alloc_and_copy_ftrace_hash(src->size_bits, src);
}
/*
* Append @new_hash entries to @hash:
*
* If @hash is the EMPTY_HASH then it traces all functions and nothing
* needs to be done.
*
* If @new_hash is the EMPTY_HASH, then make *hash the EMPTY_HASH so
* that it traces everything.
*
* Otherwise, go through all of @new_hash and add anything that @hash
* doesn't already have, to @hash.
*
* The filter_hash update uses just the append_hash() function
* and the notrace_hash does not.
*/
static int append_hash(struct ftrace_hash **hash, struct ftrace_hash *new_hash)
{
struct ftrace_func_entry *entry;
int size;
int i;
/* An empty hash does everything */
if (ftrace_hash_empty(*hash))
return 0;
/* If new_hash has everything make hash have everything */
if (ftrace_hash_empty(new_hash)) {
free_ftrace_hash(*hash);
*hash = EMPTY_HASH;
return 0;
}
size = 1 << new_hash->size_bits;
for (i = 0; i < size; i++) {
hlist_for_each_entry(entry, &new_hash->buckets[i], hlist) {
/* Only add if not already in hash */
if (!__ftrace_lookup_ip(*hash, entry->ip) &&
add_hash_entry(*hash, entry->ip) == NULL)
return -ENOMEM;
}
}
return 0;
}
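The special-casing of EMPTY_HASH here can be made concrete with a bitmask model (an illustrative userspace sketch, not kernel code; a "hash" is a mask of functions and `0` stands in for EMPTY_HASH, which for a filter hash means "trace everything"):

```c
#include <assert.h>

#define EMPTY 0u	/* stands in for EMPTY_HASH: filter matches everything */

/* Union of two filter hashes, with EMPTY absorbing everything. */
static unsigned int append_hash_mask(unsigned int hash, unsigned int new_hash)
{
	if (hash == EMPTY)	/* already traces everything */
		return EMPTY;
	if (new_hash == EMPTY)	/* new user traces everything */
		return EMPTY;
	return hash | new_hash;	/* otherwise, union of the traced sets */
}
```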
/*
* Add to @hash only those that are in both @new_hash1 and @new_hash2
*
* The notrace_hash update uses just the intersect_hash() function
* and the filter_hash does not.
*/
static int intersect_hash(struct ftrace_hash **hash, struct ftrace_hash *new_hash1,
struct ftrace_hash *new_hash2)
{
struct ftrace_func_entry *entry;
int size;
int i;
/*
* If new_hash1 or new_hash2 is the EMPTY_HASH then make the hash
* empty as well, as an empty notrace hash means none are notraced.
*/
if (ftrace_hash_empty(new_hash1) || ftrace_hash_empty(new_hash2)) {
free_ftrace_hash(*hash);
*hash = EMPTY_HASH;
return 0;
}
size = 1 << new_hash1->size_bits;
for (i = 0; i < size; i++) {
hlist_for_each_entry(entry, &new_hash1->buckets[i], hlist) {
/* Only add if in both @new_hash1 and @new_hash2 */
if (__ftrace_lookup_ip(new_hash2, entry->ip) &&
add_hash_entry(*hash, entry->ip) == NULL)
return -ENOMEM;
}
}
/* If nothing intersects, make it the empty set */
if (ftrace_hash_empty(*hash)) {
free_ftrace_hash(*hash);
*hash = EMPTY_HASH;
}
return 0;
}
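For the notrace side the EMPTY_HASH special case inverts: empty means "notrace nothing", so it forces the result empty rather than absorbing everything. A bitmask sketch (illustrative only, not kernel code; `0` stands in for EMPTY_HASH):

```c
#include <assert.h>

#define EMPTY 0u	/* stands in for EMPTY_HASH: notrace excludes nothing */

/* A function stays notraced only if every user notraces it. */
static unsigned int intersect_hash_mask(unsigned int h1, unsigned int h2)
{
	if (h1 == EMPTY || h2 == EMPTY)	/* one user notraces nothing */
		return EMPTY;
	return h1 & h2;			/* otherwise, the common subset */
}
```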
/* Return a new hash that has a union of all @ops->filter_hash entries */
static struct ftrace_hash *append_hashes(struct ftrace_ops *ops)
{
struct ftrace_hash *new_hash;
struct ftrace_ops *subops;
int ret;
new_hash = alloc_ftrace_hash(ops->func_hash->filter_hash->size_bits);
if (!new_hash)
return NULL;
list_for_each_entry(subops, &ops->subop_list, list) {
ret = append_hash(&new_hash, subops->func_hash->filter_hash);
if (ret < 0) {
free_ftrace_hash(new_hash);
return NULL;
}
/* Nothing more to do if new_hash is empty */
if (ftrace_hash_empty(new_hash))
break;
}
return new_hash;
}
/* Make @ops trace everything except what all its subops do not trace */
static struct ftrace_hash *intersect_hashes(struct ftrace_ops *ops)
{
struct ftrace_hash *new_hash = NULL;
struct ftrace_ops *subops;
int size_bits;
int ret;
list_for_each_entry(subops, &ops->subop_list, list) {
struct ftrace_hash *next_hash;
if (!new_hash) {
size_bits = subops->func_hash->notrace_hash->size_bits;
new_hash = alloc_and_copy_ftrace_hash(size_bits, ops->func_hash->notrace_hash);
if (!new_hash)
return NULL;
continue;
}
size_bits = new_hash->size_bits;
next_hash = new_hash;
new_hash = alloc_ftrace_hash(size_bits);
ret = intersect_hash(&new_hash, next_hash, subops->func_hash->notrace_hash);
free_ftrace_hash(next_hash);
if (ret < 0) {
free_ftrace_hash(new_hash);
return NULL;
}
/* Nothing more to do if new_hash is empty */
if (ftrace_hash_empty(new_hash))
break;
}
return new_hash;
}
static bool ops_equal(struct ftrace_hash *A, struct ftrace_hash *B)
{
struct ftrace_func_entry *entry;
int size;
int i;
if (ftrace_hash_empty(A))
return ftrace_hash_empty(B);
if (ftrace_hash_empty(B))
return ftrace_hash_empty(A);
if (A->count != B->count)
return false;
size = 1 << A->size_bits;
for (i = 0; i < size; i++) {
hlist_for_each_entry(entry, &A->buckets[i], hlist) {
if (!__ftrace_lookup_ip(B, entry->ip))
return false;
}
}
return true;
}
static void ftrace_ops_update_code(struct ftrace_ops *ops,
struct ftrace_ops_hash *old_hash);
static int __ftrace_hash_move_and_update_ops(struct ftrace_ops *ops,
struct ftrace_hash **orig_hash,
struct ftrace_hash *hash,
int enable)
{
struct ftrace_ops_hash old_hash_ops;
struct ftrace_hash *old_hash;
int ret;
old_hash = *orig_hash;
old_hash_ops.filter_hash = ops->func_hash->filter_hash;
old_hash_ops.notrace_hash = ops->func_hash->notrace_hash;
ret = ftrace_hash_move(ops, enable, orig_hash, hash);
if (!ret) {
ftrace_ops_update_code(ops, &old_hash_ops);
free_ftrace_hash_rcu(old_hash);
}
return ret;
}
static int ftrace_update_ops(struct ftrace_ops *ops, struct ftrace_hash *filter_hash,
struct ftrace_hash *notrace_hash)
{
int ret;
if (!ops_equal(filter_hash, ops->func_hash->filter_hash)) {
ret = __ftrace_hash_move_and_update_ops(ops, &ops->func_hash->filter_hash,
filter_hash, 1);
if (ret < 0)
return ret;
}
if (!ops_equal(notrace_hash, ops->func_hash->notrace_hash)) {
ret = __ftrace_hash_move_and_update_ops(ops, &ops->func_hash->notrace_hash,
notrace_hash, 0);
if (ret < 0)
return ret;
}
return 0;
}
/**
* ftrace_startup_subops - enable tracing for subops of an ops
* @ops: Manager ops (used to pick all the functions of its subops)
* @subops: A new ops to add to @ops
* @command: Extra commands to use to enable tracing
*
* The @ops is a manager @ops that has the filter that includes all the functions
* that its list of subops are tracing. Adding a new @subops will add the
* functions of @subops to @ops.
*/
int ftrace_startup_subops(struct ftrace_ops *ops, struct ftrace_ops *subops, int command)
{
struct ftrace_hash *filter_hash;
struct ftrace_hash *notrace_hash;
struct ftrace_hash *save_filter_hash;
struct ftrace_hash *save_notrace_hash;
int size_bits;
int ret;
if (unlikely(ftrace_disabled))
return -ENODEV;
ftrace_ops_init(ops);
ftrace_ops_init(subops);
if (WARN_ON_ONCE(subops->flags & FTRACE_OPS_FL_ENABLED))
return -EBUSY;
/* Make everything canonical (Just in case!) */
if (!ops->func_hash->filter_hash)
ops->func_hash->filter_hash = EMPTY_HASH;
if (!ops->func_hash->notrace_hash)
ops->func_hash->notrace_hash = EMPTY_HASH;
if (!subops->func_hash->filter_hash)
subops->func_hash->filter_hash = EMPTY_HASH;
if (!subops->func_hash->notrace_hash)
subops->func_hash->notrace_hash = EMPTY_HASH;
/* For the first subops to ops just enable it normally */
if (list_empty(&ops->subop_list)) {
/* Just use the subops hashes */
filter_hash = copy_hash(subops->func_hash->filter_hash);
notrace_hash = copy_hash(subops->func_hash->notrace_hash);
if (!filter_hash || !notrace_hash) {
free_ftrace_hash(filter_hash);
free_ftrace_hash(notrace_hash);
return -ENOMEM;
}
save_filter_hash = ops->func_hash->filter_hash;
save_notrace_hash = ops->func_hash->notrace_hash;
ops->func_hash->filter_hash = filter_hash;
ops->func_hash->notrace_hash = notrace_hash;
list_add(&subops->list, &ops->subop_list);
ret = ftrace_startup(ops, command);
if (ret < 0) {
list_del(&subops->list);
ops->func_hash->filter_hash = save_filter_hash;
ops->func_hash->notrace_hash = save_notrace_hash;
free_ftrace_hash(filter_hash);
free_ftrace_hash(notrace_hash);
} else {
free_ftrace_hash(save_filter_hash);
free_ftrace_hash(save_notrace_hash);
subops->flags |= FTRACE_OPS_FL_ENABLED | FTRACE_OPS_FL_SUBOP;
subops->managed = ops;
}
return ret;
}
/*
* Here there's already something attached. Here are the rules:
* o If either filter_hash is empty then the final stays empty
* o Otherwise, the final is a superset of both hashes
* o If either notrace_hash is empty then the final stays empty
* o Otherwise, the final is an intersection between the hashes
*/
if (ftrace_hash_empty(ops->func_hash->filter_hash) ||
ftrace_hash_empty(subops->func_hash->filter_hash)) {
filter_hash = EMPTY_HASH;
} else {
size_bits = max(ops->func_hash->filter_hash->size_bits,
subops->func_hash->filter_hash->size_bits);
filter_hash = alloc_and_copy_ftrace_hash(size_bits, ops->func_hash->filter_hash);
if (!filter_hash)
return -ENOMEM;
ret = append_hash(&filter_hash, subops->func_hash->filter_hash);
if (ret < 0) {
free_ftrace_hash(filter_hash);
return ret;
}
}
if (ftrace_hash_empty(ops->func_hash->notrace_hash) ||
ftrace_hash_empty(subops->func_hash->notrace_hash)) {
notrace_hash = EMPTY_HASH;
} else {
size_bits = max(ops->func_hash->notrace_hash->size_bits,
subops->func_hash->notrace_hash->size_bits);
notrace_hash = alloc_ftrace_hash(size_bits);
if (!notrace_hash) {
free_ftrace_hash(filter_hash);
return -ENOMEM;
}
ret = intersect_hash(&notrace_hash, ops->func_hash->notrace_hash,
subops->func_hash->notrace_hash);
if (ret < 0) {
free_ftrace_hash(filter_hash);
free_ftrace_hash(notrace_hash);
return ret;
}
}
list_add(&subops->list, &ops->subop_list);
ret = ftrace_update_ops(ops, filter_hash, notrace_hash);
free_ftrace_hash(filter_hash);
free_ftrace_hash(notrace_hash);
if (ret < 0) {
list_del(&subops->list);
} else {
subops->flags |= FTRACE_OPS_FL_ENABLED | FTRACE_OPS_FL_SUBOP;
subops->managed = ops;
}
return ret;
}
/**
* ftrace_shutdown_subops - Remove a subops from a manager ops
* @ops: A manager ops to remove @subops from
* @subops: The subops to remove from @ops
* @command: Any extra command flags to add to modifying the text
*
* Removes the functions being traced by the @subops from @ops. Note, it
* will not affect functions that are being traced by other subops that
* still exist in @ops.
*
* If the last subops is removed from @ops, then @ops is shutdown normally.
*/
int ftrace_shutdown_subops(struct ftrace_ops *ops, struct ftrace_ops *subops, int command)
{
struct ftrace_hash *filter_hash;
struct ftrace_hash *notrace_hash;
int ret;
if (unlikely(ftrace_disabled))
return -ENODEV;
if (WARN_ON_ONCE(!(subops->flags & FTRACE_OPS_FL_ENABLED)))
return -EINVAL;
list_del(&subops->list);
if (list_empty(&ops->subop_list)) {
/* Last one, just disable the current ops */
ret = ftrace_shutdown(ops, command);
if (ret < 0) {
list_add(&subops->list, &ops->subop_list);
return ret;
}
subops->flags &= ~FTRACE_OPS_FL_ENABLED;
free_ftrace_hash(ops->func_hash->filter_hash);
free_ftrace_hash(ops->func_hash->notrace_hash);
ops->func_hash->filter_hash = EMPTY_HASH;
ops->func_hash->notrace_hash = EMPTY_HASH;
subops->flags &= ~(FTRACE_OPS_FL_ENABLED | FTRACE_OPS_FL_SUBOP);
subops->managed = NULL;
return 0;
}
/* Rebuild the hashes without subops */
filter_hash = append_hashes(ops);
notrace_hash = intersect_hashes(ops);
if (!filter_hash || !notrace_hash) {
free_ftrace_hash(filter_hash);
free_ftrace_hash(notrace_hash);
list_add(&subops->list, &ops->subop_list);
return -ENOMEM;
}
ret = ftrace_update_ops(ops, filter_hash, notrace_hash);
if (ret < 0) {
list_add(&subops->list, &ops->subop_list);
} else {
subops->flags &= ~(FTRACE_OPS_FL_ENABLED | FTRACE_OPS_FL_SUBOP);
subops->managed = NULL;
}
free_ftrace_hash(filter_hash);
free_ftrace_hash(notrace_hash);
return ret;
}
static int ftrace_hash_move_and_update_subops(struct ftrace_ops *subops,
struct ftrace_hash **orig_subhash,
struct ftrace_hash *hash,
int enable)
{
struct ftrace_ops *ops = subops->managed;
struct ftrace_hash **orig_hash;
struct ftrace_hash *save_hash;
struct ftrace_hash *new_hash;
int ret;
/* Manager ops can not be subops (yet) */
if (WARN_ON_ONCE(!ops || ops->flags & FTRACE_OPS_FL_SUBOP))
return -EINVAL;
/* Move the new hash over to the subops hash */
save_hash = *orig_subhash;
*orig_subhash = __ftrace_hash_move(hash);
if (!*orig_subhash) {
*orig_subhash = save_hash;
return -ENOMEM;
}
/* Create a new_hash to hold the ops new functions */
if (enable) {
orig_hash = &ops->func_hash->filter_hash;
new_hash = append_hashes(ops);
} else {
orig_hash = &ops->func_hash->notrace_hash;
new_hash = intersect_hashes(ops);
}
/* Move the hash over to the new hash */
ret = __ftrace_hash_move_and_update_ops(ops, orig_hash, new_hash, enable);
free_ftrace_hash(new_hash);
if (ret) {
/* Put back the original hash */
free_ftrace_hash_rcu(*orig_subhash);
*orig_subhash = save_hash;
} else {
free_ftrace_hash_rcu(save_hash);
}
return ret;
}
static u64 ftrace_update_time;
unsigned long ftrace_update_tot_cnt;
unsigned long ftrace_number_of_pages;
@ -4380,19 +4879,33 @@ static int ftrace_hash_move_and_update_ops(struct ftrace_ops *ops,
struct ftrace_hash *hash,
int enable)
{
struct ftrace_ops_hash old_hash_ops;
struct ftrace_hash *old_hash;
int ret;
if (ops->flags & FTRACE_OPS_FL_SUBOP)
return ftrace_hash_move_and_update_subops(ops, orig_hash, hash, enable);
old_hash = *orig_hash;
old_hash_ops.filter_hash = ops->func_hash->filter_hash;
old_hash_ops.notrace_hash = ops->func_hash->notrace_hash;
ret = ftrace_hash_move(ops, enable, orig_hash, hash);
if (!ret) {
ftrace_ops_update_code(ops, &old_hash_ops);
free_ftrace_hash_rcu(old_hash);
/*
* If this ops is not enabled, it could be sharing its filters
* with a subop. If that's the case, update the subop instead of
* this ops. Shared filters are only allowed to have one ops set
* at a time, and if we update the ops that is not enabled,
* it will not affect subops that share it.
*/
if (!(ops->flags & FTRACE_OPS_FL_ENABLED)) {
struct ftrace_ops *op;
/* Check if any other manager subops maps to this hash */
do_for_each_ftrace_op(op, ftrace_ops_list) {
struct ftrace_ops *subops;
list_for_each_entry(subops, &op->subop_list, list) {
if ((subops->flags & FTRACE_OPS_FL_ENABLED) &&
subops->func_hash == ops->func_hash) {
return ftrace_hash_move_and_update_subops(subops, orig_hash, hash, enable);
}
}
} while_for_each_ftrace_op(op);
}
return ret;
return __ftrace_hash_move_and_update_ops(ops, orig_hash, hash, enable);
}
static bool module_exists(const char *module)
@ -5475,6 +5988,8 @@ EXPORT_SYMBOL_GPL(register_ftrace_direct);
* unregister_ftrace_direct - Remove calls to custom trampoline
* previously registered by register_ftrace_direct for @ops object.
* @ops: The address of the struct ftrace_ops object
* @addr: The address of the direct function that is called by the @ops functions
* @free_filters: Set to true to remove all filters for the ftrace_ops, false otherwise
*
 * This is used to remove direct calls to @addr from the nop locations
 * of the functions registered in @ops (via ftrace_set_filter_ip
@ -7324,6 +7839,7 @@ __init void ftrace_init_global_array_ops(struct trace_array *tr)
tr->ops = &global_ops;
tr->ops->private = tr;
ftrace_init_trace_array(tr);
init_array_fgraph_ops(tr, tr->ops);
}
void ftrace_init_array_ops(struct trace_array *tr, ftrace_func_t func)

View File

@ -15,6 +15,8 @@ extern struct ftrace_ops global_ops;
int ftrace_startup(struct ftrace_ops *ops, int command);
int ftrace_shutdown(struct ftrace_ops *ops, int command);
int ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip, void *regs);
int ftrace_startup_subops(struct ftrace_ops *ops, struct ftrace_ops *subops, int command);
int ftrace_shutdown_subops(struct ftrace_ops *ops, struct ftrace_ops *subops, int command);
#else /* !CONFIG_DYNAMIC_FTRACE */
@ -38,14 +40,26 @@ ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip, void *regs)
{
return 1;
}
static inline int ftrace_startup_subops(struct ftrace_ops *ops, struct ftrace_ops *subops, int command)
{
return -EINVAL;
}
static inline int ftrace_shutdown_subops(struct ftrace_ops *ops, struct ftrace_ops *subops, int command)
{
return -EINVAL;
}
#endif /* CONFIG_DYNAMIC_FTRACE */
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
extern int ftrace_graph_active;
void update_function_graph_func(void);
# ifdef CONFIG_DYNAMIC_FTRACE
extern void fgraph_update_pid_func(void);
# else
static inline void fgraph_update_pid_func(void) {}
# endif
#else /* !CONFIG_FUNCTION_GRAPH_TRACER */
# define ftrace_graph_active 0
static inline void update_function_graph_func(void) { }
static inline void fgraph_update_pid_func(void) {}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
#else /* !CONFIG_FUNCTION_TRACER */

View File

@ -397,6 +397,9 @@ struct trace_array {
struct ftrace_ops *ops;
struct trace_pid_list __rcu *function_pids;
struct trace_pid_list __rcu *function_no_pids;
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
struct fgraph_ops *gops;
#endif
#ifdef CONFIG_DYNAMIC_FTRACE
/* All of these are protected by the ftrace_lock */
struct list_head func_probes;
@ -679,9 +682,8 @@ void trace_latency_header(struct seq_file *m);
void trace_default_header(struct seq_file *m);
void print_trace_header(struct seq_file *m, struct trace_iterator *iter);
void trace_graph_return(struct ftrace_graph_ret *trace);
int trace_graph_entry(struct ftrace_graph_ent *trace);
void set_graph_array(struct trace_array *tr);
void trace_graph_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops);
int trace_graph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops);
void tracing_start_cmdline_record(void);
void tracing_stop_cmdline_record(void);
@ -892,12 +894,59 @@ extern int __trace_graph_entry(struct trace_array *tr,
extern void __trace_graph_return(struct trace_array *tr,
struct ftrace_graph_ret *trace,
unsigned int trace_ctx);
extern void init_array_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops);
extern int allocate_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops);
extern void free_fgraph_ops(struct trace_array *tr);
enum {
TRACE_GRAPH_FL = 1,
/*
* In the very unlikely case that an interrupt came in
* at a start of graph tracing, and we want to trace
* the function in that interrupt, the depth can be greater
* than zero, because of the preempted start of a previous
* trace. In an even more unlikely case, depth could be 2
* if a softirq interrupted the start of graph tracing,
* followed by an interrupt preempting a start of graph
* tracing in the softirq, and depth can even be 3
* if an NMI came in at the start of an interrupt function
* that preempted a softirq start of a function that
* preempted normal context!!!! Luckily, it can't be
* greater than 3, so the next two bits are a mask
* of what the depth is when we set TRACE_GRAPH_FL
*/
TRACE_GRAPH_DEPTH_START_BIT,
TRACE_GRAPH_DEPTH_END_BIT,
/*
* To implement set_graph_notrace, if this bit is set, we ignore
* function graph tracing of called functions, until the return
* function is called to clear it.
*/
TRACE_GRAPH_NOTRACE_BIT,
};
#define TRACE_GRAPH_NOTRACE (1 << TRACE_GRAPH_NOTRACE_BIT)
static inline unsigned long ftrace_graph_depth(unsigned long *task_var)
{
return (*task_var >> TRACE_GRAPH_DEPTH_START_BIT) & 3;
}
static inline void ftrace_graph_set_depth(unsigned long *task_var, int depth)
{
*task_var &= ~(3 << TRACE_GRAPH_DEPTH_START_BIT);
*task_var |= (depth & 3) << TRACE_GRAPH_DEPTH_START_BIT;
}
#ifdef CONFIG_DYNAMIC_FTRACE
extern struct ftrace_hash __rcu *ftrace_graph_hash;
extern struct ftrace_hash __rcu *ftrace_graph_notrace_hash;
static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
static inline int
ftrace_graph_addr(unsigned long *task_var, struct ftrace_graph_ent *trace)
{
unsigned long addr = trace->func;
int ret = 0;
@ -919,13 +968,12 @@ static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
}
if (ftrace_lookup_ip(hash, addr)) {
/*
* This needs to be cleared on the return functions
* when the depth is zero.
*/
trace_recursion_set(TRACE_GRAPH_BIT);
trace_recursion_set_depth(trace->depth);
*task_var |= TRACE_GRAPH_FL;
ftrace_graph_set_depth(task_var, trace->depth);
/*
* If no irqs are to be traced, but a set_graph_function
@ -944,11 +992,14 @@ out:
return ret;
}
static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace)
static inline void
ftrace_graph_addr_finish(struct fgraph_ops *gops, struct ftrace_graph_ret *trace)
{
if (trace_recursion_test(TRACE_GRAPH_BIT) &&
trace->depth == trace_recursion_depth())
trace_recursion_clear(TRACE_GRAPH_BIT);
unsigned long *task_var = fgraph_get_task_var(gops);
if ((*task_var & TRACE_GRAPH_FL) &&
trace->depth == ftrace_graph_depth(task_var))
*task_var &= ~TRACE_GRAPH_FL;
}
static inline int ftrace_graph_notrace_addr(unsigned long addr)
@ -974,7 +1025,7 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr)
return ret;
}
#else
static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
static inline int ftrace_graph_addr(unsigned long *task_var, struct ftrace_graph_ent *trace)
{
return 1;
}
@ -983,27 +1034,37 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr)
{
return 0;
}
static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace)
static inline void ftrace_graph_addr_finish(struct fgraph_ops *gops, struct ftrace_graph_ret *trace)
{ }
#endif /* CONFIG_DYNAMIC_FTRACE */
extern unsigned int fgraph_max_depth;
static inline bool ftrace_graph_ignore_func(struct ftrace_graph_ent *trace)
static inline bool
ftrace_graph_ignore_func(struct fgraph_ops *gops, struct ftrace_graph_ent *trace)
{
unsigned long *task_var = fgraph_get_task_var(gops);
/* trace it when it is-nested-in or is a function enabled. */
return !(trace_recursion_test(TRACE_GRAPH_BIT) ||
ftrace_graph_addr(trace)) ||
return !((*task_var & TRACE_GRAPH_FL) ||
ftrace_graph_addr(task_var, trace)) ||
(trace->depth < 0) ||
(fgraph_max_depth && trace->depth >= fgraph_max_depth);
}
void fgraph_init_ops(struct ftrace_ops *dst_ops,
struct ftrace_ops *src_ops);
#else /* CONFIG_FUNCTION_GRAPH_TRACER */
static inline enum print_line_t
print_graph_function_flags(struct trace_iterator *iter, u32 flags)
{
return TRACE_TYPE_UNHANDLED;
}
static inline void free_fgraph_ops(struct trace_array *tr) { }
/* ftrace_ops may not be defined */
#define init_array_fgraph_ops(tr, ops) do { } while (0)
#define allocate_fgraph_ops(tr, ops) ({ 0; })
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
extern struct list_head ftrace_pids;

View File

@ -80,6 +80,7 @@ void ftrace_free_ftrace_ops(struct trace_array *tr)
int ftrace_create_function_files(struct trace_array *tr,
struct dentry *parent)
{
int ret;
/*
* The top level array uses the "global_ops", and the files are
* created on boot up.
@ -90,6 +91,12 @@ int ftrace_create_function_files(struct trace_array *tr,
if (!tr->ops)
return -EINVAL;
ret = allocate_fgraph_ops(tr, tr->ops);
if (ret) {
kfree(tr->ops);
return ret;
}
ftrace_create_filter_files(tr->ops, parent);
return 0;
@ -99,6 +106,7 @@ void ftrace_destroy_function_files(struct trace_array *tr)
{
ftrace_destroy_filter_files(tr->ops);
ftrace_free_ftrace_ops(tr);
free_fgraph_ops(tr);
}
static ftrace_func_t select_trace_function(u32 flags_val)
@ -223,6 +231,7 @@ function_stack_trace_call(unsigned long ip, unsigned long parent_ip,
long disabled;
int cpu;
unsigned int trace_ctx;
int skip = STACK_SKIP;
if (unlikely(!tr->function_enabled))
return;
@ -239,7 +248,11 @@ function_stack_trace_call(unsigned long ip, unsigned long parent_ip,
if (likely(disabled == 1)) {
trace_ctx = tracing_gen_ctx_flags(flags);
trace_function(tr, ip, parent_ip, trace_ctx);
__trace_stack(tr, trace_ctx, STACK_SKIP);
#ifdef CONFIG_UNWINDER_FRAME_POINTER
if (ftrace_pids_enabled(op))
skip++;
#endif
__trace_stack(tr, trace_ctx, skip);
}
atomic_dec(&data->disabled);

View File

@ -83,8 +83,6 @@ static struct tracer_flags tracer_flags = {
.opts = trace_opts
};
static struct trace_array *graph_array;
/*
* DURATION column is being also used to display IRQ signs,
* following values are used by print_graph_irq and others
@ -129,9 +127,11 @@ static inline int ftrace_graph_ignore_irqs(void)
return in_hardirq();
}
int trace_graph_entry(struct ftrace_graph_ent *trace)
int trace_graph_entry(struct ftrace_graph_ent *trace,
struct fgraph_ops *gops)
{
struct trace_array *tr = graph_array;
unsigned long *task_var = fgraph_get_task_var(gops);
struct trace_array *tr = gops->private;
struct trace_array_cpu *data;
unsigned long flags;
unsigned int trace_ctx;
@ -139,7 +139,7 @@ int trace_graph_entry(struct ftrace_graph_ent *trace)
int ret;
int cpu;
if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT))
if (*task_var & TRACE_GRAPH_NOTRACE)
return 0;
/*
@ -150,7 +150,7 @@ int trace_graph_entry(struct ftrace_graph_ent *trace)
* returning from the function.
*/
if (ftrace_graph_notrace_addr(trace->func)) {
trace_recursion_set(TRACE_GRAPH_NOTRACE_BIT);
*task_var |= TRACE_GRAPH_NOTRACE;
/*
* Need to return 1 to have the return called
* that will clear the NOTRACE bit.
@ -161,7 +161,7 @@ int trace_graph_entry(struct ftrace_graph_ent *trace)
if (!ftrace_trace_task(tr))
return 0;
if (ftrace_graph_ignore_func(trace))
if (ftrace_graph_ignore_func(gops, trace))
return 0;
if (ftrace_graph_ignore_irqs())
@ -238,19 +238,21 @@ void __trace_graph_return(struct trace_array *tr,
trace_buffer_unlock_commit_nostack(buffer, event);
}
void trace_graph_return(struct ftrace_graph_ret *trace)
void trace_graph_return(struct ftrace_graph_ret *trace,
struct fgraph_ops *gops)
{
struct trace_array *tr = graph_array;
unsigned long *task_var = fgraph_get_task_var(gops);
struct trace_array *tr = gops->private;
struct trace_array_cpu *data;
unsigned long flags;
unsigned int trace_ctx;
long disabled;
int cpu;
ftrace_graph_addr_finish(trace);
ftrace_graph_addr_finish(gops, trace);
if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) {
trace_recursion_clear(TRACE_GRAPH_NOTRACE_BIT);
if (*task_var & TRACE_GRAPH_NOTRACE) {
*task_var &= ~TRACE_GRAPH_NOTRACE;
return;
}
@ -266,18 +268,10 @@ void trace_graph_return(struct ftrace_graph_ret *trace)
local_irq_restore(flags);
}
void set_graph_array(struct trace_array *tr)
static void trace_graph_thresh_return(struct ftrace_graph_ret *trace,
struct fgraph_ops *gops)
{
graph_array = tr;
/* Make graph_array visible before we start tracing */
smp_mb();
}
static void trace_graph_thresh_return(struct ftrace_graph_ret *trace)
{
ftrace_graph_addr_finish(trace);
ftrace_graph_addr_finish(gops, trace);
if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) {
trace_recursion_clear(TRACE_GRAPH_NOTRACE_BIT);
@ -288,28 +282,60 @@ static void trace_graph_thresh_return(struct ftrace_graph_ret *trace)
(trace->rettime - trace->calltime < tracing_thresh))
return;
else
trace_graph_return(trace);
trace_graph_return(trace, gops);
}
static struct fgraph_ops funcgraph_thresh_ops = {
.entryfunc = &trace_graph_entry,
.retfunc = &trace_graph_thresh_return,
};
static struct fgraph_ops funcgraph_ops = {
.entryfunc = &trace_graph_entry,
.retfunc = &trace_graph_return,
};
int allocate_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops)
{
struct fgraph_ops *gops;
gops = kzalloc(sizeof(*gops), GFP_KERNEL);
if (!gops)
return -ENOMEM;
gops->entryfunc = &trace_graph_entry;
gops->retfunc = &trace_graph_return;
tr->gops = gops;
gops->private = tr;
fgraph_init_ops(&gops->ops, ops);
return 0;
}
void free_fgraph_ops(struct trace_array *tr)
{
kfree(tr->gops);
}
__init void init_array_fgraph_ops(struct trace_array *tr, struct ftrace_ops *ops)
{
tr->gops = &funcgraph_ops;
funcgraph_ops.private = tr;
fgraph_init_ops(&tr->gops->ops, ops);
}
static int graph_trace_init(struct trace_array *tr)
{
int ret;
set_graph_array(tr);
tr->gops->entryfunc = trace_graph_entry;
if (tracing_thresh)
ret = register_ftrace_graph(&funcgraph_thresh_ops);
tr->gops->retfunc = trace_graph_thresh_return;
else
ret = register_ftrace_graph(&funcgraph_ops);
tr->gops->retfunc = trace_graph_return;
/* Make sure gops functions are visible before we start tracing */
smp_mb();
ret = register_ftrace_graph(tr->gops);
if (ret)
return ret;
tracing_start_cmdline_record();
@ -320,10 +346,7 @@ static int graph_trace_init(struct trace_array *tr)
static void graph_trace_reset(struct trace_array *tr)
{
tracing_stop_cmdline_record();
if (tracing_thresh)
unregister_ftrace_graph(&funcgraph_thresh_ops);
else
unregister_ftrace_graph(&funcgraph_ops);
unregister_ftrace_graph(tr->gops);
}
static int graph_trace_update_thresh(struct trace_array *tr)
@ -1362,6 +1385,7 @@ static struct tracer graph_trace __tracer_data = {
.print_header = print_graph_headers,
.flags = &tracer_flags,
.set_flag = func_graph_set_flag,
.allow_instances = true,
#ifdef CONFIG_FTRACE_SELFTEST
.selftest = trace_selftest_startup_function_graph,
#endif

View File

@ -175,7 +175,8 @@ static int irqsoff_display_graph(struct trace_array *tr, int set)
return start_irqsoff_tracer(irqsoff_trace, set);
}
static int irqsoff_graph_entry(struct ftrace_graph_ent *trace)
static int irqsoff_graph_entry(struct ftrace_graph_ent *trace,
struct fgraph_ops *gops)
{
struct trace_array *tr = irqsoff_trace;
struct trace_array_cpu *data;
@ -183,7 +184,7 @@ static int irqsoff_graph_entry(struct ftrace_graph_ent *trace)
unsigned int trace_ctx;
int ret;
if (ftrace_graph_ignore_func(trace))
if (ftrace_graph_ignore_func(gops, trace))
return 0;
/*
* Do not trace a function if it's filtered by set_graph_notrace.
@ -205,14 +206,15 @@ static int irqsoff_graph_entry(struct ftrace_graph_ent *trace)
return ret;
}
static void irqsoff_graph_return(struct ftrace_graph_ret *trace)
static void irqsoff_graph_return(struct ftrace_graph_ret *trace,
struct fgraph_ops *gops)
{
struct trace_array *tr = irqsoff_trace;
struct trace_array_cpu *data;
unsigned long flags;
unsigned int trace_ctx;
ftrace_graph_addr_finish(trace);
ftrace_graph_addr_finish(gops, trace);
if (!func_prolog_dec(tr, &data, &flags))
return;

View File

@ -112,14 +112,15 @@ static int wakeup_display_graph(struct trace_array *tr, int set)
return start_func_tracer(tr, set);
}
static int wakeup_graph_entry(struct ftrace_graph_ent *trace)
static int wakeup_graph_entry(struct ftrace_graph_ent *trace,
struct fgraph_ops *gops)
{
struct trace_array *tr = wakeup_trace;
struct trace_array_cpu *data;
unsigned int trace_ctx;
int ret = 0;
if (ftrace_graph_ignore_func(trace))
if (ftrace_graph_ignore_func(gops, trace))
return 0;
/*
* Do not trace a function if it's filtered by set_graph_notrace.
@ -141,13 +142,14 @@ static int wakeup_graph_entry(struct ftrace_graph_ent *trace)
return ret;
}
static void wakeup_graph_return(struct ftrace_graph_ret *trace)
static void wakeup_graph_return(struct ftrace_graph_ret *trace,
struct fgraph_ops *gops)
{
struct trace_array *tr = wakeup_trace;
struct trace_array_cpu *data;
unsigned int trace_ctx;
ftrace_graph_addr_finish(trace);
ftrace_graph_addr_finish(gops, trace);
if (!func_prolog_preempt_disable(tr, &data, &trace_ctx))
return;

View File

@ -756,13 +756,262 @@ trace_selftest_startup_function(struct tracer *trace, struct trace_array *tr)
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
#ifdef CONFIG_DYNAMIC_FTRACE
#define CHAR_NUMBER 123
#define SHORT_NUMBER 12345
#define WORD_NUMBER 1234567890
#define LONG_NUMBER 1234567890123456789LL
#define ERRSTR_BUFLEN 128
struct fgraph_fixture {
struct fgraph_ops gops;
int store_size;
const char *store_type_name;
char error_str_buf[ERRSTR_BUFLEN];
char *error_str;
};
static __init int store_entry(struct ftrace_graph_ent *trace,
struct fgraph_ops *gops)
{
struct fgraph_fixture *fixture = container_of(gops, struct fgraph_fixture, gops);
const char *type = fixture->store_type_name;
int size = fixture->store_size;
void *p;
p = fgraph_reserve_data(gops->idx, size);
if (!p) {
snprintf(fixture->error_str_buf, ERRSTR_BUFLEN,
"Failed to reserve %s\n", type);
return 0;
}
switch (size) {
case 1:
*(char *)p = CHAR_NUMBER;
break;
case 2:
*(short *)p = SHORT_NUMBER;
break;
case 4:
*(int *)p = WORD_NUMBER;
break;
case 8:
*(long long *)p = LONG_NUMBER;
break;
}
return 1;
}
static __init void store_return(struct ftrace_graph_ret *trace,
struct fgraph_ops *gops)
{
struct fgraph_fixture *fixture = container_of(gops, struct fgraph_fixture, gops);
const char *type = fixture->store_type_name;
long long expect = 0;
long long found = -1;
int size;
char *p;
p = fgraph_retrieve_data(gops->idx, &size);
if (!p) {
snprintf(fixture->error_str_buf, ERRSTR_BUFLEN,
"Failed to retrieve %s\n", type);
return;
}
if (fixture->store_size > size) {
snprintf(fixture->error_str_buf, ERRSTR_BUFLEN,
"Retrieved size %d is smaller than expected %d\n",
size, (int)fixture->store_size);
return;
}
switch (fixture->store_size) {
case 1:
expect = CHAR_NUMBER;
found = *(char *)p;
break;
case 2:
expect = SHORT_NUMBER;
found = *(short *)p;
break;
case 4:
expect = WORD_NUMBER;
found = *(int *)p;
break;
case 8:
expect = LONG_NUMBER;
found = *(long long *)p;
break;
}
if (found != expect) {
snprintf(fixture->error_str_buf, ERRSTR_BUFLEN,
"%s returned not %lld but %lld\n", type, expect, found);
return;
}
fixture->error_str = NULL;
}
static int __init init_fgraph_fixture(struct fgraph_fixture *fixture)
{
char *func_name;
int len;
snprintf(fixture->error_str_buf, ERRSTR_BUFLEN,
"Failed to execute storage %s\n", fixture->store_type_name);
fixture->error_str = fixture->error_str_buf;
func_name = "*" __stringify(DYN_FTRACE_TEST_NAME);
len = strlen(func_name);
return ftrace_set_filter(&fixture->gops.ops, func_name, len, 1);
}
/* Test fgraph storage for each size */
static int __init test_graph_storage_single(struct fgraph_fixture *fixture)
{
int size = fixture->store_size;
int ret;
pr_cont("PASSED\n");
pr_info("Testing fgraph storage of %d byte%s: ", size, str_plural(size));
ret = init_fgraph_fixture(fixture);
if (ret && ret != -ENODEV) {
pr_cont("*Could not set filter* ");
return -1;
}
ret = register_ftrace_graph(&fixture->gops);
if (ret) {
pr_warn("Failed to init store_bytes fgraph tracing\n");
return -1;
}
DYN_FTRACE_TEST_NAME();
unregister_ftrace_graph(&fixture->gops);
if (fixture->error_str) {
pr_cont("*** %s ***", fixture->error_str);
return -1;
}
return 0;
}
static struct fgraph_fixture store_bytes[4] __initdata = {
[0] = {
.gops = {
.entryfunc = store_entry,
.retfunc = store_return,
},
.store_size = 1,
.store_type_name = "byte",
},
[1] = {
.gops = {
.entryfunc = store_entry,
.retfunc = store_return,
},
.store_size = 2,
.store_type_name = "short",
},
[2] = {
.gops = {
.entryfunc = store_entry,
.retfunc = store_return,
},
.store_size = 4,
.store_type_name = "word",
},
[3] = {
.gops = {
.entryfunc = store_entry,
.retfunc = store_return,
},
.store_size = 8,
.store_type_name = "long long",
},
};
static __init int test_graph_storage_multi(void)
{
struct fgraph_fixture *fixture;
bool printed = false;
int i, ret;
pr_cont("PASSED\n");
pr_info("Testing multiple fgraph storage on a function: ");
for (i = 0; i < ARRAY_SIZE(store_bytes); i++) {
fixture = &store_bytes[i];
ret = init_fgraph_fixture(fixture);
if (ret && ret != -ENODEV) {
pr_cont("*Could not set filter* ");
printed = true;
goto out;
}
ret = register_ftrace_graph(&fixture->gops);
if (ret) {
pr_warn("Failed to init store_bytes fgraph tracing\n");
printed = true;
goto out;
}
}
DYN_FTRACE_TEST_NAME();
out:
while (--i >= 0) {
fixture = &store_bytes[i];
unregister_ftrace_graph(&fixture->gops);
if (fixture->error_str && !printed) {
pr_cont("*** %s ***", fixture->error_str);
printed = true;
}
}
return printed ? -1 : 0;
}
/* Test the storage passed across function_graph entry and return */
static __init int test_graph_storage(void)
{
int ret;
ret = test_graph_storage_single(&store_bytes[0]);
if (ret)
return ret;
ret = test_graph_storage_single(&store_bytes[1]);
if (ret)
return ret;
ret = test_graph_storage_single(&store_bytes[2]);
if (ret)
return ret;
ret = test_graph_storage_single(&store_bytes[3]);
if (ret)
return ret;
ret = test_graph_storage_multi();
if (ret)
return ret;
return 0;
}
#else
static inline int test_graph_storage(void) { return 0; }
#endif /* CONFIG_DYNAMIC_FTRACE */
/* Maximum number of functions to trace before diagnosing a hang */
#define GRAPH_MAX_FUNC_TEST 100000000
static unsigned int graph_hang_thresh;
/* Wrap the real function entry probe to avoid possible hanging */
static int trace_graph_entry_watchdog(struct ftrace_graph_ent *trace)
static int trace_graph_entry_watchdog(struct ftrace_graph_ent *trace,
struct fgraph_ops *gops)
{
/* This is harmlessly racy, we want to approximately detect a hang */
if (unlikely(++graph_hang_thresh > GRAPH_MAX_FUNC_TEST)) {
@ -776,7 +1025,7 @@ static int trace_graph_entry_watchdog(struct ftrace_graph_ent *trace)
return 0;
}
return trace_graph_entry(trace);
return trace_graph_entry(trace, gops);
}
static struct fgraph_ops fgraph_ops __initdata = {
@ -812,7 +1061,7 @@ trace_selftest_startup_function_graph(struct tracer *trace,
* to detect and recover from possible hangs
*/
tracing_reset_online_cpus(&tr->array_buffer);
set_graph_array(tr);
fgraph_ops.private = tr;
ret = register_ftrace_graph(&fgraph_ops);
if (ret) {
warn_failed_init_tracer(trace, ret);
@ -855,7 +1104,7 @@ trace_selftest_startup_function_graph(struct tracer *trace,
cond_resched();
tracing_reset_online_cpus(&tr->array_buffer);
set_graph_array(tr);
fgraph_ops.private = tr;
/*
* Some archs *cough*PowerPC*cough* add characters to the
@ -912,6 +1161,8 @@ trace_selftest_startup_function_graph(struct tracer *trace,
ftrace_set_global_filter(NULL, 0, 1);
#endif
ret = test_graph_storage();
/* Don't test dynamic tracing, the function tracer already did */
out:
/* Stop it if we failed */

View File

@ -0,0 +1,103 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: ftrace - function graph filters
# requires: set_ftrace_filter function_graph:tracer
# Make sure that function graph filtering works
INSTANCE1="instances/test1_$$"
INSTANCE2="instances/test2_$$"
INSTANCE3="instances/test3_$$"
WD=`pwd`
do_reset() {
cd $WD
if [ -d $INSTANCE1 ]; then
echo nop > $INSTANCE1/current_tracer
rmdir $INSTANCE1
fi
if [ -d $INSTANCE2 ]; then
echo nop > $INSTANCE2/current_tracer
rmdir $INSTANCE2
fi
if [ -d $INSTANCE3 ]; then
echo nop > $INSTANCE3/current_tracer
rmdir $INSTANCE3
fi
}
mkdir $INSTANCE1
if ! grep -q function_graph $INSTANCE1/available_tracers; then
echo "function_graph not allowed with instances"
rmdir $INSTANCE1
exit_unsupported
fi
mkdir $INSTANCE2
mkdir $INSTANCE3
fail() { # msg
do_reset
echo $1
exit_fail
}
disable_tracing
clear_trace
do_test() {
REGEX=$1
TEST=$2
# filter something, schedule is always good
if ! echo "$REGEX" > set_ftrace_filter; then
fail "can not enable filter $REGEX"
fi
echo > trace
echo function_graph > current_tracer
enable_tracing
sleep 1
# search for functions (has "{" or ";" on the line)
echo 0 > tracing_on
count=`cat trace | grep -v '^#' | grep -e '{' -e ';' | grep -v "$TEST" | wc -l`
echo 1 > tracing_on
if [ $count -ne 0 ]; then
fail "Graph filtering not working by itself against $TEST?"
fi
# Make sure we did find something
echo 0 > tracing_on
count=`cat trace | grep -v '^#' | grep -e '{' -e ';' | grep "$TEST" | wc -l`
echo 1 > tracing_on
if [ $count -eq 0 ]; then
fail "No traces found with $TEST?"
fi
}
do_test '*sched*' 'sched'
cd $INSTANCE1
do_test '*lock*' 'lock'
cd $WD
cd $INSTANCE2
do_test '*rcu*' 'rcu'
cd $WD
cd $INSTANCE3
echo function_graph > current_tracer
sleep 1
count=`cat trace | grep -v '^#' | grep -e '{' -e ';' | wc -l`
if [ $count -eq 0 ]; then
fail "No traces found with all tracing?"
fi
cd $WD
echo nop > current_tracer
echo nop > $INSTANCE1/current_tracer
echo nop > $INSTANCE2/current_tracer
echo nop > $INSTANCE3/current_tracer
do_reset
exit 0

View File

@ -8,12 +8,18 @@
# Also test it on an instance directory
do_function_fork=1
do_funcgraph_proc=1
if [ ! -f options/function-fork ]; then
do_function_fork=0
echo "no option for function-fork found. Option will not be tested."
fi
if [ ! -f options/funcgraph-proc ]; then
do_funcgraph_proc=0
echo "no option for funcgraph-proc found. Option will not be tested."
fi
read PID _ < /proc/self/stat
if [ $do_function_fork -eq 1 ]; then
@ -21,12 +27,19 @@ if [ $do_function_fork -eq 1 ]; then
orig_value=`grep function-fork trace_options`
fi
if [ $do_funcgraph_proc -eq 1 ]; then
orig_value2=`cat options/funcgraph-proc`
echo 1 > options/funcgraph-proc
fi
do_reset() {
if [ $do_function_fork -eq 0 ]; then
return
if [ $do_function_fork -eq 1 ]; then
echo $orig_value > trace_options
fi
echo $orig_value > trace_options
if [ $do_funcgraph_proc -eq 1 ]; then
echo $orig_value2 > options/funcgraph-proc
fi
}
fail() { # msg
@ -36,13 +49,15 @@ fail() { # msg
}
do_test() {
TRACER=$1
disable_tracing
echo do_execve* > set_ftrace_filter
echo $FUNCTION_FORK >> set_ftrace_filter
echo $PID > set_ftrace_pid
echo function > current_tracer
echo $TRACER > current_tracer
if [ $do_function_fork -eq 1 ]; then
# don't allow children to be traced
@ -82,7 +97,11 @@ do_test() {
fi
}
do_test
do_test function
if grep -s function_graph available_tracers; then
do_test function_graph
fi
do_reset
exit 0