linux/kernel/trace
Linus Torvalds fa4b851b4a Tracing fixes for v6.8-rc7:

Merge tag 'trace-ring-buffer-v6.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull tracing fixes from Steven Rostedt:

 - Do not allow large strings (> 4096) as single write to trace_marker

   The size of a string written into trace_marker was determined by the
   size of the sub-buffer in the ring buffer. That size depends on the
   architecture's PAGE_SIZE, as the sub-buffer can be mapped into user
   space. But on PowerPC, where PAGE_SIZE is 64K, that made the limit of
   a string written into trace_marker 64K.

   One of the selftests looks at the size of the ring buffer sub-buffers
   and writes that amount plus more into trace_marker. The write will
   take what it can and report back how much it consumed so that the
   user space application (like echo) can write the rest of the string
   (a sketch of this partial-write handling follows this item). The
   string is stored in the ring buffer and can be read via the "trace"
   or "trace_pipe" files.

   Reading the ring buffer uses vsnprintf() with a "%.*s" precision to
   make sure it only reads what is actually stored in the buffer, as a
   bug could cause the string to be stored without a terminating nul
   byte (a second sketch after this item shows what that precision does).

   With the combination of the precision change and the PAGE_SIZE of 64K
   allowing huge strings to be added into the ring buffer, plus the test
   that would actually stress that limit, a bug was reported that the
   precision used was too big for "%.*s" as the string was close to 64K
   in size and the max precision of vsnprintf is 32K.

   Linus suggested not to have that precision as it could hide a bug if
   the string was again stored without a nul byte.

   Another issue that was brought up is that the trace_seq buffer is
   also sized from PAGE_SIZE even though it is not tied to the
   architecture limit the way the ring buffer sub-buffer is. Having it
   be 64K * 2 is simply too big and wastes memory on systems with 64K
   page sizes. It is now hardcoded to 8K, which is what architectures
   with a 4K PAGE_SIZE already had.

   Finally, the write to trace_marker is now limited to 4K as there is
   no reason to write larger strings into trace_marker.
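
   As a minimal user-space sketch of that partial-write handling (the
   tracefs path and the message are only illustrative; with these fixes
   each write() call is expected to consume at most 4K of the string):

      /* Write a string to trace_marker the way echo effectively does:
       * retry with whatever the previous write() did not consume. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      int main(void)
      {
          const char *msg = "hello from user space\n";  /* illustrative message */
          int fd = open("/sys/kernel/tracing/trace_marker", O_WRONLY);
          size_t len = strlen(msg);

          if (fd < 0) {
              perror("open");
              return 1;
          }
          while (len) {
              ssize_t ret = write(fd, msg, len);  /* may be a short write */
              if (ret < 0) {
                  perror("write");
                  close(fd);
                  return 1;
              }
              msg += ret;                         /* continue with the rest */
              len -= (size_t)ret;
          }
          close(fd);
          return 0;
      }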
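
   And a stand-alone illustration of what the "%.*s" precision buys (and
   why dropping it relies on the string being nul terminated): with a
   precision, printf-style functions read at most that many bytes, so an
   unterminated buffer is not over-read. With strings close to 64K that
   precision exceeded the 32K maximum vsnprintf() accepts, which is how
   the bug surfaced:

      /* Illustration only: "%.*s" bounds how many bytes are read, which
       * protects against a buffer stored without a terminating nul byte. */
      #include <stdio.h>

      int main(void)
      {
          char buf[6] = { 'm', 'a', 'r', 'k', 'e', 'r' };  /* no '\0' at the end */

          /* Reads at most sizeof(buf) bytes even though buf is unterminated. */
          printf("%.*s\n", (int)sizeof(buf), buf);
          return 0;
      }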

 - ring_buffer_wait() should not loop.

   ring_buffer_wait() does not (yet) have the full context to know
   whether it should loop. Just exit as soon as it is woken up and let
   the callers decide whether to loop (they already do, so the internal
   loop was redundant); a sketch of the caller-side pattern follows.
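
   A rough kernel-side sketch of that caller pattern (the "enough data"
   check is a made-up placeholder, not a real ring buffer function):

      #include <linux/ring_buffer.h>

      /* The caller owns the loop and the predicate; ring_buffer_wait()
       * just blocks until it is woken up once. */
      static int wait_for_trace_data(struct trace_buffer *buffer, int cpu, int full)
      {
          for (;;) {
              int ret = ring_buffer_wait(buffer, cpu, full);  /* returns on any wakeup */

              if (ret < 0)
                  return ret;                  /* e.g. interrupted by a signal */
              if (enough_data_available(buffer, cpu, full))   /* hypothetical check */
                  return 0;
              /* Woken without enough data: the caller decides to wait again. */
          }
      }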

 - Fix the shortest_full field to be the smallest amount in the ring
   buffer that a waiter is waiting for. The "shortest_full" field is
   updated when a new waiter comes in and wants to wait for a smaller
   amount of data in the ring buffer than the other waiters. But it was
   not reset after all waiters were woken up, so if another waiter came
   in wanting to wait for more data, it would be woken up when the ring
   buffer had a smaller amount than what the previous waiters were
   waiting for (see the sketch after this item).
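
   A heavily simplified sketch of the intended behaviour (only the
   shortest_full name comes from the real code; the rest is made up):

      /* Simplified illustration, not the ring buffer implementation. */
      struct rb_waiters {
          int shortest_full;   /* smallest "full" amount any current waiter needs */
      };

      /* A new waiter lowers shortest_full if it needs less data than the others. */
      static void waiter_arrives(struct rb_waiters *w, int full)
      {
          if (!w->shortest_full || full < w->shortest_full)
              w->shortest_full = full;
      }

      /* After waking all waiters, reset it; otherwise a later waiter asking
       * for more data would still be woken at the old, smaller threshold. */
      static void all_waiters_woken(struct rb_waiters *w)
      {
          w->shortest_full = 0;
      }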

 - The wake-up of all waiters on close was incorrectly done from
   .release() and not from .flush(), so it would never wake up any
   waiters, as .release() is not called until all .read() calls have
   finished. And the wakeup is for the waiters in those very .read()
   calls (see the sketch after this item).
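
   A minimal sketch of the shape of that fix (names are illustrative,
   not the actual tracing file ops): the wakeup has to happen from
   .flush(), which runs when a file descriptor is closed, rather than
   from .release(), which only runs after every reader has returned:

      #include <linux/fs.h>
      #include <linux/wait.h>

      /* Illustration only: readers block on this wait queue inside .read(). */
      static DECLARE_WAIT_QUEUE_HEAD(my_trace_waitq);

      static int my_trace_flush(struct file *file, fl_owner_t id)
      {
          /* Runs on close() while other tasks may still be blocked in
           * .read(); wake them so those reads can return. */
          wake_up(&my_trace_waitq);
          return 0;
      }

      static int my_trace_release(struct inode *inode, struct file *file)
      {
          /* Runs only after every .read() has returned, so a wakeup issued
           * here could never reach a reader that is still waiting. */
          return 0;
      }

      static const struct file_operations my_trace_fops = {
          .flush   = my_trace_flush,
          .release = my_trace_release,
          /* .read (not shown) would wait on my_trace_waitq */
      };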

* tag 'trace-ring-buffer-v6.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  tracing: Use .flush() call to wake up readers
  ring-buffer: Fix resetting of shortest_full
  ring-buffer: Fix waking up ring buffer readers
  tracing: Limit trace_marker writes to just 4K
  tracing: Limit trace_seq size to just 8K and not depend on architecture PAGE_SIZE
  tracing: Remove precision vsnprintf() check from print event
2024-03-10 11:53:21 -07:00
rv tracing/tools: Updates for 6.4 2023-04-28 16:11:26 -07:00
blktrace.c
bpf_trace.c Revert BPF token-related functionality 2023-12-19 08:23:03 -08:00
bpf_trace.h
error_report-traces.c
fgraph.c tracing: arm64: Avoid missing-prototype warnings 2023-07-12 12:06:04 -04:00
fprobe.c fprobe: Fix to allocate entry_data_size buffer with rethook instances 2024-03-01 09:18:24 +09:00
ftrace_internal.h tracing: arm64: Avoid missing-prototype warnings 2023-07-12 12:06:04 -04:00
ftrace.c tracing: Fix HAVE_DYNAMIC_FTRACE_WITH_REGS ifdef 2024-02-13 11:56:49 -05:00
Kconfig Probes updates for v6.5: 2023-06-30 10:44:53 -07:00
kprobe_event_gen_test.c tracing: Fix wrong return in kprobe_event_gen_test.c 2023-03-19 12:20:48 -04:00
Makefile tracing/probes: Move finding func-proto API and getting func-param API to trace_btf 2023-08-23 09:39:45 +09:00
pid_list.c
pid_list.h
power-traces.c
preemptirq_delay_test.c
rethook.c rethook: Use __rcu pointer for rethook::handler 2023-12-01 14:53:56 +09:00
ring_buffer_benchmark.c ring-buffer: Read and write to ring buffers with custom sub buffer size 2023-12-20 07:54:56 -05:00
ring_buffer.c ring-buffer: Fix resetting of shortest_full 2024-03-10 12:27:40 -04:00
rpm-traces.c
synth_event_gen_test.c tracing / synthetic: Disable events after testing in synth_event_gen_test_init() 2023-12-21 10:04:45 -05:00
trace_benchmark.c
trace_benchmark.h
trace_boot.c tracing: Allow creating instances with specified system events 2023-12-18 23:14:16 -05:00
trace_branch.c
trace_btf.c tracing/probes: Fix to search structure fields correctly 2024-02-17 21:25:42 +09:00
trace_btf.h tracing/probes: Add a function to search a member of a struct/union 2023-08-23 09:40:16 +09:00
trace_clock.c
trace_dynevent.c
trace_dynevent.h
trace_entries.h tracing: Add back FORTIFY_SOURCE logic to kernel_stack event structure 2023-07-30 18:11:44 -04:00
trace_eprobe.c tracing/eprobe: drop unneeded breaks 2023-10-10 01:03:48 +09:00
trace_event_perf.c
trace_events_filter_test.h
trace_events_filter.c tracing: Have trace_event_file have ref counters 2023-11-01 23:44:44 -04:00
trace_events_hist.c tracing histograms: Simplify parse_actions() function 2024-01-08 13:24:56 -05:00
trace_events_inject.c tracing: Have event inject files inc the trace array ref count 2023-09-07 16:38:54 -04:00
trace_events_synth.c tracing/synthetic: Fix trace_string() return value 2024-02-15 11:40:01 -05:00
trace_events_trigger.c tracing/trigger: Fix to return error if failed to alloc snapshot 2024-01-26 15:10:24 -05:00
trace_events_user.c iov_iter: replace import_single_range() with import_ubuf() 2023-12-05 11:57:37 +01:00
trace_events.c ring-buffer: Page size per ring buffer 2023-12-20 07:54:55 -05:00
trace_export.c tracing: Add back FORTIFY_SOURCE logic to kernel_stack event structure 2023-07-30 18:11:44 -04:00
trace_fprobe.c tracing: fprobe-event: Fix to check tracepoint event and return 2023-11-10 20:06:12 +09:00
trace_functions_graph.c function_graph: Support recording and printing the return value of function 2023-06-20 18:38:37 -04:00
trace_functions.c
trace_hwlat.c tracing: Remove extra space at the end of hwlat_detector/mode 2023-09-01 21:00:00 -04:00
trace_irqsoff.c tracing: Fix memleak due to race between current_tracer and trace 2023-08-17 13:49:37 -04:00
trace_kdb.c
trace_kprobe_selftest.c tracing: arm64: Avoid missing-prototype warnings 2023-07-12 12:06:04 -04:00
trace_kprobe_selftest.h
trace_kprobe.c trace/kprobe: Display the actual notrace function when rejecting a probe 2023-12-17 16:28:54 +09:00
trace_mmiotrace.c
trace_nop.c
trace_osnoise.c tracing/timerlat: Move hrtimer_init to timerlat_fd open() 2024-02-01 11:50:13 -05:00
trace_output.c tracing: Remove precision vsnprintf() check from print event 2024-03-06 13:26:26 -05:00
trace_output.h tracing: Add "fields" option to show raw trace event fields 2023-03-29 06:52:08 -04:00
trace_preemptirq.c
trace_printk.c
trace_probe_kernel.h tracing/probes: Fix to record 0-length data_loc in fetch_store_string*() if fails 2023-07-14 17:04:58 +09:00
trace_probe_tmpl.h tracing/probes: Fix to record 0-length data_loc in fetch_store_string*() if fails 2023-07-14 17:04:58 +09:00
trace_probe.c tracing/probes: Fix to set arg size and fmt after setting type from BTF 2024-02-08 23:26:25 +09:00
trace_probe.h tracing/probes: Fix to show a parse error for bad type for $comm 2024-02-08 23:26:13 +09:00
trace_recursion_record.c
trace_sched_switch.c
trace_sched_wakeup.c tracing: Fix memleak due to race between current_tracer and trace 2023-08-17 13:49:37 -04:00
trace_selftest_dynamic.c
trace_selftest.c tracing: Have function_graph selftest call cond_resched() 2023-05-28 21:15:46 -04:00
trace_seq.c trace_seq: Increase the buffer size to almost two pages 2023-12-18 23:14:16 -05:00
trace_stack.c
trace_stat.c
trace_stat.h
trace_synth.h
trace_syscalls.c bpf: Change syscall_nr type to int in struct syscall_tp_t 2023-10-13 12:39:36 -07:00
trace_uprobe.c tracing/uprobe: Replace strlcpy() with strscpy() 2023-12-01 10:25:35 -08:00
trace.c tracing: Use .flush() call to wake up readers 2024-03-10 12:27:47 -04:00
trace.h ring-buffer: Page size per ring buffer 2023-12-20 07:54:55 -05:00
tracing_map.c tracing: Ensure visibility when inserting an element into tracing_map 2024-01-22 17:15:40 -05:00
tracing_map.h tracing: Remove unused extern declaration tracing_map_set_field_descr() 2023-07-23 11:08:14 -04:00