# Architectures that offer a FUNCTION_TRACER implementation should
#  select HAVE_FUNCTION_TRACER:
config USER_STACKTRACE_SUPPORT
	bool

config HAVE_FTRACE_NMI_ENTER
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_FUNCTION_TRACER
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_FUNCTION_GRAPH_TRACER
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_FUNCTION_GRAPH_FP_TEST
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_DYNAMIC_FTRACE
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_DYNAMIC_FTRACE_WITH_REGS
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_SYSCALL_TRACEPOINTS
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_FENTRY
	bool
	help
	  Arch supports the gcc options -pg with -mfentry

config HAVE_C_RECORDMCOUNT
	bool
	help
	  C version of recordmcount available?
config TRACER_MAX_TRACE
	bool

config FTRACE_NMI_ENTER
	bool
	depends on HAVE_FTRACE_NMI_ENTER
	default y

config EVENT_TRACING
	select CONTEXT_SWITCH_TRACER
	bool

config CONTEXT_SWITCH_TRACER
	bool

config RING_BUFFER_ALLOW_SWAP
	bool
	help
	  Allow the use of ring_buffer_swap_cpu.
	  Adds a very slight overhead to tracing when enabled.
# All tracer options should select GENERIC_TRACER. Options that are enabled
# by all tracers (context switch and event tracer) select TRACING instead.
# This allows those options to appear when no other tracer is selected, but
# the options do not appear when something else selects them. We need the two
# options GENERIC_TRACER and TRACING to avoid circular dependencies and to
# accomplish the hiding of the automatic options.
config TRACING
	bool
	select STACKTRACE if STACKTRACE_SUPPORT

config GENERIC_TRACER
	bool
	select TRACING

#
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
#
config TRACING_SUPPORT
	bool
	# PPC32 has no irqflags tracing support, but it can use most of the
	# tracers anyway, they were tested to build and work. Note that new
	# exceptions to this list aren't welcomed, better implement the
	# irqflags tracing for your architecture.
	depends on TRACE_IRQFLAGS_SUPPORT || PPC32
	depends on STACKTRACE_SUPPORT
	default y

if TRACING_SUPPORT

menuconfig FTRACE
	bool "Tracers"
	default y if DEBUG_KERNEL
	help
	  Enable the kernel tracing infrastructure.
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function, which NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it's runtime disabled
	  (the bootup default), then the overhead of the instructions is very
	  small and not measurable even in micro-benchmarks.
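As a usage sketch of the runtime interface described above (a hedged example: it assumes debugfs is mounted at /sys/kernel/debug and the shell runs as root; the guard makes it a no-op elsewhere):

```shell
# Enable the function tracer, peek at the trace, then restore the default.
t=/sys/kernel/debug/tracing
if [ -w "$t/current_tracer" ]; then
    echo function > "$t/current_tracer"   # patch the NOP sites into tracer calls
    head -n 20 "$t/trace"                 # show a few recorded trace lines
    echo nop > "$t/current_tracer"        # patch the call sites back to NOPs
fi
```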
config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	depends on !X86_32 || !CC_OPTIMIZE_FOR_SIZE
	help
	  Enable the kernel to trace a function at both its return
	  and its entry.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread with some information like
	  the return value. This is done by setting the current return
	  address on the current task structure into a stack of calls.
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on !ARCH_USES_GETTIMEOFFSET
	select TRACE_IRQFLAGS
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	select TRACER_SNAPSHOT
	select TRACER_SNAPSHOT_PER_CPU_SWAP
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
config INTERRUPT_OFF_HIST
	bool "Interrupts-off Latency Histogram"
	depends on IRQSOFF_TRACER
	help
	  This option generates continuously updated histograms (one per cpu)
	  of the duration of time periods with interrupts disabled. The
	  histograms are disabled by default. To enable them, write a non-zero
	  number to

	      /sys/kernel/debug/tracing/latency_hist/enable/preemptirqsoff

	  If PREEMPT_OFF_HIST is also selected, additional histograms (one
	  per cpu) are generated that accumulate the duration of time periods
	  when both interrupts and preemption are disabled. The histogram data
	  will be located in the debug file system at

	      /sys/kernel/debug/tracing/latency_hist/irqsoff
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	depends on !ARCH_USES_GETTIMEOFFSET
	depends on PREEMPT
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	select TRACER_SNAPSHOT
	select TRACER_SNAPSHOT_PER_CPU_SWAP
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
config PREEMPT_OFF_HIST
	bool "Preemption-off Latency Histogram"
	depends on PREEMPT_TRACER
	help
	  This option generates continuously updated histograms (one per cpu)
	  of the duration of time periods with preemption disabled. The
	  histograms are disabled by default. To enable them, write a non-zero
	  number to

	      /sys/kernel/debug/tracing/latency_hist/enable/preemptirqsoff

	  If INTERRUPT_OFF_HIST is also selected, additional histograms (one
	  per cpu) are generated that accumulate the duration of time periods
	  when both interrupts and preemption are disabled. The histogram data
	  will be located in the debug file system at

	      /sys/kernel/debug/tracing/latency_hist/preemptoff
config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	select TRACER_SNAPSHOT
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
config WAKEUP_LATENCY_HIST
	bool "Scheduling Latency Histogram"
	depends on SCHED_TRACER
	help
	  This option generates continuously updated histograms (one per cpu)
	  of the scheduling latency of the highest priority task.
	  The histograms are disabled by default. To enable them, write a
	  non-zero number to

	      /sys/kernel/debug/tracing/latency_hist/enable/wakeup

	  Two different algorithms are used, one to determine the latency of
	  processes that exclusively use the highest priority of the system and
	  another one to determine the latency of processes that share the
	  highest system priority with other processes. The former is used to
	  improve hardware and system software, the latter to optimize the
	  priority design of a given system. The histogram data will be
	  located in the debug file system at

	      /sys/kernel/debug/tracing/latency_hist/wakeup

	  and

	      /sys/kernel/debug/tracing/latency_hist/wakeup/sharedprio

	  If both Scheduling Latency Histogram and Missed Timer Offsets
	  Histogram are selected, additional histogram data will be collected
	  that contain, in addition to the wakeup latency, the timer latency, in
	  case the wakeup was triggered by an expired timer. These histograms
	  are available in the

	      /sys/kernel/debug/tracing/latency_hist/timerandwakeup

	  directory. They reflect the apparent interrupt and scheduling latency
	  and are best suited to determine the worst-case latency of a given
	  system. To enable these histograms, write a non-zero number to

	      /sys/kernel/debug/tracing/latency_hist/enable/timerandwakeup
config MISSED_TIMER_OFFSETS_HIST
	depends on HIGH_RES_TIMERS
	select GENERIC_TRACER
	bool "Missed Timer Offsets Histogram"
	help
	  Generate a histogram of missed timer offsets in microseconds. The
	  histograms are disabled by default. To enable them, write a non-zero
	  number to

	      /sys/kernel/debug/tracing/latency_hist/enable/missed_timer_offsets

	  The histogram data will be located in the debug file system at

	      /sys/kernel/debug/tracing/latency_hist/missed_timer_offsets

	  If both Scheduling Latency Histogram and Missed Timer Offsets
	  Histogram are selected, additional histogram data will be collected
	  that contain, in addition to the wakeup latency, the timer latency, in
	  case the wakeup was triggered by an expired timer. These histograms
	  are available in the

	      /sys/kernel/debug/tracing/latency_hist/timerandwakeup

	  directory. They reflect the apparent interrupt and scheduling latency
	  and are best suited to determine the worst-case latency of a given
	  system. To enable these histograms, write a non-zero number to

	      /sys/kernel/debug/tracing/latency_hist/enable/timerandwakeup
config ENABLE_DEFAULT_TRACERS
	bool "Trace process context switches and events"
	depends on !GENERIC_TRACER
	select TRACING
	help
	  This tracer hooks to various trace points in the kernel,
	  allowing the user to pick and choose which trace point they
	  want to trace. It also includes the sched_switch tracer plugin.

config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_SYSCALL_TRACEPOINTS
	select GENERIC_TRACER
	help
	  Basic tracer to catch the syscall entry and exit events.
config TRACER_SNAPSHOT
	bool "Create a snapshot trace buffer"
	select TRACER_MAX_TRACE
	help
	  Allow tracing users to take snapshot of the current buffer using the
	  ftrace interface, e.g.:

	      echo 1 > /sys/kernel/debug/tracing/snapshot
	      cat snapshot

config TRACER_SNAPSHOT_PER_CPU_SWAP
	bool "Allow snapshot to swap per CPU"
	depends on TRACER_SNAPSHOT
	select RING_BUFFER_ALLOW_SWAP
	help
	  Allow doing a snapshot of a single CPU buffer instead of a
	  full swap (all buffers). If this is set, then the following is
	  allowed:

	      echo 1 > /sys/kernel/debug/tracing/per_cpu/cpu2/snapshot

	  After which, only the tracing buffer for CPU 2 is swapped with
	  the main tracing buffer, and the other CPU buffers remain the same.

	  When this is enabled, this adds a little more overhead to the
	  trace recording, as it needs to add some checks to synchronize
	  recording with swaps. But this does not affect the performance
	  of the overall system. This is enabled by default when the preempt
	  or irq latency tracers are enabled, as those need to swap as well
	  and already add the overhead (plus a lot more).
config TRACE_BRANCH_PROFILING
	bool
	select GENERIC_TRACER

choice
	prompt "Branch Profiling"
	default BRANCH_PROFILE_NONE
	help
	  The branch profiling is a software profiler. It will add hooks
	  into the C conditionals to test which path a branch takes.

	  The likely/unlikely profiler only looks at the conditions that
	  are annotated with a likely or unlikely macro.

	  The "all branch" profiler will profile every if-statement in the
	  kernel. This profiler will also enable the likely/unlikely
	  profilers.

	  Either of the above profilers adds a bit of overhead to the system.
	  If unsure, choose "No branch profiling".

config BRANCH_PROFILE_NONE
	bool "No branch profiling"
	help
	  No branch profiling. Branch profiling adds a bit of overhead.
	  Only enable it if you want to analyse the branching behavior.
	  Otherwise keep it disabled.

config PROFILE_ANNOTATED_BRANCHES
	bool "Trace likely/unlikely profiler"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all likely and unlikely macros
	  in the kernel. It will display the results in:

	      /sys/kernel/debug/tracing/trace_stat/branch_annotated

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded whether it hit or missed.
	  The results will be displayed in:

	      /sys/kernel/debug/tracing/trace_stat/branch_all

	  This option also enables the likely/unlikely profiler.

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed in much detail.

endchoice
config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelys and unlikelys are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.
config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	select FUNCTION_TRACER
	select STACKTRACE
	select KALLSYMS
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in /sys/kernel/debug/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled

	  Say N if unsure.
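A runtime sketch of the sysctl interface mentioned above (assumes root and debugfs at /sys/kernel/debug; the guard makes it a no-op on kernels without the stack tracer):

```shell
# Toggle the stack tracer via sysctl and read the recorded worst case.
t=/sys/kernel/debug/tracing
if [ -e /proc/sys/kernel/stack_tracer_enabled ]; then
    sysctl -w kernel.stack_tracer_enabled=1
    cat "$t/stack_max_size"    # deepest stack usage seen so far, in bytes
    cat "$t/stack_trace"       # the call chain that produced it
    sysctl -w kernel.stack_tracer_enabled=0
fi
```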
config BLK_DEV_IO_TRACE
	bool "Support for tracing block IO actions"
	select GENERIC_TRACER
	help
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue. Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	      git://git.kernel.dk/blktrace.git

	  Tracing also is possible using the ftrace interface, e.g.:

	      echo 1 > /sys/block/sda/sda1/trace/enable
	      echo blk > /sys/kernel/debug/tracing/current_tracer
	      cat /sys/kernel/debug/tracing/trace_pipe

	  If unsure, say N.
config KPROBE_EVENT
	depends on KPROBES
	depends on HAVE_REGS_AND_STACK_ACCESS_API
	bool "Enable kprobes-based dynamic events"
	select TRACING
	default y
	help
	  This allows the user to add tracing events (similar to tracepoints)
	  on the fly via the ftrace interface. See
	  Documentation/trace/kprobetrace.txt for more details.

	  Those events can be inserted wherever kprobes can probe, and record
	  various register and memory values.

	  This option is also required by perf-probe subcommand of perf tools.
	  If you want to use perf tools, this option is strongly recommended.
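A sketch of the kprobe_events interface described in Documentation/trace/kprobetrace.txt (assumes root, debugfs at /sys/kernel/debug, and that do_sys_open exists as a probe point on this kernel; guarded to be a no-op elsewhere):

```shell
# Define a kprobe event on do_sys_open, enable it briefly, then remove it.
t=/sys/kernel/debug/tracing
if [ -w "$t/kprobe_events" ]; then
    echo 'p:myprobe do_sys_open' > "$t/kprobe_events"     # add probe "myprobe"
    echo 1 > "$t/events/kprobes/myprobe/enable"           # start recording hits
    head -n 10 "$t/trace"                                 # show recorded events
    echo 0 > "$t/events/kprobes/myprobe/enable"
    echo '-:myprobe' > "$t/kprobe_events"                 # delete the probe
fi
```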
config UPROBE_EVENT
	bool "Enable uprobes-based dynamic events"
	depends on ARCH_SUPPORTS_UPROBES
	depends on MMU
	depends on PERF_EVENTS
	select UPROBES
	select TRACING
	default n
	help
	  This allows the user to add tracing events on top of userspace
	  dynamic events (similar to tracepoints) on the fly via the trace
	  events interface. Those events can be inserted wherever uprobes
	  can probe, and record various registers.
	  This option is required if you plan to use perf-probe subcommand
	  of perf tools on user space applications.
config BPF_EVENTS
	depends on BPF_SYSCALL
	depends on (KPROBE_EVENT || UPROBE_EVENT) && PERF_EVENTS
	bool
	default y
	help
	  This allows the user to attach BPF programs to kprobe events.
config DYNAMIC_FTRACE
	bool "enable/disable function tracing dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	default y
	help
	  This option will modify all the calls to function tracing
	  dynamically (will patch them out of the binary image and
	  replace them with a No-Op instruction) on boot up. During
	  compile time, a table is made of all the locations that ftrace
	  can function trace, and this table is linked into the kernel
	  image. When this is enabled, functions can be individually
	  enabled, and the functions not enabled will not affect
	  performance of the system.

	  See the files in /sys/kernel/debug/tracing:
	      available_filter_functions
	      enabled_functions
	      set_ftrace_filter
	      set_ftrace_notrace

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.
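A sketch of per-function enabling through the filter files named above (assumes root and debugfs at /sys/kernel/debug; the `vfs_*` glob is just an illustrative choice):

```shell
# Restrict function tracing to the vfs_* functions before enabling it.
t=/sys/kernel/debug/tracing
if [ -w "$t/set_ftrace_filter" ]; then
    echo 'vfs_*' > "$t/set_ftrace_filter"   # glob match against function names
    echo function > "$t/current_tracer"
    head -n 10 "$t/trace"                   # only vfs_* entries should appear
    echo nop > "$t/current_tracer"
    echo > "$t/set_ftrace_filter"           # clear the filter again
fi
```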
config DYNAMIC_FTRACE_WITH_REGS
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_DYNAMIC_FTRACE_WITH_REGS
config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on FUNCTION_TRACER
	default n
	help
	  This option enables the kernel function profiler. A file is created
	  in debugfs called function_profile_enabled which defaults to zero.
	  When a 1 is echoed into this file profiling begins, and when a
	  zero is entered, profiling stops. A "functions" file is created in
	  the trace_stats directory; this file shows the list of functions that
	  have been hit and their counters.

	  If in doubt, say N.
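The echo-driven workflow above can be sketched as follows (assumes root and debugfs at /sys/kernel/debug; guarded to be a no-op without the profiler):

```shell
# Profile for a few seconds, then read the per-CPU function statistics.
t=/sys/kernel/debug/tracing
if [ -w "$t/function_profile_enabled" ]; then
    echo 1 > "$t/function_profile_enabled"   # start profiling
    sleep 3                                  # let some workload run
    echo 0 > "$t/function_profile_enabled"   # stop profiling
    head -n 15 "$t/trace_stat/function0"     # hit counts for CPU 0
fi
```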
config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD
config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on GENERIC_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.

config EVENT_TRACE_TEST_SYSCALLS
	bool "Run selftest on syscall events"
	depends on FTRACE_STARTUP_TEST
	help
	  This option will also enable testing every syscall event.
	  It only enables the event and disables it and runs various loads
	  with the event enabled. This adds a bit more time for kernel boot
	  up since it runs this on every system call defined.

	  TBD - enable a way to actually call the syscalls as we test their
	  events
config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && PCI
	select GENERIC_TRACER
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/trace/mmiotrace.txt.
	  If you are not helping to develop drivers, say N.

config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on e.g. unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.
config TRACEPOINT_BENCHMARK
	bool "Add tracepoint that benchmarks tracepoints"
	help
	  This option creates the tracepoint "benchmark:benchmark_event".
	  When the tracepoint is enabled, it kicks off a kernel thread that
	  goes into an infinite loop (calling cond_resched() to let other tasks
	  run), and calls the tracepoint. Each iteration will record the time
	  it took to write to the tracepoint and the next iteration that
	  data will be passed to the tracepoint itself. That is, the tracepoint
	  will report the time it took to do the previous tracepoint.
	  The string written to the tracepoint is a static string of 128 bytes
	  to keep the time the same. The initial string is simply a write of
	  "START". The second string records the cold cache time of the first
	  write which is not added to the rest of the calculations.

	  As it is a tight loop, it benchmarks as hot cache. That's fine because
	  we care most about hot paths that are probably in cache already.

	  An example of the output:

	      START
	      first=3672 [COLD CACHED]
	      last=632 first=3672 max=632 min=632 avg=316 std=446 std^2=199712
	      last=278 first=3672 max=632 min=278 avg=303 std=316 std^2=100337
	      last=277 first=3672 max=632 min=277 avg=296 std=258 std^2=67064
	      last=273 first=3672 max=632 min=273 avg=292 std=224 std^2=50411
	      last=273 first=3672 max=632 min=273 avg=288 std=200 std^2=40389
	      last=281 first=3672 max=632 min=273 avg=287 std=183 std^2=33666
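The `key=value` layout of the output above splits cleanly with standard shell tools; a small sketch using the first sample line from the example:

```shell
# Pull the running average (avg=...) out of one benchmark output line.
line='last=632 first=3672 max=632 min=632 avg=316 std=446 std^2=199712'
avg=$(printf '%s\n' "$line" | tr ' ' '\n' | sed -n 's/^avg=//p')
echo "avg=${avg}"
```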
config RING_BUFFER_BENCHMARK
	tristate "Ring buffer benchmark stress tester"
	depends on RING_BUFFER
	help
	  This option creates a test to stress the ring buffer and benchmark it.
	  It creates its own ring buffer such that it will not interfere with
	  any other users of the ring buffer (such as ftrace). It then creates
	  a producer and consumer that will run for 10 seconds and sleep for
	  10 seconds. Each interval it will print out the number of events
	  it recorded and give a rough estimate of how long each iteration took.

	  It does not disable interrupts or raise its priority, so it may be
	  affected by processes that are running.

	  If unsure, say N.
config RING_BUFFER_STARTUP_TEST
	bool "Ring buffer startup self test"
	depends on RING_BUFFER
	help
	  Run a simple self test on the ring buffer on boot up. Late in the
	  kernel boot sequence, a test is run that kicks off
	  a thread per cpu. Each thread will write various size events
	  into the ring buffer. Another thread is created to send IPIs
	  to each of the threads, where the IPI handler will also write
	  to the ring buffer, to test/stress the nesting ability.
	  If any anomalies are discovered, a warning will be displayed
	  and all ring buffers will be disabled.

	  The test runs for 10 seconds. This will slow your boot time
	  by at least 10 more seconds.

	  At the end of the test, statistics and more checks are done.
	  It will output the stats of each per cpu buffer: what
	  was written, the sizes, what was read, what was lost, and
	  other similar details.

	  If unsure, say N
config TRACE_ENUM_MAP_FILE
	bool "Show enum mappings for trace events"
	depends on TRACING
	help
	  The "print fmt" of the trace events will show the enum names instead
	  of their values. This can cause problems for user space tools that
	  use this string to parse the raw data as user space does not know
	  how to convert the string to its value.

	  To fix this, there's a special macro in the kernel that can be used
	  to convert the enum into its value. If this macro is used, then the
	  print fmt strings will have the enums converted to their values.

	  If something does not get converted properly, this option can be
	  used to show what enums the kernel tried to convert.

	  This option is for debugging the enum conversions. A file is created
	  in the tracing directory called "enum_map" that will show the enum
	  names matched with their values and what trace event system they
	  belong to.

	  Normally, the mapping of the strings to values will be freed after
	  boot up or module load. With this option, they will not be freed, as
	  they are needed for the "enum_map" file. Enabling this option will
	  increase the memory footprint of the running kernel.

	  If unsure, say N
config TRACING_EVENTS_GPIO
	bool "Trace gpio events"
	depends on GPIOLIB
	default y
	help
	  Enable tracing events for gpio subsystem

endif # TRACING_SUPPORT