# General architecture dependent options

config OPROFILE
	tristate "OProfile system profiling"
	depends on HAVE_OPROFILE
	depends on !PREEMPT_RT_FULL
	select RING_BUFFER_ALLOW_SWAP
	help
	  OProfile is a profiling system capable of profiling the
	  whole system, including the kernel, kernel modules, libraries,
	  and applications.

config OPROFILE_EVENT_MULTIPLEX
	bool "OProfile multiplexing support (EXPERIMENTAL)"
	depends on OPROFILE && X86
	help
	  The number of hardware counters is limited. The multiplexing
	  feature enables OProfile to gather more events than the
	  hardware provides counters for. This is realized by switching
	  between events at a user-specified time interval.

config OPROFILE_NMI_TIMER
	def_bool y
	depends on PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !PPC64

config KPROBES
	bool "Kprobes"
	depends on HAVE_KPROBES
	help
	  Kprobes allows you to trap at almost any kernel address and
	  execute a callback function. register_kprobe() establishes
	  a probepoint and specifies the callback. Kprobes is useful
	  for kernel debugging, non-intrusive instrumentation and testing.

config JUMP_LABEL
	bool "Optimize very unlikely/likely branches"
	depends on HAVE_ARCH_JUMP_LABEL
	depends on (!INTERRUPT_OFF_HIST && !PREEMPT_OFF_HIST && !WAKEUP_LATENCY_HIST && !MISSED_TIMER_OFFSETS_HIST)
	help
	  This option enables a transparent branch optimization that
	  makes certain almost-always-true or almost-always-false branch
	  conditions even cheaper to execute within the kernel.

	  Certain performance-sensitive kernel code, such as trace points,
	  scheduler functionality, networking code and KVM have such
	  branches and include support for this optimization technique.

	  If it is detected that the compiler has support for "asm goto",
	  the kernel will compile such branches with just a nop
	  instruction. When the condition flag is toggled to true, the
	  nop will be converted to a jump instruction to execute the
	  conditional block of instructions.

	  This technique lowers overhead and stress on the branch prediction
	  of the processor and generally makes the kernel faster. Updating
	  the condition is slower, but such updates are very rare.

	  ( On 32-bit x86, the necessary options added to the compiler
	    flags may increase the size of the kernel slightly. )

config STATIC_KEYS_SELFTEST
	bool "Static key selftest"
	depends on JUMP_LABEL
	help
	  Boot time self-test of the branch patching code.

config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES

config KPROBES_ON_FTRACE
	def_bool y
	depends on KPROBES && HAVE_KPROBES_ON_FTRACE
	depends on DYNAMIC_FTRACE_WITH_REGS
	help
	  If function tracer is enabled and the arch supports full
	  passing of pt_regs to function tracing, then kprobes can
	  optimize on top of function tracing.

config UPROBES
	def_bool n
	help
	  Uprobes is the user-space counterpart to kprobes: they
	  enable instrumentation applications (such as 'perf probe')
	  to establish unintrusive probes in user-space binaries and
	  libraries, by executing handler functions when the probes
	  are hit by user-space applications.

	  ( These probes come in the form of single-byte breakpoints,
	    managed by the kernel and kept transparent to the probed
	    application. )

config HAVE_64BIT_ALIGNED_ACCESS
	def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
	help
	  Some architectures require 64 bit accesses to be 64 bit
	  aligned, which also requires structs containing 64 bit values
	  to be 64 bit aligned too. This includes some 32 bit
	  architectures which can do 64 bit accesses, as well as 64 bit
	  architectures without unaligned access.

	  This symbol should be selected by an architecture if 64 bit
	  accesses are required to be 64 bit aligned in this way even
	  though it is not a 64 bit architecture.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.

config HAVE_EFFICIENT_UNALIGNED_ACCESS
	bool
	help
	  Some architectures are unable to perform unaligned accesses
	  without the use of get_unaligned/put_unaligned. Others are
	  unable to perform such accesses efficiently (e.g. trap on
	  unaligned access and require fixing it up in the exception
	  handler or similar).

	  This symbol should be selected by an architecture if it can
	  perform unaligned accesses efficiently to allow different
	  code paths to be selected for these cases. Some network
	  drivers, for example, could opt to not fix up alignment
	  problems with received packets if doing so would not help
	  them.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.

config ARCH_USE_BUILTIN_BSWAP
	bool
	help
	  Modern versions of GCC (since 4.4) have builtin functions
	  for handling byte-swapping. Using these, instead of the old
	  inline assembler that the architecture code provides in the
	  __arch_bswapXX() macros, allows the compiler to see what's
	  happening and offers more opportunity for optimisation. In
	  particular, the compiler will be able to combine the byteswap
	  with a nearby load or store and use load-and-swap or
	  store-and-swap instructions if the architecture has them. It
	  should almost *never* result in code which is worse than the
	  hand-coded assembler in <asm/swab.h>. But just in case it
	  does, the use of the builtins is optional.

	  Any architecture with load-and-swap or store-and-swap
	  instructions should set this. And it shouldn't hurt to set it
	  on architectures that don't have such instructions.

config KRETPROBES
	def_bool y
	depends on KPROBES && HAVE_KRETPROBES

config USER_RETURN_NOTIFIER
	bool
	depends on HAVE_USER_RETURN_NOTIFIER
	help
	  Provide a kernel-internal notification when a cpu is about to
	  switch to user mode.

config HAVE_IOREMAP_PROT
	bool

config HAVE_KRETPROBES
	bool

config HAVE_OPTPROBES
	bool

config HAVE_KPROBES_ON_FTRACE
	bool

config HAVE_NMI_WATCHDOG
	bool

#
# An arch should select this if it provides all these things:
#
#	task_pt_regs()		in asm/processor.h or asm/ptrace.h
#	arch_has_single_step()	if there is hardware single-step support
#	arch_has_block_step()	if there is hardware block-step support
#	asm/syscall.h		supplying asm-generic/syscall.h interface
#	linux/regset.h		user_regset interfaces
#	CORE_DUMP_USE_REGSET	#define'd in linux/elf.h
#	TIF_SYSCALL_TRACE	calls tracehook_report_syscall_{entry,exit}
#	TIF_NOTIFY_RESUME	calls tracehook_notify_resume()
#	signal delivery		calls tracehook_signal_handler()
#
config HAVE_ARCH_TRACEHOOK
	bool

config HAVE_DMA_ATTRS
	bool

config HAVE_DMA_CONTIGUOUS
	bool

config GENERIC_SMP_IDLE_THREAD
	bool

config GENERIC_IDLE_POLL_SETUP
	bool

# Select if arch init_task initializer is different to init/init_task.c
config ARCH_INIT_TASK
	bool

# Select if arch has its private alloc_task_struct() function
config ARCH_TASK_STRUCT_ALLOCATOR
	bool

# Select if arch has its private alloc_thread_info() function
config ARCH_THREAD_INFO_ALLOCATOR
	bool

# Select if arch wants to size task_struct dynamically via arch_task_struct_size:
config ARCH_WANTS_DYNAMIC_TASK_STRUCT
	bool

config HAVE_REGS_AND_STACK_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access registers and stack entries from pt_regs,
	  declared in asm/ptrace.h.
	  For example the kprobes-based event tracer needs this API.

config HAVE_CLK
	bool
	help
	  The <linux/clk.h> calls support software clock gating and
	  thus are a key power management tool on many systems.

config HAVE_DMA_API_DEBUG
	bool

config HAVE_HW_BREAKPOINT
	bool
	depends on PERF_EVENTS

config HAVE_MIXED_BREAKPOINTS_REGS
	bool
	depends on HAVE_HW_BREAKPOINT
	help
	  Depending on the arch implementation of hardware breakpoints,
	  some of them have separate registers for data and instruction
	  breakpoints addresses, others have mixed registers to store
	  them but define the access type in a control register.
	  Select this option if your arch implements breakpoints under the
	  second type (mixed register).

config HAVE_USER_RETURN_NOTIFIER
	bool

config HAVE_PERF_EVENTS_NMI
	bool
	help
	  System hardware can generate an NMI using the perf event
	  subsystem. Also has support for calculating CPU cycle events
	  to determine how many clock cycles in a given period.

config HAVE_PERF_REGS
	bool
	help
	  Support selective register dumps for perf events. This includes
	  bit-mapping of each register and a unique architecture id.

config HAVE_PERF_USER_STACK_DUMP
	bool
	help
	  Support user stack dumps for perf event samples. This needs
	  access to the user stack pointer which is not unified across
	  architectures.

config HAVE_ARCH_JUMP_LABEL
	bool

config HAVE_RCU_TABLE_FREE
	bool

config ARCH_HAVE_NMI_SAFE_CMPXCHG
	bool

config HAVE_ALIGNED_STRUCT_PAGE
	bool
	help
	  This makes sure that struct pages are double word aligned and that
	  e.g. the SLUB allocator can perform double word atomic operations
	  on a struct page for better performance. However selecting this
	  might increase the size of a struct page by a word.

config HAVE_CMPXCHG_LOCAL
	bool

config HAVE_CMPXCHG_DOUBLE
	bool

config ARCH_WANT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_OLD_COMPAT_IPC
	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config HAVE_ARCH_SECCOMP_FILTER
	bool
	help
	  An arch should select this symbol if it provides all of these things:
	  - syscall_get_arch()
	  - syscall_get_arguments()
	  - syscall_rollback()
	  - syscall_set_return_value()
	  - SIGSYS siginfo_t support
	  - secure_computing is called from a ptrace_event()-safe context
	  - secure_computing return value is checked and a return value of -1
	    results in the system call being skipped immediately.
	  - seccomp syscall wired up

	  For best performance, an arch should use seccomp_phase1 and
	  seccomp_phase2 directly. It should call seccomp_phase1 for all
	  syscalls if TIF_SECCOMP is set, but seccomp_phase1 does not
	  need to be called from a ptrace-safe context. It must then
	  call seccomp_phase2 if seccomp_phase1 returns anything other
	  than SECCOMP_PHASE1_OK or SECCOMP_PHASE1_SKIP.

	  As an additional optimization, an arch may provide seccomp_data
	  directly to seccomp_phase1; this avoids multiple calls
	  to the syscall_xyz helpers for every syscall.

config SECCOMP_FILTER
	def_bool y
	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
	help
	  Enable tasks to build secure computing environments defined
	  in terms of Berkeley Packet Filter programs which implement
	  task-defined system call filtering policies.

	  See Documentation/prctl/seccomp_filter.txt for details.

config HAVE_CC_STACKPROTECTOR
	bool
	help
	  An arch should select this symbol if:
	  - its compiler supports the -fstack-protector option
	  - it has implemented a stack canary (e.g. __stack_chk_guard)

config CC_STACKPROTECTOR
	def_bool n
	help
	  Set when a stack-protector mode is enabled, so that the build
	  can enable kernel-side support for the GCC feature.

choice
	prompt "Stack Protector buffer overflow detection"
	depends on HAVE_CC_STACKPROTECTOR
	default CC_STACKPROTECTOR_NONE
	help
	  This option turns on the "stack-protector" GCC feature. This
	  feature puts, at the beginning of functions, a canary value on
	  the stack just before the return address, and validates
	  the value just before actually returning. Stack based buffer
	  overflows (that need to overwrite this return address) now also
	  overwrite the canary, which gets detected and the attack is then
	  neutralized via a kernel panic.

config CC_STACKPROTECTOR_NONE
	bool "None"
	help
	  Disable "stack-protector" GCC feature.

config CC_STACKPROTECTOR_REGULAR
	bool "Regular"
	select CC_STACKPROTECTOR
	help
	  Functions will have the stack-protector canary logic added if they
	  have an 8-byte or larger character array on the stack.

	  This feature requires gcc version 4.2 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 3% of all kernel functions, which increases kernel code size
	  by about 3%.

config CC_STACKPROTECTOR_STRONG
	bool "Strong"
	select CC_STACKPROTECTOR
	help
	  Functions will have the stack-protector canary logic added in any
	  of the following conditions:

	  - local variable's address used as part of the right hand side of an
	    assignment or function argument
	  - local variable is an array (or union containing an array),
	    regardless of array type or length
	  - uses register local variables

	  This feature requires gcc version 4.9 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector-strong").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 20% of all kernel functions, which increases the kernel code
	  size by about 2%.

endchoice

config HAVE_CONTEXT_TRACKING
	bool
	help
	  Provide kernel/user boundaries probes necessary for subsystems
	  that need it, such as userspace RCU extended quiescent state.
	  Syscalls need to be wrapped inside user_exit()-user_enter() through
	  the slow path using TIF_NOHZ flag. Exception handlers must be
	  wrapped as well. Irqs are already protected inside
	  rcu_irq_enter/rcu_irq_exit() but preemption or signal handling on
	  irq exit still need to be protected.

config HAVE_VIRT_CPU_ACCOUNTING
	bool

config HAVE_VIRT_CPU_ACCOUNTING_GEN
	bool
	default y if 64BIT
	help
	  With VIRT_CPU_ACCOUNTING_GEN, cputime_t becomes 64-bit.
	  Before enabling this option, arch code must be audited
	  to ensure there are no races in concurrent read/write of
	  cputime_t. For example, reading/writing 64-bit cputime_t on
	  some 32-bit arches may require multiple accesses, so proper
	  locking is needed to protect against concurrent accesses.

config HAVE_IRQ_TIME_ACCOUNTING
	bool
	help
	  Archs need to ensure they use a high enough resolution clock to
	  support irq time accounting and then call enable_sched_clock_irqtime().

config HAVE_ARCH_TRANSPARENT_HUGEPAGE
	bool

config HAVE_ARCH_HUGE_VMAP
	bool

config HAVE_ARCH_SOFT_DIRTY
	bool

config HAVE_MOD_ARCH_SPECIFIC
	bool
	help
	  The arch uses struct mod_arch_specific to store data. Many arches
	  just need a simple module loader without arch specific data - those
	  should not enable this.

config MODULES_USE_ELF_RELA
	bool
	help
	  Modules only use ELF RELA relocations. Modules with ELF REL
	  relocations will give an error.

config MODULES_USE_ELF_REL
	bool
	help
	  Modules only use ELF REL relocations. Modules with ELF RELA
	  relocations will give an error.

config HAVE_UNDERSCORE_SYMBOL_PREFIX
	bool
	help
	  Some architectures generate an _ in front of C symbols; things like
	  module loading and assembly files need to know about this.

config HAVE_IRQ_EXIT_ON_IRQ_STACK
	bool
	help
	  The architecture executes not only the irq handler on the irq
	  stack but also irq_exit(). This way we can process softirqs on
	  this irq stack instead of switching to a new one when we call
	  __do_softirq() at the end of a hardirq.
	  This spares a stack switch and improves cache usage on softirq
	  processing.

config PGTABLE_LEVELS
	int
	default 2

config ARCH_HAS_ELF_RANDOMIZE
	bool
	help
	  An architecture supports choosing randomized locations for
	  stack, mmap, brk, and ET_DYN. Defined functions:
	  - arch_mmap_rnd()
	  - arch_randomize_brk()

config HAVE_COPY_THREAD_TLS
	bool
	help
	  Architecture provides copy_thread_tls to accept tls argument via
	  normal C parameter passing, rather than extracting the syscall
	  argument from pt_regs.

config CLONE_BACKWARDS
	bool
	help
	  Architecture has tls passed as the 4th argument of clone(2),
	  not the 5th one.

config CLONE_BACKWARDS2
	bool
	help
	  Architecture has the first two arguments of clone(2) swapped.

config CLONE_BACKWARDS3
	bool
	help
	  Architecture has tls passed as the 3rd argument of clone(2),
	  not the 5th one.

config ODD_RT_SIGACTION
	bool
	help
	  Architecture has unusual rt_sigaction(2) arguments

config OLD_SIGSUSPEND
	bool
	help
	  Architecture has old sigsuspend(2) syscall, of one-argument variety

config OLD_SIGSUSPEND3
	bool
	help
	  Even weirder antique ABI - three-argument sigsuspend(2)

config OLD_SIGACTION
	bool
	help
	  Architecture has old sigaction(2) syscall. Nope, not the same
	  as OLD_SIGSUSPEND | OLD_SIGSUSPEND3 - alpha has sigsuspend(2),
	  but fairly different variant of sigaction(2), thanks to OSF/1
	  compatibility...

config COMPAT_OLD_SIGACTION
	bool

source "kernel/gcov/Kconfig"