/*
 * Only give sleepers 50% of their service deficit. This allows
 * them to run sooner, but does not allow tons of sleepers to
 * rip the spread apart.
 */
SCHED_FEAT(GENTLE_FAIR_SLEEPERS, true)
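
/*
 * Illustrative sketch (not part of this file; the helper name and
 * signature are invented): roughly what place_entity() in fair.c does
 * with this feature when an entity wakes from sleep. The halving of
 * the sleeper credit is the behaviour the feature bit controls.
 */
static inline u64 gentle_sleeper_place_example(u64 min_vruntime,
					       u64 sched_latency,
					       bool gentle)
{
	u64 thresh = sched_latency;

	/* Give back only 50% of the service deficit, not all of it. */
	if (gentle)
		thresh >>= 1;

	/* Waking sleepers are placed this far behind min_vruntime. */
	return min_vruntime - thresh;
}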

/*
 * Place new tasks ahead so that they do not starve already running
 * tasks.
 */
SCHED_FEAT(START_DEBIT, true)
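
/*
 * Illustrative sketch (hypothetical helper): with START_DEBIT, a newly
 * forked entity starts with its vruntime pushed one virtual slice into
 * the future, so it queues behind already running tasks instead of
 * preempting them immediately. vslice stands in for sched_vslice().
 */
static inline u64 start_debit_place_example(u64 min_vruntime, u64 vslice,
					    bool start_debit)
{
	u64 vruntime = min_vruntime;

	/* Charge the new task one virtual slice up front. */
	if (start_debit)
		vruntime += vslice;

	return vruntime;
}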

/*
 * Prefer to schedule the task we woke last (assuming it failed
 * wakeup-preemption), since it's likely going to consume data we
 * touched; this increases cache locality.
 */
SCHED_FEAT(NEXT_BUDDY, false)

/*
 * Prefer to schedule the task that ran last (when we did
 * wake-preempt) as that will likely touch the same data; this
 * increases cache locality.
 */
SCHED_FEAT(LAST_BUDDY, true)
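
/*
 * Illustrative sketch (simplified, hypothetical types): how the two
 * buddy hints bias pick_next_entity(). The real code also checks, via
 * wakeup_preempt_entity(), that honouring a buddy would not be too
 * unfair to the leftmost entity; that check is omitted here.
 */
struct pick_example {
	int leftmost;	/* entity with the smallest vruntime */
	int next;	/* NEXT_BUDDY hint, -1 if unset */
	int last;	/* LAST_BUDDY hint, -1 if unset */
};

static inline int pick_next_example(const struct pick_example *p)
{
	/* The next buddy (the task we just woke) takes precedence... */
	if (p->next >= 0)
		return p->next;
	/* ...then the last buddy (the task that ran before preempting)... */
	if (p->last >= 0)
		return p->last;
	/* ...otherwise fall back to the plain fair choice. */
	return p->leftmost;
}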

/*
 * Consider buddies to be cache hot; this decreases the likelihood of
 * a cache buddy being migrated away, which increases cache locality.
 */
SCHED_FEAT(CACHE_HOT_BUDDY, true)

/*
 * Allow wakeup-time preemption of the current task:
 */
SCHED_FEAT(WAKEUP_PREEMPTION, true)

/*
 * Use arch-dependent CPU capacity functions.
 */
SCHED_FEAT(ARCH_CAPACITY, true)

SCHED_FEAT(HRTICK, false)
SCHED_FEAT(DOUBLE_TICK, false)
SCHED_FEAT(LB_BIAS, true)

/*
 * Decrement CPU capacity based on time not spent running tasks.
 */
SCHED_FEAT(NONTASK_CAPACITY, true)
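
/*
 * Illustrative sketch (hypothetical helper): the effect is that the
 * CPU capacity reported to the load balancer shrinks by the fraction
 * of time spent on non-task work (IRQ, RT). Loosely modelled on
 * scale_rt_capacity() in fair.c.
 */
static inline unsigned long nontask_scaled_capacity_example(unsigned long capacity,
							    u64 total_time,
							    u64 nontask_time)
{
	u64 available;

	/* Non-task work ate everything; report a token capacity. */
	if (nontask_time >= total_time)
		return 1;

	available = total_time - nontask_time;

	/* capacity * available / total */
	return (unsigned long)((capacity * available) / total_time);
}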

#ifdef CONFIG_PREEMPT_RT_FULL
SCHED_FEAT(TTWU_QUEUE, false)
# ifdef CONFIG_PREEMPT_LAZY
SCHED_FEAT(PREEMPT_LAZY, true)
# endif
#else

/*
 * Queue remote wakeups on the target CPU and process them
 * using the scheduler IPI. Reduces rq->lock contention/bounces.
 */
SCHED_FEAT(TTWU_QUEUE, true)
#endif
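
/*
 * Illustrative sketch (simplified, hypothetical types): instead of the
 * waker taking the remote rq->lock, the task is pushed onto the target
 * CPU's wake list and the scheduler IPI makes the target finish the
 * wakeup locally. Modelled loosely on ttwu_queue_remote(); the real
 * push is a lock-free llist_add(), not the plain store shown here.
 */
struct wake_entry_example {
	struct wake_entry_example *next;
};

struct remote_rq_example {
	struct wake_entry_example *wake_list;	/* LIFO list head */
};

static inline void ttwu_queue_remote_example(struct remote_rq_example *rq,
					     struct wake_entry_example *task,
					     void (*send_scheduler_ipi)(void))
{
	/* Push the wakee onto the target CPU's wake list. */
	task->next = rq->wake_list;
	rq->wake_list = task;

	/* Kick the target; it drains the list under its own rq->lock. */
	send_scheduler_ipi();
}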

#ifdef HAVE_RT_PUSH_IPI
/*
 * In order to avoid a thundering herd of CPUs that lower their
 * priorities at the same time while a single CPU has a migratable RT
 * task waiting to run, where the other CPUs would all try to take
 * that CPU's rq lock and possibly create large contention, it may be
 * a better scenario to send an IPI to that CPU and let it push the
 * RT task to where it should go.
 */
SCHED_FEAT(RT_PUSH_IPI, true)
#endif
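
/*
 * Illustrative sketch (hypothetical helper): with the feature off,
 * every CPU that drops below the waiting RT task's priority tries to
 * pull it, all contending on the source CPU's rq lock. With the
 * feature on, each of them just sends one IPI and the source CPU
 * pushes the task out itself.
 */
static inline void rt_lower_prio_example(bool rt_push_ipi,
					 void (*send_push_ipi)(int src_cpu),
					 void (*pull_rt_task)(int src_cpu),
					 int src_cpu)
{
	if (rt_push_ipi)
		send_push_ipi(src_cpu);	/* one interrupt, no lock fight */
	else
		pull_rt_task(src_cpu);	/* grabs src_cpu's rq lock */
}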

SCHED_FEAT(FORCE_SD_OVERLAP, false)
SCHED_FEAT(RT_RUNTIME_SHARE, true)
SCHED_FEAT(LB_MIN, false)

/*
 * Apply the automatic NUMA scheduling policy. Enabled automatically
 * at runtime if running on a NUMA machine. Can be controlled via
 * numa_balancing=
 */
#ifdef CONFIG_NUMA_BALANCING
SCHED_FEAT(NUMA, false)

/*
 * NUMA_FAVOUR_HIGHER will favor moving tasks towards nodes where a
 * higher number of hinting faults are recorded during active load
 * balancing.
 */
SCHED_FEAT(NUMA_FAVOUR_HIGHER, true)

/*
 * NUMA_RESIST_LOWER will resist moving tasks towards nodes where a
 * lower number of hinting faults have been recorded. As this has
 * the potential to prevent a task from ever migrating to a new node
 * due to CPU overload, it is disabled by default.
 */
SCHED_FEAT(NUMA_RESIST_LOWER, false)
#endif
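
/*
 * Illustrative sketch (hypothetical helper and enum): how the two NUMA
 * feature bits bias a migration decision based on per-node NUMA hinting
 * fault counts. Simplified; the real comparisons live in the fair-class
 * load-balancing path.
 */
enum numa_migrate_example {
	NUMA_MIGRATE_OK,	/* no bias, regular balancing decides */
	NUMA_MIGRATE_PREFER,	/* pull towards the destination node */
	NUMA_MIGRATE_RESIST,	/* resist moving to the destination node */
};

static inline enum numa_migrate_example
numa_migrate_check_example(unsigned long src_faults, unsigned long dst_faults,
			   bool favour_higher, bool resist_lower)
{
	/* NUMA_FAVOUR_HIGHER: favor nodes with more hinting faults. */
	if (favour_higher && dst_faults > src_faults)
		return NUMA_MIGRATE_PREFER;

	/* NUMA_RESIST_LOWER: resist nodes with fewer hinting faults. */
	if (resist_lower && dst_faults < src_faults)
		return NUMA_MIGRATE_RESIST;

	return NUMA_MIGRATE_OK;
}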