sched: Fix more load-balancing fallout
author     Peter Zijlstra <a.p.zijlstra@chello.nl>
           Tue, 17 Apr 2012 11:38:40 +0000 (13:38 +0200)
committer  Ingo Molnar <mingo@kernel.org>
           Thu, 26 Apr 2012 10:54:52 +0000 (12:54 +0200)
commit     337122919ccb8aca9f99915f833b707cc4d5c3d5
tree       92e9a3368e75b94486dc54f2688453fed17d4eed
parent     bf90bdb1cb2e9e3ed77c9a8f68938bffc2fdd839

Commits 799848d49b05 ("sched: Ditch per cgroup task lists for
load-balancing") and c0de4a6c5 ("sched: Fix load-balance wreckage")
left some more wreckage.

Setting loop_max unconditionally to ->nr_running meant load-balancing
could take a very long time on very long runqueues (hackbench!). So keep
the sysctl as the upper limit on the number of tasks we'll iterate.

Furthermore, the minimum-load filter for migration completely fails with
cgroups, since inequality in per-cpu state can easily lead to such
small loads :/

Furthermore, the change to add new tasks to the tail of the queue
instead of the head seems to have some effect... not quite sure I
understand why.

Combined, these fixes solve the huge hackbench regression reported by
Tim when hackbench is run in a cgroup.

Reported-by: Tim Chen <tim.c.chen@linux.intel.com>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1335365763.28150.267.camel@twins
[ got rid of the CONFIG_PREEMPT tuning and made small readability edits ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/fair.c
kernel/sched/features.h