Commit e17224bf authored by Nick Piggin, committed by Linus Torvalds

[PATCH] sched: less locking

During periodic load balancing, don't hold this runqueue's lock while
scanning remote runqueues, which can take a non trivial amount of time
especially on very large systems.

Holding the runqueue lock will only help to stabilise ->nr_running, however
this doesn't do much to help because tasks being woken will simply get held
up on the runqueue lock, so ->nr_running would not provide a really
accurate picture of runqueue load in that case anyway.

What's more, ->nr_running (and possibly the cpu_load averages) of remote
runqueues won't be stable anyway, so load balancing is always an inexact
operation.
Signed-off-by: Nick Piggin <>
Acked-by: Ingo Molnar <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent d6d5cfaf
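As an illustration of the locking pattern the message above describes, here is a minimal user-space sketch (not kernel code): the scan phase reads the remote queue's nr_running without any lock held, and both runqueue locks are taken, in a fixed order, only for the short window in which tasks are actually moved. Every name in it (struct rq, double_rq_lock, move_some_tasks, load_balance_sketch) is an illustrative stand-in built on pthread spinlocks, not the scheduler's own definition.

/*
 * Illustrative sketch only: models the "scan unlocked, lock both briefly
 * to move" pattern with pthread spinlocks in user space.  The types and
 * helpers here are stand-ins, not the kernel implementations.
 */
#include <pthread.h>
#include <stdio.h>

struct rq {
        pthread_spinlock_t lock;
        int nr_running;         /* read racily while scanning, like ->nr_running */
};

/* Take two queue locks in a fixed (address) order to avoid AB-BA deadlock. */
static void double_rq_lock(struct rq *a, struct rq *b)
{
        if (a < b) {
                pthread_spin_lock(&a->lock);
                pthread_spin_lock(&b->lock);
        } else {
                pthread_spin_lock(&b->lock);
                pthread_spin_lock(&a->lock);
        }
}

static void double_rq_unlock(struct rq *a, struct rq *b)
{
        pthread_spin_unlock(&a->lock);
        pthread_spin_unlock(&b->lock);
}

/* Hypothetical migration step; only ever called with both locks held. */
static int move_some_tasks(struct rq *dst, struct rq *src)
{
        int moved = 0;

        while (src->nr_running > 1 && src->nr_running > dst->nr_running) {
                src->nr_running--;
                dst->nr_running++;
                moved++;
        }
        return moved;
}

static int load_balance_sketch(struct rq *this_rq, struct rq *busiest)
{
        int nr_moved = 0;

        /* Scan phase: no locks held; nr_running is only ever an estimate. */
        if (busiest->nr_running > 1) {
                /* Move phase: both locks, held only around the move itself. */
                double_rq_lock(this_rq, busiest);
                nr_moved = move_some_tasks(this_rq, busiest);
                double_rq_unlock(this_rq, busiest);
        }
        return nr_moved;
}

int main(void)
{
        struct rq a, b;

        pthread_spin_init(&a.lock, PTHREAD_PROCESS_PRIVATE);
        pthread_spin_init(&b.lock, PTHREAD_PROCESS_PRIVATE);
        a.nr_running = 1;
        b.nr_running = 5;

        printf("moved %d tasks\n", load_balance_sketch(&a, &b));
        return 0;
}

The fixed acquisition order in double_rq_lock is what lets two CPUs balance toward each other concurrently without an AB-BA deadlock, and it only has to be respected during the brief move window rather than for the whole scan.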
@@ -2075,7 +2075,6 @@ static int load_balance(int this_cpu, runqueue_t *this_rq,
         int nr_moved, all_pinned = 0;
         int active_balance = 0;
         schedstat_inc(sd, lb_cnt[idle]);
         group = find_busiest_group(sd, this_cpu, &imbalance, idle);
@@ -2102,18 +2101,16 @@ static int load_balance(int this_cpu, runqueue_t *this_rq,
                  * still unbalanced. nr_moved simply stays zero, so it is
                  * correctly treated as an imbalance.
                  */
-                double_lock_balance(this_rq, busiest);
+                double_rq_lock(this_rq, busiest);
                 nr_moved = move_tasks(this_rq, this_cpu, busiest,
                                         imbalance, sd, idle, &all_pinned);
+                double_rq_unlock(this_rq, busiest);
                 /* All tasks on this runqueue were pinned by CPU affinity */
                 if (unlikely(all_pinned))
                         goto out_balanced;
         if (!nr_moved) {
                 schedstat_inc(sd, lb_failed[idle]);
@@ -2156,8 +2153,6 @@ static int load_balance(int this_cpu, runqueue_t *this_rq,
         return nr_moved;
         schedstat_inc(sd, lb_balanced[idle]);
         sd->nr_balance_failed = 0;
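A hedged note on the helper change visible in the second hunk: double_lock_balance() is the variant used when this_rq->lock is already held, so it only has to acquire the busiest queue's lock while preserving the global lock order; once the scan runs with no lock held, both locks must be taken together when an imbalance is found, hence the switch to double_rq_lock()/double_rq_unlock(). The sketch below reuses the illustrative stand-ins from the example above and is not the kernel's double_lock_balance(); it shows how a "take the second lock while already holding the first" helper can keep the ordering by dropping and retaking when a trylock fails in the wrong order.

/*
 * Illustrative only, built on the struct rq / pthread spinlock stand-ins
 * from the earlier sketch; not the kernel's double_lock_balance().
 * The caller is assumed to already hold this_rq->lock.
 */
static void double_lock_balance_sketch(struct rq *this_rq, struct rq *busiest)
{
        if (pthread_spin_trylock(&busiest->lock) != 0) {
                if (busiest < this_rq) {
                        /*
                         * Blocking on busiest->lock here would violate the
                         * fixed order, so drop our lock and retake both in
                         * the right order.
                         */
                        pthread_spin_unlock(&this_rq->lock);
                        pthread_spin_lock(&busiest->lock);
                        pthread_spin_lock(&this_rq->lock);
                } else {
                        pthread_spin_lock(&busiest->lock);
                }
        }
}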