Commit b6136773 authored by Andrew Morton, committed by Linus Torvalds

[PATCH] schedule_on_each_cpu(): reduce kmalloc() size

schedule_on_each_cpu() presently does a large kmalloc - 96 kbytes on a 1024-CPU configuration.

Rework it so that we do one 8192-byte allocation and then a pile of tiny ones,
via alloc_percpu().  This has a much higher chance of success (100% in the
current VM).

This also has the effect of reducing the memory requirements from NR_CPUS*n to num_possible_cpus*n.

Cc: Christoph Lameter <>
Cc: Andi Kleen <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent 232acbcf
@@ -428,22 +428,34 @@ int schedule_delayed_work_on(int cpu,
 	return ret;
 }
 
-int schedule_on_each_cpu(void (*func) (void *info), void *info)
+/**
+ * schedule_on_each_cpu - call a function on each online CPU from keventd
+ * @func: the function to call
+ * @info: a pointer to pass to func()
+ *
+ * Returns zero on success.
+ * Returns -ve errno on failure.
+ *
+ * Appears to be racy against CPU hotplug.
+ *
+ * schedule_on_each_cpu() is very slow.
+ */
+int schedule_on_each_cpu(void (*func)(void *info), void *info)
 {
 	int cpu;
-	struct work_struct *work;
+	struct work_struct *works;
 
-	work = kmalloc(NR_CPUS * sizeof(struct work_struct), GFP_KERNEL);
-	if (!work)
+	works = alloc_percpu(struct work_struct);
+	if (!works)
 		return -ENOMEM;
+
 	for_each_online_cpu(cpu) {
-		INIT_WORK(work + cpu, func, info);
+		INIT_WORK(per_cpu_ptr(works, cpu), func, info);
 		__queue_work(per_cpu_ptr(keventd_wq->cpu_wq, cpu),
-				work + cpu);
+				per_cpu_ptr(works, cpu));
 	}
 	flush_workqueue(keventd_wq);
-	kfree(work);
+	free_percpu(works);
 	return 0;
 }