- Apr 18, 2011
-
-
Linus Torvalds authored
next_pidmap() just quietly accepted whatever 'last' pid was passed in, which is not all that safe when one of the users is /proc. Admittedly the proc code should do some sanity checking on the range (and that will be the next commit), but that doesn't mean that the helper functions should just do that pidmap pointer arithmetic without checking the range of its arguments. So clamp 'last' to PID_MAX_LIMIT. The fact that we then do "last+1" doesn't really matter, the for-loop does check against the end of the pidmap array properly (it's only the actual pointer arithmetic overflow case we need to worry about, and going one bit beyond isn't going to overflow). [ Use PID_MAX_LIMIT rather than pid_max as per Eric Biederman ]
Reported-by: Tavis Ormandy <taviso@cmpxchg8b.com>
Analyzed-by: Robert Święcki <robert@swiecki.net>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
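(ed: For illustration, a minimal sketch of the clamp described above; the scan loop is elided and names follow kernel/pid.c, not verbatim source.)

    static int next_pidmap(struct pid_namespace *pid_ns, unsigned int last)
    {
        int offset;
        struct pidmap *map, *end;

        /* The fix: refuse an out-of-range 'last' before doing any
         * pointer arithmetic on the pidmap array. */
        if (last >= PID_MAX_LIMIT)
            return -1;

        offset = (last + 1) & BITS_PER_PAGE_MASK;
        map = &pid_ns->pidmap[(last + 1) / BITS_PER_PAGE];
        end = &pid_ns->pidmap[PIDMAP_ENTRIES];
        /* ... the for-loop scans map..end, properly bounded ... */
        return -1;
    }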
-
Richard Cochran authored
A dynamic posix clock is protected from asynchronous removal by a mutex. However, using a mutex has the unwanted effect that a long running clock operation in one process will unnecessarily block other processes. For example, one process might call read() to get an external time stamp coming in at one pulse per second. A second process calling clock_gettime would have to wait for almost a whole second. This patch fixes the issue by using a reader/writer semaphore instead of a mutex.
Signed-off-by: Richard Cochran <richard.cochran@omicron.at>
Cc: John Stultz <john.stultz@linaro.org>
Link: http://lkml.kernel.org/r/%3C20110330132421.GA31771%40riccoc20.at.omicron.at%3E
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
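(ed: The conversion follows the usual mutex-to-rwsem pattern; a hedged sketch, with the 'zombie' removal flag and the operation body illustrative rather than verbatim.)

    struct posix_clock {
        struct rw_semaphore rwsem;  /* was: struct mutex mutex */
        bool zombie;                /* set once the clock is removed */
        /* ... */
    };

    /* Clock operations take the semaphore shared, so a slow read()
     * no longer blocks a concurrent clock_gettime(). */
    static int clock_op(struct posix_clock *clk)
    {
        int err = 0;

        down_read(&clk->rwsem);
        if (clk->zombie)
            err = -ENODEV;
        /* ... perform the operation ... */
        up_read(&clk->rwsem);
        return err;
    }

    /* Removal takes it exclusive, waiting out all readers. */
    static void clock_gone(struct posix_clock *clk)
    {
        down_write(&clk->rwsem);
        clk->zombie = true;
        up_write(&clk->rwsem);
    }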
-
- Apr 16, 2011
-
-
Jens Axboe authored
It's a pretty close match to what we had before - the timer triggering meant that nobody had unplugged the plug in due time, which in the new scheme corresponds closely to the schedule() unplug. It's essentially the difference between an explicit unplug (IO unplug) and an implicit unplug (timer unplug: we scheduled with pending IO queued).
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
Jens Axboe authored
Linus correctly observes that the most important dispatch cases are now done from kblockd, which isn't ideal for latency reasons. The original reason for switching dispatches out-of-line was to avoid too deep a stack, so by _only_ letting the "accidental" flush directly in schedule() be guarded by offload to kblockd, we should be able to get the best of both worlds. So add a blk_schedule_flush_plug() that offloads to kblockd, and only use that from the schedule() path.
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
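(ed: The resulting pair looks roughly like this - a sketch assuming blk_flush_plug_list() takes a from_schedule flag that selects inline dispatch vs. kblockd offload.)

    /* Explicit unplug: dispatch inline for lowest latency. */
    static inline void blk_flush_plug(struct task_struct *tsk)
    {
        struct blk_plug *plug = tsk->plug;

        if (plug)
            blk_flush_plug_list(plug, false);
    }

    /* Implicit unplug from schedule(): punt to kblockd, since we
     * may already be deep into the stack here. */
    static inline void blk_schedule_flush_plug(struct task_struct *tsk)
    {
        struct blk_plug *plug = tsk->plug;

        if (plug)
            blk_flush_plug_list(plug, true);
    }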
-
- Apr 15, 2011
-
-
Darren Hart authored
The FLAGS_HAS_TIMEOUT flag was not getting set, causing the restart_block to restart futex_wait() without a timeout after a signal. Commit b41277dc in 2.6.38 introduced the regression by accidentally removing the FLAGS_HAS_TIMEOUT assignment from futex_wait() during the setup of the restart block. Restore the original behavior.
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=32922
Reported-by: Tim Smith <tsmith201104@yahoo.com>
Reported-by: Torsten Hilbrich <torsten.hilbrich@secunet.com>
Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: John Kacur <jkacur@redhat.com>
Cc: stable@kernel.org
Link: http://lkml.kernel.org/r/%3Cdaac0eb3af607f72b9a4d3126b2ba8fb5ed3b883.1302820917.git.dvhart%40linux.intel.com%3E
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
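(ed: The restart-block setup in futex_wait() then looks roughly as below - a sketch per the commit text, the restored flag being the whole fix.)

    restart->fn = futex_wait_restart;
    restart->futex.uaddr = uaddr;
    restart->futex.val = val;
    restart->futex.time = abs_time->tv64;
    restart->futex.bitset = bitset;
    restart->futex.flags = flags | FLAGS_HAS_TIMEOUT;  /* was: flags */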
-
- Apr 13, 2011
-
-
Linus Torvalds authored
We really only want to unplug the pending IO when the process actually goes to sleep. So move the test for flushing the plug up to the place where we actually deactivate the task - where we have properly checked for preemption and for the process really sleeping.
Acked-by: Jens Axboe <jaxboe@fusionio.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
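(ed: A sketch of where the flush ends up in schedule() - after the state and preemption checks, next to the actual deactivation; locking details are abbreviated, not verbatim.)

    if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) {
        if (unlikely(signal_pending_state(prev->state, prev))) {
            prev->state = TASK_RUNNING;
        } else {
            /* The task really is going to sleep now. */
            deactivate_task(rq, prev, DEQUEUE_SLEEP);

            /* Unplug any pending IO before we go idle. */
            if (blk_needs_flush_plug(prev)) {
                raw_spin_unlock(&rq->lock);
                blk_flush_plug(prev);
                raw_spin_lock(&rq->lock);
            }
        }
    }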
-
- Apr 12, 2011
-
-
Jens Axboe authored
It was removed with the on-stack plugging; re-add it and track the depth of requests added when flushing the plug.
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
Jens Axboe authored
We no longer have an unplug timer running, so no point in keeping the trace point.
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
- Apr 11, 2011
-
-
Shriram Rajagopalan authored
Make XEN_SAVE_RESTORE select HIBERNATE_CALLBACKS. Remove XEN_SAVE_RESTORE dependency from PM_SLEEP.
Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Rafael J. Wysocki authored
Xen save/restore is going to use hibernate device callbacks for quiescing devices and putting them back to normal operations and it would need to select CONFIG_HIBERNATION for this purpose. However, that also would cause the hibernate interfaces for user space to be enabled, which might confuse user space, because the Xen kernels don't support hibernation. Moreover, it would be wasteful, as it would make the Xen kernels include a substantial amount of code that they would never use.

To address this issue introduce new power management Kconfig option CONFIG_HIBERNATE_CALLBACKS, such that it will only select the code that is necessary for the hibernate device callbacks to work and make CONFIG_HIBERNATION select it. Then, Xen save/restore will be able to select CONFIG_HIBERNATE_CALLBACKS without dragging the entire hibernate code along with it.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Tested-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
-
Ken Chen authored
The scheduler load balancer has specific code to deal with cases of unbalanced systems due to lots of unmovable tasks (for example because of hard CPU affinity). In those situations, it excludes the busiest CPU that has pinned tasks from load balance consideration such that it can perform a second load balance pass on the rest of the system.

This all works as designed if there is only one cgroup in the system. However, when we have multiple cgroups, this logic has false positives and triggers multiple load balance passes even though there are actually no pinned tasks at all. The reason for the false positives is that the all-pinned logic sits in the lowest-level function, can_migrate_task(), and is too low level: load_balance_fair() iterates each task group and calls balance_tasks() to migrate target load. Along the way, balance_tasks() will also set an all_pinned variable. Given that task groups are iterated, this all_pinned variable ends up reflecting only the status of the last group in the scanning process. A task group can fail to migrate load for a number of reasons, none of them due to cpu affinity. However, this status bit is propagated back up to the higher level load_balance(), which incorrectly thinks that no tasks could be moved. It kicks off the all-pinned logic and starts multiple passes attempting to move load onto the puller CPU.

To fix this, move the all_pinned aggregation up to the iterator level. This ensures that the status is aggregated over all task groups, not just the last one in the list.
Signed-off-by: Ken Chen <kenchen@google.com>
Cc: stable@kernel.org
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/BANLkTi=ernzNawaR5tJZEsV_QVnfxqXmsQ@mail.gmail.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
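(ed: A sketch of the aggregation move, with simplified signatures - the point is that *all_pinned stays set only if every scanned group was pinned, instead of mirroring just the last group.)

    static unsigned long
    load_balance_fair(struct rq *this_rq, struct rq *busiest,
                      unsigned long max_load_move, int *all_pinned)
    {
        unsigned long moved = 0;
        struct cfs_rq *cfs_rq;

        *all_pinned = 1;  /* assume pinned until proven otherwise */

        for_each_leaf_cfs_rq(busiest, cfs_rq) {
            /* balance_tasks() clears *all_pinned as soon as it
             * sees a movable task in any group, not just the last. */
            moved += balance_tasks(this_rq, busiest, cfs_rq,
                                   max_load_move - moved, all_pinned);
            if (moved >= max_load_move)
                break;
        }
        return moved;
    }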
-
Ken Chen authored
In function find_busiest_group(), the sched-domain avg_load isn't calculated at all if there is a group imbalance within the domain. This causes an erroneous imbalance calculation: calculate_imbalance() sees sds->avg_load == 0 and dumps the entire sds->max_load into the imbalance variable, which is used later on to migrate the entire load from the busiest CPU to the puller CPU. This has two really bad effects:

1. A stampede of task migrations, which cannot break out of the bad state because of a positive feedback loop: large load delta -> heavier load migration -> larger imbalance, and the cycle goes on.

2. Severe imbalance in CPU queue depth. This causes really long scheduling latency blips, which badly affect applications with tight latency requirements.

The fix is to have the kernel calculate the domain avg_load in both cases. This ensures that the imbalance calculation is always sensible and that the target is usually half way between the busiest and the puller CPU.
Signed-off-by: Ken Chen <kenchen@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: <stable@kernel.org>
Link: http://lkml.kernel.org/r/20110408002322.3A0D812217F@elm.corp.google.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
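(ed: The shape of the fix, sketched - compute the average before any early bail-out so calculate_imbalance() never sees zero; names follow the scheduler code of that era, not verbatim.)

    sds.avg_load = (SCHED_LOAD_SCALE * sds.total_load) / sds.total_pwr;

    if (sds.group_imb)
        goto force_balance;  /* avg_load is valid on this path too */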
-
Stephane Eranian authored
There is a bug in perf_event_enable_on_exec() when cgroup events are active on a CPU: the cgroup events may be scheduled twice, causing event state corruptions which eventually may lead to kernel panics. The reason is that the function needs to first schedule out the cgroup events, just like for the per-thread events. The cgroup events are scheduled back in automatically from the perf_event_context_sched_in() function. The patch also adds a WARN_ON_ONCE() in perf_cgroup_switch() to catch any bogus state.
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110406005454.GA1062@quad
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- Apr 08, 2011
-
-
Randy Dunlap authored
Fix erroneous syscall kernel-doc comments in kernel/signal.c.
Reported-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- Apr 05, 2011
-
-
Peter Zijlstra authored
Instead of the possible multiple evaluation of num_online_cpus() in rebalance_domains() that Linus reported, avoid it altogether in the normal case, since it's implemented with a Hamming weight function over a cpu bitmask which can be darn expensive for those with big iron. This also makes the code cleaner, smaller and better documented.
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1301991265.2225.12.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
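(ed: A sketch of the idea - recompute the weight only on CPU hotplug and read a cached value in the hot path; names here are illustrative.)

    static unsigned long max_load_balance_interval = HZ/10;

    /* Called from the CPU hotplug notifier. */
    static void update_max_interval(void)
    {
        max_load_balance_interval = HZ * num_online_cpus() / 10;
    }

    /* rebalance_domains() then clamps with a plain load instead of
     * evaluating a Hamming weight on every pass: */
    interval = clamp(interval, 1UL, max_load_balance_interval);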
-
- Apr 04, 2011
-
-
Randy Dunlap authored
Add kernel-doc to syscalls in signal.c.
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
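(ed: For reference, kernel-doc on a syscall takes this shape - an illustrative example, not necessarily one of the comments added here.)

    /**
     * sys_kill - send a signal to a process
     * @pid: the PID of the process
     * @sig: signal to be sent
     */
    SYSCALL_DEFINE2(kill, pid_t, pid, int, sig)
    {
        /* ... */
    }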
-
Randy Dunlap authored
General coding style and comment fixes; no code changes:

- Use multi-line-comment coding style.
- Put some function signatures completely on one line.
- Hyphenate some words.
- Spell Posix as POSIX.
- Correct typos & spellos in some comments.
- Drop trailing whitespace.
- End sentences with periods.

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Richard Cochran authored
The ADJ_SETOFFSET bit added in commit 094aa188 ("ntp: Add ADJ_SETOFFSET mode bit") also introduced a way for any user to change the system time. Sneaky or buggy calls to adjtimex() could set ADJ_OFFSET_SS_READ | ADJ_SETOFFSET, which would result in a successful call to timekeeping_inject_offset(). This patch fixes the issue by adding the capability check.
Signed-off-by: Richard Cochran <richard.cochran@omicron.at>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
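(ed: The essence of the check, sketched - its exact placement inside do_adjtimex() is assumed from the commit description.)

    int do_adjtimex(struct timex *txc)
    {
        /* ... existing mode validation ... */

        /* ADJ_SETOFFSET steps the clock, so demand the same
         * privilege as settimeofday(). */
        if ((txc->modes & ADJ_SETOFFSET) && !capable(CAP_SYS_TIME))
            return -EPERM;

        /* ... eventually reaches timekeeping_inject_offset() ... */
    }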
-
- Apr 02, 2011
-
-
Xiaotian Feng authored
The allocated cpumask should be freed in __setup_irq().
Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
LKML-Reference: <1301744375-6812-1-git-send-email-dfeng@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
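(ed: The pattern of the fix, sketched with the registration logic elided.)

    static int __setup_irq(unsigned int irq, struct irq_desc *desc,
                           struct irqaction *new)
    {
        cpumask_var_t mask;
        int ret;

        if (!alloc_cpumask_var(&mask, GFP_KERNEL))
            return -ENOMEM;

        /* ... registration; error paths do 'goto out_mask' ... */

        ret = 0;
    out_mask:
        free_cpumask_var(mask);  /* previously never freed */
        return ret;
    }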
-
- Mar 31, 2011
-
-
Anton Blanchard authored
On ppc64 the crashkernel region almost always overlaps an area of firmware. This works fine except when using the sysfs interface to reduce the kdump region. If we free the firmware area we are guaranteed to crash. Rename free_reserved_phys_range to crash_free_reserved_phys_range and make it a weak function so we can override it.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
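(ed: A sketch of the weak-default/strong-override arrangement; the bodies are placeholders, and the two definitions live in different files.)

    /* kernel/kexec.c: generic default, now overridable. */
    void __weak crash_free_reserved_phys_range(unsigned long begin,
                                               unsigned long end)
    {
        /* ... free each reserved page between begin and end ... */
    }

    /* arch/powerpc: strong definition that skips firmware-owned
     * ranges instead of freeing them. */
    void crash_free_reserved_phys_range(unsigned long begin,
                                        unsigned long end)
    {
        /* ... as above, but leave firmware regions reserved ... */
    }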
-
Lucas De Marchi authored
Fixes generated by 'codespell' and manually reviewed.
Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
-
Peter Zijlstra authored
sys_perf_event_open() had an imbalance in the number of task refs it took, causing a memory leak.
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: stable@kernel.org # .37+
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Frederic Weisbecker authored
Ensure we allow 512 KiB + 1 page for user control without assuming a 4096-byte page size.
Reported-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: <stable@kernel.org>
LKML-Reference: <1301535209-9679-1-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
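(ed: The limit can then be written in page-size-independent terms, roughly as below - assumed shape, not verbatim.)

    /* 512 KiB plus exactly one page, expressed in KiB; a hard-coded
     * 516 would assume PAGE_SIZE == 4096. */
    int sysctl_perf_event_mlock __read_mostly = 512 + (PAGE_SIZE / 1024);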
-
Sisir Koppaka authored
The interval for checking whether scheduling domains are due to be balanced currently depends on the compile-time constant NR_CPUS, which may not accurately reflect the number of CPUs online at the time of the check. Thus replace NR_CPUS with num_online_cpus(). (ed: Should only affect those who set NR_CPUS really high, such as 4096 or so :-)
Signed-off-by: Sisir Koppaka <sisir.koppaka@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <AANLkTikqHWid2Q93F5U5Qw5snJH8C5PXoa7J6=6hYO94@mail.gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Dario Faggioli authored
sched_setscheduler() (in sched.c) is called in order to change the scheduling policy and/or the real-time priority of a task. Thus, if we find out that neither of those is actually being modified, it is possible to return earlier and save the overhead of a full deactivate+activate cycle of the task in question.

Besides that, if we have more than one SCHED_FIFO task with the same priority on the same rq (which means they share the same priority queue), having one of them change its position in the priority queue because of a sched_setscheduler (as happens by means of the deactivate+activate) that does not actually change the priority violates POSIX, which states, for SCHED_FIFO:

"If a thread whose policy or priority has been modified by pthread_setschedprio() is a running thread or is runnable, the effect on its position in the thread list depends on the direction of the modification, as follows: a. <...> b. If the priority is unchanged, the thread does not change position in the thread list. c. <...>"

http://pubs.opengroup.org/onlinepubs/009695399/functions/xsh_chap02_08.html

(ed: And the POSIX specification here does, briefly and somewhat unexpectedly, match what common sense tells us as well.)
Signed-off-by: Dario Faggioli <raistlin@linux.it>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1300971618.3960.82.camel@Palantir>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
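(ed: The early-out then looks roughly like this; sketched from the description, with the unlock sequence assumed.)

    /* Nothing changes: skip the deactivate+activate cycle, which
     * would otherwise move a SCHED_FIFO task within its priority
     * list in violation of POSIX. */
    if (unlikely(policy == p->policy && (!rt_policy(policy) ||
                 param->sched_priority == p->rt_priority))) {
        __task_rq_unlock(rq);
        raw_spin_unlock_irqrestore(&p->pi_lock, flags);
        return 0;
    }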
-
- Mar 30, 2011
-
-
Thomas Gleixner authored
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- Mar 29, 2011
-
-
Stephen Rothwell authored
Fixes these errors:

kernel/irq/chip.c: In function 'handle_edge_eoi_irq':
kernel/irq/chip.c:517: warning: label 'out_unlock' defined but not used
kernel/irq/chip.c:503: error: label 'out_eoi' used but not defined

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Thomas Gleixner authored
Reported-by: <michael@ellerman.id.au>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linuxppc-dev@lists.ozlabs.org
-
Thomas Gleixner authored
All users converted to new interface.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Thomas Gleixner authored
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Thomas Gleixner authored
The only subtle difference is that alpha uses ACTUAL_NR_IRQS and prints the IRQF_DISABLED flag. Change the generic implementation to deal with ACTUAL_NR_IRQS if defined. The IRQF_DISABLED printing is pointless, as we nowadays run all interrupts with irqs disabled.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
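(ed: A sketch of how the generic code can honour the larger alpha range.)

    #ifndef ACTUAL_NR_IRQS
    # define ACTUAL_NR_IRQS nr_irqs
    #endif

    /* ... the /proc/interrupts loop then runs 0 .. ACTUAL_NR_IRQS-1 ... */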
-
Thomas Gleixner authored
The late-night fixup missed converting the data type from irq_desc to irq_data, which results in a harmless but annoying warning.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- Mar 28, 2011
-
-
Thomas Gleixner authored
I missed the CONFIG_GENERIC_PENDING_IRQ dependency in the affinity-related functions and the IRQ_LEVEL propagation into irq_data state. Did not pop up on my main test platforms. :(
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: David Daney <ddaney@caviumnetworks.com>
-
Roland Dreier authored
Commit da48524e ("Prevent rt_sigqueueinfo and rt_tgsigqueueinfo from spoofing the signal code") made the check on si_code too strict. There are several legitimate places where glibc wants to queue a negative si_code different from SI_QUEUE:

- This was first noticed with glibc's aio implementation, which wants to queue a signal with si_code SI_ASYNCIO; the current kernel causes glibc's tst-aio4 test to fail because rt_sigqueueinfo() fails with EPERM.

- Further examination of the glibc source shows that getaddrinfo_a() wants to use SI_ASYNCNL (which the kernel does not even define).

- The timer_create() fallback code wants to queue signals with SI_TIMER.

As suggested by Oleg Nesterov <oleg@redhat.com>, loosen the check to forbid only the problematic SI_TKILL case.
Reported-by: Klaus Dittrich <kladit@arcor.de>
Acked-by: Julien Tinnes <jln@google.com>
Cc: <stable@kernel.org>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
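(ed: The relaxed check, sketched - still refuse kernel-origin codes (>= 0) and SI_TKILL, but allow the negative user-space codes glibc needs, such as SI_ASYNCIO and SI_TIMER.)

    if (info.si_code >= 0 || info.si_code == SI_TKILL)
        return -EPERM;  /* was: if (info.si_code != SI_QUEUE) */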
-
Thomas Gleixner authored
Sigh, I'm overworked.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Randy Dunlap authored
Fix new irq-related kernel-doc warnings in 2.6.38:

Warning(kernel/irq/manage.c:149): No description found for parameter 'mask'
Warning(kernel/irq/manage.c:149): Excess function parameter 'cpumask' description in 'irq_set_affinity'
Warning(include/linux/irq.h:161): No description found for parameter 'state_use_accessors'
Warning(include/linux/irq.h:161): Excess struct/union/enum/typedef member 'state_use_accessor' description in 'irq_data'

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
LKML-Reference: <20110318093356.b939558d.randy.dunlap@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Thomas Gleixner authored
Last user gone.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Thomas Gleixner authored
This is a replacement for the cell flow handler, which is in the way of cleanups. It must be selected explicitly to avoid general bloat.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Thomas Gleixner authored
We really need these flags for some of the interrupt chips. Move them from internal state to irq_data and provide proper accessors.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: David Daney <ddaney@caviumnetworks.com>
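(ed: An accessor in the style the commit describes - a sketch; the field name matches the irq_data kernel-doc mentioned earlier, the flag name is assumed.)

    /* Read side; a matching setter lives in the core code. */
    static inline bool irqd_is_level_type(struct irq_data *d)
    {
        return d->state_use_accessors & IRQD_LEVEL;
    }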
-
- Mar 27, 2011
-
-
David Daney authored
The .irq_cpu_online() and .irq_cpu_offline() functions may need to adjust affinity, but they are called with the descriptor lock held. Create __irq_set_affinity_locked(), which is called with the lock held. Make irq_set_affinity() just a wrapper that acquires the lock. [ tglx: Changed the argument to irq_data, added a !desc check and moved the !irq_set_affinity check where it belongs ]
Signed-off-by: David Daney <ddaney@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Cc: ralf@linux-mips.org
LKML-Reference: <1301081931-11240-4-git-send-email-ddaney@caviumnetworks.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
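(ed: The resulting structure, sketched - the locked worker plus a thin locking wrapper; the worker body is elided.)

    int __irq_set_affinity_locked(struct irq_data *data,
                                  const struct cpumask *mask)
    {
        /* Caller holds desc->lock; perform the actual update, or
         * defer it if CONFIG_GENERIC_PENDING_IRQ requires that. */
        /* ... */
        return 0;
    }

    int irq_set_affinity(unsigned int irq, const struct cpumask *mask)
    {
        struct irq_desc *desc = irq_to_desc(irq);
        unsigned long flags;
        int ret;

        if (!desc)
            return -EINVAL;

        raw_spin_lock_irqsave(&desc->lock, flags);
        ret = __irq_set_affinity_locked(irq_desc_get_irq_data(desc), mask);
        raw_spin_unlock_irqrestore(&desc->lock, flags);
        return ret;
    }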
-