- May 23, 2011
-
-
Mandeep Singh Baines authored
Before the conversion of the NMI watchdog to perf events, the watchdog timeout was 5 seconds. Now it is 60 seconds. For my particular application, netbooks, 5 seconds was the better timeout: with a short timeout we catch faults earlier and are able to send back a panic, while with a 60 second timeout the user is unlikely to wait and will instead hit the power button, causing us to lose the panic info. This change configures the NMI period to watchdog_thresh and sets softlockup_thresh to watchdog_thresh * 2. In addition, watchdog_thresh was reduced to 10 seconds as suggested by Ingo Molnar.

Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
Cc: Marcin Slusarz <marcin.slusarz@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Link: http://lkml.kernel.org/r/1306127423-3347-4-git-send-email-msb@chromium.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
LKML-Reference: <20110517071642.GF22305@elte.hu>
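A minimal sketch of the relationship this change establishes, using the kernel/watchdog.c names of the era (illustrative, not the literal patch):

    /* The NMI (hard lockup) detector fires on a watchdog_thresh window;
     * the soft lockup threshold is derived as twice that value. */
    int watchdog_thresh = 10;    /* seconds; reduced from 60 */

    static int get_softlockup_thresh(void)
    {
            return watchdog_thresh * 2;
    }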
-
Mandeep Singh Baines authored
This restores the previous behavior of softlockup_thresh. Currently, setting watchdog_thresh to zero causes the watchdog kthreads to consume a lot of CPU. In addition, the logic of proc_dowatchdog_thresh and proc_dowatchdog_enabled has been factored into proc_dowatchdog.

Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
Cc: Marcin Slusarz <marcin.slusarz@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Link: http://lkml.kernel.org/r/1306127423-3347-3-git-send-email-msb@chromium.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
LKML-Reference: <20110517071018.GE22305@elte.hu>
-
Mandeep Singh Baines authored
Don't take any action on an unsuccessful write to /proc.

Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
Cc: Marcin Slusarz <marcin.slusarz@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Link: http://lkml.kernel.org/r/1306127423-3347-2-git-send-email-msb@chromium.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Mandeep Singh Baines authored
In get_sample_period(), softlockup_thresh is integer-divided by 5 before the multiplication by NSEC_PER_SEC, which rounds softlockup_thresh down to the nearest integer multiple of 5. For example, a softlockup_thresh of 4 rounds down to 0.

Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
Cc: Marcin Slusarz <marcin.slusarz@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Link: http://lkml.kernel.org/r/1306127423-3347-1-git-send-email-msb@chromium.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
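The bug is plain integer arithmetic; a minimal userspace rendering of the two orderings (the real code lives in kernel/watchdog.c):

    #include <stdint.h>

    #define NSEC_PER_SEC 1000000000ULL

    /* Buggy ordering: 4 / 5 == 0 in integer math, so the period collapses. */
    static uint64_t get_sample_period_buggy(unsigned int softlockup_thresh)
    {
            return softlockup_thresh / 5 * NSEC_PER_SEC;
    }

    /* Fixed ordering: fold the divide into the constant; nothing is lost. */
    static uint64_t get_sample_period_fixed(unsigned int softlockup_thresh)
    {
            return softlockup_thresh * (NSEC_PER_SEC / 5);
    }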
-
- May 20, 2011
-
-
Kevin Cernekee authored
When building with:

    CONFIG_64BIT=y
    CONFIG_MIPS32_COMPAT=y
    CONFIG_COMPAT=y
    CONFIG_MIPS32_O32=y
    CONFIG_MIPS32_N32=y
    CONFIG_SYSVIPC is not set (and implicitly: CONFIG_SYSVIPC_COMPAT is not set)

the final link fails with unresolved symbols for: compat_sys_semctl, compat_sys_msgsnd, compat_sys_msgrcv, compat_sys_shmctl, compat_sys_msgctl, compat_sys_semtimedop. The fix is to add cond_syscall declarations for all syscalls in ipc/compat.c.

Signed-off-by: Kevin Cernekee <cernekee@gmail.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
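The fix amounts to cond_syscall() declarations, which alias the missing implementations to sys_ni_syscall when SYSVIPC is configured out; a sketch of the added lines (in mainline these sit with the other cond_syscall stubs):

    cond_syscall(compat_sys_semctl);
    cond_syscall(compat_sys_msgsnd);
    cond_syscall(compat_sys_msgrcv);
    cond_syscall(compat_sys_shmctl);
    cond_syscall(compat_sys_msgctl);
    cond_syscall(compat_sys_semtimedop);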
-
Daniel Hellstrom authored
Signed-off-by: Daniel Hellstrom <daniel@gaisler.com>
Reported-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Linus Torvalds authored
Commit e66eed65 ("list: remove prefetching from regular list iterators") removed the include of prefetch.h from list.h, which uncovered several cases that had apparently relied on that rather obscure header file dependency. So this fixes things up a bit, using

    grep -L linux/prefetch.h $(git grep -l '[^a-z_]prefetchw*(' -- '*.[ch]')
    grep -L 'prefetchw*(' $(git grep -l 'linux/prefetch.h' -- '*.[ch]')

to guide us in finding files that either need <linux/prefetch.h> inclusion, or have it despite not needing it. There are more of them around (mostly network drivers), but this gets many core ones.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Thomas Gleixner authored
unsigned long is not 64 bit on 32 bit machines.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Steven Rostedt authored
A new utility function, core_kernel_data(), is used to determine whether a passed-in address is part of core kernel data or not. It may or may not return true for RO data, but it must work for RW data. Thus both _sdata and _edata must be defined and contiguous, without .init sections that may later be freed and replaced by volatile memory (memory that can be freed). This utility function is used to determine if data is safe from ever being freed. Thus it should return true for all RW global data that is neither in a module nor dynamically allocated, and false otherwise. Also change core_kernel_data() back to the more precise _sdata condition and document the function.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Acked-by: Hirokazu Takata <takata@linux-m32r.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: linux-m68k@lists.linux-m68k.org
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Helge Deller <deller@gmx.de>
Cc: James E.J. Bottomley <jejb@parisc-linux.org>
Link: http://lkml.kernel.org/r/1305855298.1465.19.camel@gandalf.stny.rr.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 arch/alpha/kernel/vmlinux.lds.S   |  1 +
 arch/m32r/kernel/vmlinux.lds.S    |  1 +
 arch/m68k/kernel/vmlinux-std.lds  |  2 ++
 arch/m68k/kernel/vmlinux-sun3.lds |  1 +
 arch/mips/kernel/vmlinux.lds.S    |  1 +
 arch/parisc/kernel/vmlinux.lds.S  |  3 +++
 kernel/extable.c                  | 12 +++++++++++-
 7 files changed, 20 insertions(+), 1 deletion(-)
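The helper reduces to a bounds check against the linker-provided section markers; a close sketch of the kernel/extable.c function as described:

    extern char _sdata[], _edata[];

    /* True iff addr points into core kernel RW data, i.e. memory that is
     * never freed (unlike .init sections or module allocations). */
    int core_kernel_data(unsigned long addr)
    {
            if (addr >= (unsigned long)_sdata &&
                addr < (unsigned long)_edata)
                    return 1;
            return 0;
    }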
-
- May 19, 2011
-
-
Ingo Molnar authored
Some architectures such as Alpha do not define _sdata but _data:

    kernel/built-in.o: In function `core_kernel_data':
    kernel/extable.c:77: undefined reference to `_sdata'

So expand the scope of the data range to the text addresses too; this might be more correct anyway, because this way we can cover read-only variables as well.

Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/n/tip-i878c8a0e0g0ep4v7i6vxnhz@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Paul E. McKenney authored
This reverts commit e59fb312. The reversion was due to (extreme) boot-time slowdowns seen on SPARC by Yinghai Lu and on x86 by Ingo Molnar. This is a non-trivial reversion due to intervening commits.

Conflicts:
    Documentation/RCU/trace.txt
    kernel/rcutree.c

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Thomas Gleixner authored
Some ARM SoCs have clock event devices which have their frequency modified due to frequency scaling. Provide an interface which allows reconfiguring an active device; after reconfiguration, the currently pending event is reprogrammed.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: LAK <linux-arm-kernel@lists.infradead.org>
Cc: John Stultz <john.stultz@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Ingo Molnar <mingo@elte.hu>
Link: http://lkml.kernel.org/r/%3C20110518210136.437459958%40linutronix.de%3E
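A hedged usage sketch: a SoC timer driver hooks its frequency-scaling notification into the new call (the surrounding wiring is invented; only clockevents_update_freq() comes from this change):

    #include <linux/clockchips.h>

    /* Hypothetical hook, called when cpufreq rescales the timer input clock. */
    static void my_soc_timer_rate_changed(struct clock_event_device *evt,
                                          u32 new_freq_hz)
    {
            /* Recomputes mult/shift for the new frequency and, if the
             * device is active, reprograms the pending event. */
            clockevents_update_freq(evt, new_freq_hz);
    }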
-
Thomas Gleixner authored
All clockevent drivers have the same open-coded initialization functions. Provide an interface which does all the necessary initialization in the core code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: John Stultz <john.stultz@linaro.org>
Reviewed-by: Ingo Molnar <mingo@elte.hu>
Link: http://lkml.kernel.org/r/%3C20110518210136.331975870%40linutronix.de%3E
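A hedged sketch of the driver-side simplification with a made-up device: the core helper derives mult/shift from the frequency and delta limits, then registers the device in one call:

    #include <linux/clockchips.h>

    static int my_timer_set_next_event(unsigned long delta,
                                       struct clock_event_device *evt)
    {
            /* Program the hardware comparator; hypothetical device. */
            return 0;
    }

    static struct clock_event_device my_evt = {
            .name           = "my-timer",
            .features       = CLOCK_EVT_FEAT_ONESHOT,
            .set_next_event = my_timer_set_next_event,
    };

    static void __init my_timer_init(void)
    {
            /* 32.768 kHz input clock; min/max delta given in ticks. */
            clockevents_config_and_register(&my_evt, 32768, 2, 0xffffffff);
    }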
-
Thomas Gleixner authored
Slow clocksources can sleep for far longer than 5 seconds, and even fast ones can easily cope with 600 seconds while still maintaining proper accuracy.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: John Stultz <john.stultz@linaro.org>
Reviewed-by: Ingo Molnar <mingo@elte.hu>
Link: http://lkml.kernel.org/r/%3C20110518210136.109811585%40linutronix.de%3E
-
Jonathan Cameron authored
Signed-off-by: Jonathan Cameron <jic23@cam.ac.uk>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Alessio Igor Bogani authored
The function is_exported(), with its helper function lookup_symbol(), is used to verify whether a provided symbol is effectively exported by the kernel or by a module. Now that both symbol tables are sorted, we can replace the linear search with a binary search, which provides a considerable speed-up. This work was supported by a hardware donation from the CE Linux Forum.

Signed-off-by: Alessio Igor Bogani <abogani@kernel.org>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
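A sketch close to the actual kernel/module.c change: with each symbol table sorted by name, lookup becomes a bsearch() over the range:

    #include <linux/bsearch.h>
    #include <linux/module.h>       /* struct kernel_symbol */
    #include <linux/string.h>

    /* Key is the symbol name; elements are name-sorted kernel_symbols. */
    static int cmp_name(const void *va, const void *vb)
    {
            const char *a = va;
            const struct kernel_symbol *b = vb;

            return strcmp(a, b->name);
    }

    static const struct kernel_symbol *
    lookup_symbol(const char *name,
                  const struct kernel_symbol *start,
                  const struct kernel_symbol *stop)
    {
            return bsearch(name, start, stop - start,
                           sizeof(struct kernel_symbol), cmp_name);
    }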
-
Alessio Igor Bogani authored
Takes advantage of the order and locates symbols using binary search. This work was supported by a hardware donation from the CE Linux Forum.

Signed-off-by: Alessio Igor Bogani <abogani@kernel.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Tested-by: Dirk Behme <dirk.behme@googlemail.com>
-
Rusty Russell authored
Instead of having a callback function for each symbol in the kernel, have a callback for each array of symbols. This eases the logic when we move to sorted symbols and binary search.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Alessio Igor Bogani <abogani@kernel.org>
-
Jan Glauber authored
Split the unprotect function into a function per section to make the code more readable and add the missing static declaration.

Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Jan Glauber authored
While debugging I stumbled over two problems in the code that protects module pages.

The first issue is that disabling the protection before freeing init sections or unloading a module is not symmetric with its enablement. For instance, if pages are set to RO, the page range from module_core to module_core + core_ro_size is protected. If a module is unloaded, the page range from module_core to module_core + core_size is set back to RW, so pages that were never set to RO are also changed to RW. This is not critical, but IMHO it should be symmetric.

The second issue is that while set_memory_rw and set_memory_ro are used for the RO/RW changes, only set_memory_nx is involved for NX/X. One would expect the inverse function to be called when the NX protection should be removed, which is not the case here, unless I'm missing something.

Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
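A sketch of the symmetric teardown the patch argues for; helper and field names follow the kernel/module.c layout of this series and should be read as assumptions:

    /* Undo each protection over exactly the range it was applied to. */
    static void unset_module_core_ro_nx(struct module *mod)
    {
            /* set_memory_x() is the true inverse of set_memory_nx(),
             * applied to the non-text range that was made NX. */
            set_page_attributes(mod->module_core + mod->core_text_size,
                                mod->module_core + mod->core_size,
                                set_memory_x);
            /* RW is restored only on the range that was made RO. */
            set_page_attributes(mod->module_core,
                                mod->module_core + mod->core_ro_size,
                                set_memory_rw);
    }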
-
Jan Glauber authored
Reset mod->init_ro_size to zero after the init part of a module is unloaded. Otherwise we need to check if module->init is NULL in the unprotect functions in the next patch.

Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Daniel J Blueman authored
Fix function prototype to be ANSI-C compliant, consistent with other function prototypes, addressing a sparse warning.

Signed-off-by: Daniel J Blueman <daniel.blueman@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Dmitry Torokhov authored
On m68k, natural alignment is a 2-byte boundary, but we are trying to align structures in the __modver section on a sizeof(void *) boundary. This causes trouble when we try to access elements of this section in array-like fashion to create "version" attributes for built-in modules. Moreover, as DaveM said, we can't reliably put structures into independent objects, put them into a special section, and then expect array access over them (via the section boundaries) after linking the objects together to just "work" due to variable alignment choices in different situations. The only solution that seems to work reliably is to make an array of plain pointers to the objects in question and put those pointers in the special section.

Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Dmitry Torokhov <dtor@vmware.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
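A simplified sketch of the pattern the commit describes: emit a pointer into the special section rather than the structure itself, so array-style walking of the section never depends on per-object alignment (the real struct carries more fields):

    struct module_version_attribute {
            const char *module_name;
            const char *version;
    };

    #define MODULE_VERSION(ver)                                           \
            static struct module_version_attribute ___modver_attr = {     \
                    KBUILD_MODNAME, ver                                   \
            };                                                            \
            static const struct module_version_attribute *__modver_attr  \
            __attribute__((section("__modver"), used)) = &___modver_attr

    /* The section can then be walked as a plain pointer array:
     * for (p = __start___modver; p < __stop___modver; p++) use(*p); */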
-
- May 18, 2011
-
-
Steven Rostedt authored
Add some basic sanity tests for multiple users of the function tracer at startup.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Steven Rostedt authored
Since users of the function tracer can now pick and choose which functions they want to trace, agnostically from other users of the function tracer, we need to pass the ops struct to the ftrace_set_filter() functions. The functions ftrace_set_global_filter() and ftrace_set_global_notrace() are added to keep the old filter functions, which are used to modify the generic function tracers.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Steven Rostedt authored
Now that functions may be selected individually, it only makes sense that we should allow dynamically allocated trace structures to be traced. This will allow perf to allocate an ftrace_ops structure at runtime and use it to pick and choose which functions that structure will trace. Note, a dynamically allocated ftrace_ops will always be called indirectly instead of being called directly from the mcount in entry.S. This is because there's no safe way to prevent mcount from being preempted before calling the function, unless we modify every entry.S to do so (not likely). Thus, dynamically allocated functions will now be called by the ftrace_ops_list_func() that loops through the ops that are allocated if more than one op is allocated at a time. This loop is protected with a preempt_disable. To determine whether an ftrace_ops structure is dynamically allocated, a new utility function, core_kernel_data(), was added to kernel/extable.c; it returns 1 if the address is between _sdata and _edata.

Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
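A simplified rendering of the dispatch described above, following kernel/trace/ftrace.c of this series (the list head and end marker are kernel internals shown for context):

    /* Called from the mcount stub whenever more than one ftrace_ops is
     * registered or an ops lives in dynamically allocated memory. */
    static void ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip)
    {
            struct ftrace_ops *op;

            /* Preemption must stay off while walking the ops list. */
            preempt_disable_notrace();
            op = rcu_dereference_raw(ftrace_ops_list);
            while (op != &ftrace_list_end) {
                    op->func(ip, parent_ip);
                    op = rcu_dereference_raw(op->next);
            }
            preempt_enable_notrace();
    }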
-
Steven Rostedt authored
ftrace_ops that are registered to trace functions can now be agnostic to each other in respect to what functions they trace. Each ops has its own hash of the functions it wants to trace and a hash of the functions it does not want to trace. An empty hash of functions to trace denotes that all functions should be traced except those in the notrace hash.

Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
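The filtering rule reduces to a predicate; a simplified sketch of ftrace_ops_test() from kernel/trace/ftrace.c (the hash type and lookup helper are internals of this series):

    /* Returns 1 if this ops wants to trace ip: an empty filter hash means
     * "trace everything", and a hit in the notrace hash always wins. */
    static int ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip)
    {
            struct ftrace_hash *filter_hash = ops->filter_hash;
            struct ftrace_hash *notrace_hash = ops->notrace_hash;

            if ((!filter_hash->count || ftrace_lookup_ip(filter_hash, ip)) &&
                !ftrace_lookup_ip(notrace_hash, ip))
                    return 1;

            return 0;
    }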
-
Steven Rostedt authored
When a hash is modified and might be in use, we need to perform a schedule RCU operation on it, as the hashes will soon be used directly in the function tracer callback.

Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Steven Rostedt authored
This is a step towards each ops structure defining its own set of functions to trace. As the current code with pids and such is specific to the global_ops, it is restructured to be used with the global ops.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Steven Rostedt authored
In order to allow different ops to enable different functions, the ftrace_startup() and ftrace_shutdown() functions need the ops parameter passed to them.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Steven Rostedt authored
Add the enabled_functions file that is used to show all the functions that have been enabled for tracing, as well as their ref counts. This helps in seeing whether any function has been registered and which functions are being traced.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Steven Rostedt authored
Every function has its own record that stores the instruction pointer and flags for the function to be traced. There are only two flags: enabled and free. The enabled flag states that tracing for the function has been enabled (it is actively traced), and the free flag states that the record no longer points to a function and can be used by new functions (loaded modules). These flags are now moved to the top two bits of the flags field, and the remaining 30 bits are used as a ref counter. Every time a tracer registers functions to trace, those functions have their counters incremented. When tracing is enabled, to determine if a function should be traced, the counter is examined; if it is non-zero, the function is set to be traced. When an ftrace_ops is registered to trace functions, its hashes are examined. If the ftrace_ops filter_hash count is zero, then all functions are set to be traced; otherwise only the functions in the hash are to be traced. The exception to this is if a function is also in the ftrace_ops notrace_hash: then that function's counter is not incremented for this ftrace_ops.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
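The described layout, expressed as the record flag constants (values follow the text above: two state bits at the top of a 32-bit flags word, 30 bits of ref counter below them):

    #define FTRACE_FL_ENABLED  (1UL << 30)       /* function is actively traced */
    #define FTRACE_FL_FREE     (1UL << 31)       /* record is free for reuse */
    #define FTRACE_FL_MASK     (0x3UL << 30)     /* the two state bits */
    #define FTRACE_REF_MAX     ((1UL << 30) - 1) /* 30-bit ref counter */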
-
Steven Rostedt authored
When filtering, allocate a hash to insert the function records into. After the filtering is complete, assign it to the ftrace_ops structure. This allows the ftrace_ops structure to have a much smaller array of hash buckets instead of wasting a lot of memory. A read-only empty_hash is created to be the minimum size that any ftrace_ops can point to. Creating a new hash takes the following steps:

 o Allocate a default hash.
 o Walk the function records, assigning the filtered records to the hash.
 o Allocate a new hash with the appropriate number of buckets.
 o Move the entries from the default hash to the new hash.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Steven Rostedt authored
Combine the filter and notrace hashes to be accessed by a single entity, the global_ops. The global_ops is a ftrace_ops structure that is passed to different functions that can read or modify the filtering of the function tracer. The ftrace_ops structure was modified to hold a filter and notrace hashes so that later patches may allow each ftrace_ops to have its own set of rules to what functions may be filtered.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Steven Rostedt authored
When multiple users are allowed to have their own set of functions to trace, having the FTRACE_FL_FILTER flag will not be enough to handle the accounting of those users. Each user will need their own set of functions. Replace the FTRACE_FL_FILTER with a filter_hash instead. This is temporary until the rest of the function filtering accounting gets in.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Steven Rostedt authored
To prepare for the accounting system that will allow multiple users of the function tracer, having FTRACE_FL_NOTRACE as a flag in the dyn_trace record does not make sense. All ftrace_ops will soon have a hash of functions they should trace and not trace. Making a global hash of functions not to trace makes the transition easier.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Jonathan Cameron authored
Export handle_simple_irq, irq_modify_status, irq_alloc_descs, irq_free_descs and generic_handle_irq to allow their usage in modules. The first user is IIO, which wants to be built modular but needs to be able to create irq chips, allocate and configure interrupt descriptors, and handle demultiplexing interrupts.

[ tglx: Moved the uninlining of generic_handle_irq to a separate patch ]

Signed-off-by: Jonathan Cameron <jic23@cam.ac.uk>
Link: http://lkml.kernel.org/r/%3C1305711544-505-1-git-send-email-jic23%40cam.ac.uk%3E
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Thomas Gleixner authored
generic_handle_irq() is missing a NULL pointer check for the result of irq_to_desc(). This was not a big problem, but we want to expose it to drivers, so we'd better have sanity checks in place. Add a return value as well, which indicates whether the irq number was valid and the handler was invoked. Based on the pure code move from Jonathan Cameron.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jonathan Cameron <jic23@cam.ac.uk>
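The resulting function, close to the actual kernel/irq change described above:

    int generic_handle_irq(unsigned int irq)
    {
            struct irq_desc *desc = irq_to_desc(irq);

            /* Reject invalid irq numbers instead of dereferencing NULL. */
            if (!desc)
                    return -EINVAL;
            generic_handle_irq_desc(irq, desc);
            return 0;
    }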
-
Thomas Gleixner authored
kernel/irq/ is only built when CONFIG_GENERIC_HARDIRQS=y. So making code inside of kernel/irq/ conditional on CONFIG_GENERIC_HARDIRQS is pointless.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- May 17, 2011
-
-
Rafael J. Wysocki authored
If device drivers allocate substantial amounts of memory (above 1 MB) in their hibernate .freeze() callbacks (or in their legacy suspend callbacks during hibernation), the subsequent creation of the hibernate image may fail due to lack of memory. This is the case because the drivers' .freeze() callbacks are executed after the hibernate memory preallocation has been carried out, and the preallocated amount of memory may be too small to cover the new driver allocations. Unfortunately, the drivers' .prepare() callbacks are also executed after the hibernate memory preallocation has completed, so they are not suitable for allocating additional memory either. Thus the only way a driver can safely allocate memory during hibernation is to use a hibernate/suspend notifier. However, the notifiers are called before the freezing of user space, and the drivers wanting to use them for allocating additional memory may not know how much memory needs to be allocated at that point.

To let device drivers overcome this difficulty, rework the hibernation sequence so that the memory preallocation is carried out after the drivers' .prepare() callbacks have been executed; that way the .prepare() callbacks can be used for allocating additional memory to be used by the drivers' .freeze() callbacks. Update documentation to match the new behavior of the code.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
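A hedged sketch of the pattern this reordering enables; the device, buffer size, and save routine below are invented for illustration:

    #include <linux/device.h>
    #include <linux/errno.h>
    #include <linux/vmalloc.h>

    extern int my_save_device_state(void *buf);     /* hypothetical */

    static void *my_snapshot_buf;

    /* .prepare() now runs before the hibernate image preallocation, so
     * this allocation is accounted for when the image size is computed. */
    static int my_prepare(struct device *dev)
    {
            my_snapshot_buf = vmalloc(2 * 1024 * 1024);
            return my_snapshot_buf ? 0 : -ENOMEM;
    }

    /* .freeze() can rely on the buffer without risking image creation
     * failure from a late, unaccounted allocation. */
    static int my_freeze(struct device *dev)
    {
            return my_save_device_state(my_snapshot_buf);
    }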
-