1. 26 Jun, 2006 1 commit
  2. 09 Apr, 2006 1 commit
  3. 25 Mar, 2006 1 commit
    • [PATCH] x86_64: actively synchronize vmalloc area when registering certain callbacks · 8c914cb7
      Jan Beulich authored
      While the modular aspect of the respective i386 patch doesn't apply to
      x86-64 (as the top level page directory entry is shared between modules
      and the base kernel), handlers registered with register_die_notifier()
      are still under similar constraints for touching ioremap()ed or
      vmalloc()ed memory. The likelihood of this problem becoming visible is
      of course significantly lower, as the assigned virtual addresses would
      have to cross a 2**39 byte boundary. This is because the callback gets
      invoked (a) in the page fault path, before the top-level page table
      propagation is carried out (hence a fault to propagate the top-level
      page table entry/entries mapping the module's code/data would nest
      infinitely), and (b) in the NMI path, where nested faults must
      absolutely not happen, since otherwise the IRET from the nested fault
      re-enables NMIs, potentially resulting in nested NMI occurrences.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  4. 23 Mar, 2006 1 commit
    • [PATCH] more for_each_cpu() conversions · 394e3902
      Andrew Morton authored
      When we stop allocating percpu memory for not-possible CPUs we must not touch
      the percpu data for not-possible CPUs at all.  The correct way of doing this
      is to test cpu_possible() or to use for_each_cpu().
      This patch is a kernel-wide sweep of all instances of NR_CPUS.  I found very
      few instances of this bug, if any.  But the patch converts lots of open-coded
      tests to use the preferred helper macros.
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: David Howells <dhowells@redhat.com>
      Acked-by: Kyle McMartin <kyle@parisc-linux.org>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Andi Kleen <ak@muc.de>
      Cc: Christian Zankel <chris@zankel.net>
      Cc: Philippe Elie <phil.el@wanadoo.fr>
      Cc: Nathan Scott <nathans@sgi.com>
      Cc: Jens Axboe <axboe@suse.de>
      Cc: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  5. 17 Feb, 2006 1 commit
  6. 04 Feb, 2006 1 commit
  7. 11 Jan, 2006 2 commits
  8. 13 Sep, 2005 1 commit
  9. 12 Sep, 2005 2 commits
  10. 07 Sep, 2005 2 commits
  11. 05 Sep, 2005 1 commit
  12. 25 Jun, 2005 1 commit
  13. 17 May, 2005 2 commits
    • [PATCH] x86_64: Collected NMI watchdog fixes. · 75152114
      Andi Kleen authored
      Collected NMI watchdog fixes.
      - Fix call of check_nmi_watchdog
      - Remove the earlier move of check_nmi_watchdog to a later point.  It does
        not fully fix the race it was supposed to fix.
      - Remove unused P6 definitions
      - Add support for performance counter based watchdog on P4 systems.
        This allows running it only once per second, which saves some CPU time.
        Previously it ran at 1000 Hz, which was too much.
        The code is ported from i386.
        Make this the default on Intel systems.
      - Use check_nmi_watchdog with local APIC based NMI
      - Fix race in touch_nmi_watchdog
      - Fix bug that caused incorrect performance counters to be programmed in a
        few cases on K8.
      - Remove useless check for local APIC
      - Use local_t and per_cpu variables for per CPU data.
      - Keep other CPUs busy during check_nmi_watchdog to make sure they really
        tick when in lapic mode.
      - Only check CPUs that are actually online.
      - Various other fixes.
      - Fix fallback path when MSRs are unimplemented
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] x86_64: Reduce NMI watchdog stack usage · ac6b931c
      Andi Kleen authored
      NR_CPUS can be quite big these days.  kmalloc() the per-CPU array instead
      of putting it on the stack.
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  14. 01 May, 2005 1 commit
    • [PATCH] check nmi watchdog is broken · 67701ae9
      Jack F Vogel authored
      A bug against an xSeries system showed up recently noting that the
      check_nmi_watchdog() test was failing.
      I have been investigating it and discovered that in both i386 and x86_64
      the recent change to the routine to use the cpu_callin_map has uncovered a
      problem.  Prior to that change, on an SMP box, the test was trivially
      passing because all CPUs were found to not yet be online; but now, with
      the callin_map, they are discovered, and the test goes on to check the
      counters, which have not yet begun to increment, so it announces a CPU is
      stuck and bails out.
      On all the systems I have access to test, the announcement of failure is
      also bogus: by the time you can log in and check /proc/interrupts, the
      NMI count is happily incrementing on all CPUs.  It's just that the test is
      being done too early.
      I have tried moving the call to the test around a bit, and it was always
      too early.  I finally hit on this proposed solution: delaying the routine
      via a late_initcall(), which seems like the right solution to me.
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  15. 16 Apr, 2005 4 commits