1. 27 Jan, 2013 1 commit
    • cputime: Generic on-demand virtual cputime accounting · abf917cd
      Frederic Weisbecker authored
      If we want to stop the tick beyond the idle case, we need to be
      able to account the cputime without using the tick.
      Virtual cputime accounting solves that problem by
      hooking into the kernel/user boundaries.
      However, implementing CONFIG_VIRT_CPU_ACCOUNTING requires
      low-level arch hooks and involves more overhead.  But we already
      have a generic context tracking subsystem that is required
      for RCU by archs which plan to shut down the tick
      outside idle.
      This patch implements a generic virtual based cputime
      accounting that relies on these generic kernel/user hooks.
      There are some upsides to doing this:
      - This requires no arch code to implement CONFIG_VIRT_CPU_ACCOUNTING
      if context tracking is already built (already necessary for RCU in full
      tickless mode).
      - We can rely on the generic context tracking subsystem to dynamically
      (de)activate the hooks, so that we can switch anytime between virtual
      and tick based accounting. This way we don't have the overhead
      of the virtual accounting when the tick is running periodically.
      And one downside:
      - There is probably more overhead than a native virtual based cputime
      accounting. But this relies on hooks that are already set anyway.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
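      The hook-based scheme described above can be sketched as a small
      userspace model: when virtual accounting is enabled, the kernel/user
      boundary hooks snapshot a clock and charge the elapsed delta to the
      context being left; when it is disabled, the hooks are no-ops and
      tick-based accounting applies.  All names here (`vtime_enabled`,
      `ctx_user_enter`, ...) are illustrative stand-ins, not the kernel's
      actual API.

      ```c
      #include <assert.h>
      #include <stdbool.h>

      static bool vtime_enabled;           /* toggled by the context-tracking core */
      static unsigned long long now;       /* stand-in for a cycle counter         */
      static unsigned long long last_snap;
      static unsigned long long user_time, sys_time;

      static void ctx_user_enter(void)     /* kernel -> user boundary */
      {
          if (!vtime_enabled)
              return;                      /* tick mode: hook is a no-op */
          sys_time += now - last_snap;     /* time since last snapshot was kernel time */
          last_snap = now;
      }

      static void ctx_user_exit(void)      /* user -> kernel boundary */
      {
          if (!vtime_enabled)
              return;
          user_time += now - last_snap;    /* elapsed delta was user time */
          last_snap = now;
      }

      int main(void)
      {
          vtime_enabled = true;
          last_snap = now = 100;

          now = 130; ctx_user_enter();     /* 30 units spent in the kernel */
          now = 180; ctx_user_exit();      /* 50 units spent in user space */

          assert(sys_time == 30);
          assert(user_time == 50);
          return 0;
      }
      ```

      The point of the toggle is that when the tick runs periodically the
      hooks cost almost nothing, so the virtual-accounting overhead is only
      paid when the tick is actually stopped.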
  2. 01 Oct, 2012 1 commit
    • sanitize tsk_is_polling() · 16a80163
      Al Viro authored
      Make the default just return 0.  The current default (checking
      TIF_POLLING_NRFLAG) is moved to the architectures that need it;
      ones that don't do polling in their idle threads don't need
      to define TIF_POLLING_NRFLAG at all.
      ia64 defined both TS_POLLING (used by its tsk_is_polling())
      and TIF_POLLING_NRFLAG (not used at all).  Killed the latter...
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
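      A minimal sketch of what this change means (the flag bit value and the
      `ARCH_HAS_POLLING_IDLE` guard are illustrative, not the kernel's real
      symbols): archs that poll in their idle loop keep the
      TIF_POLLING_NRFLAG test, everyone else now inherits a default that
      simply returns 0.

      ```c
      #include <assert.h>

      #define TIF_POLLING_NRFLAG 16     /* arbitrary bit for illustration */

      struct task {
          unsigned long ti_flags;
      };

      #ifdef ARCH_HAS_POLLING_IDLE
      /* Archs with a polling idle loop keep the real test. */
      static int tsk_is_polling(struct task *t)
      {
          return t->ti_flags & (1UL << TIF_POLLING_NRFLAG);
      }
      #else
      /* New default for everyone else: never polling. */
      static int tsk_is_polling(struct task *t)
      {
          (void)t;
          return 0;
      }
      #endif

      int main(void)
      {
          struct task t = { .ti_flags = 1UL << TIF_POLLING_NRFLAG };

          /* Built without ARCH_HAS_POLLING_IDLE, the default wins:
           * the stale flag bit is simply never consulted. */
          assert(tsk_is_polling(&t) == 0);
          return 0;
      }
      ```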
  3. 01 Jun, 2012 2 commits
  4. 08 May, 2012 1 commit
  5. 21 Nov, 2011 1 commit
  6. 23 Mar, 2011 1 commit
  7. 22 Mar, 2011 2 commits
  8. 14 May, 2010 1 commit
  9. 10 Jul, 2009 1 commit
  10. 06 Oct, 2008 1 commit
  11. 01 Aug, 2008 1 commit
    • [IA64] Move include/asm-ia64 to arch/ia64/include/asm · 7f30491c
      Tony Luck authored
      After moving the include files, there were a few clean-ups:
      1) Some files used #include <asm-ia64/xyz.h>, changed to <asm/xyz.h>
      2) Some comments alerted maintainers to look at various header files to
      make matching updates if certain code were to be changed. Updated these
      comments to use the new include paths.
      3) Some header files mentioned their own names in initial comments. Just
      deleted these self references.
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  12. 25 Jul, 2008 1 commit
  13. 01 May, 2008 1 commit
  14. 30 Apr, 2008 1 commit
  15. 20 Feb, 2008 1 commit
    • [IA64] VIRT_CPU_ACCOUNTING (accurate cpu time accounting) · b64f34cd
      Hidetoshi Seto authored
      This patch implements VIRT_CPU_ACCOUNTING for ia64,
      which enables us to use more accurate cpu time accounting.
      VIRT_CPU_ACCOUNTING is a kernel config option that the s390
      and powerpc arches already have.  Turning this config on changes
      the mechanism of cpu time accounting from a tick-sampling
      based one to a state-transition based one.
      State-transition based accounting is done by checking the time
      (the cycle counter in the processor) at every state-transition point,
      such as entry/exit of the kernel, interrupts, softirqs etc.
      The difference between two points is the actual time consumed
      in that state.  There is no doubt that this value is
      more accurate than that of tick-sampling based accounting.
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
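      The accuracy difference is easy to demonstrate with a toy timeline
      (all numbers hypothetical).  A task alternates between 7 time units in
      user mode and 3 in kernel mode; tick sampling with a 10-unit tick
      charges each whole tick to whichever state the tick happens to land
      in, while transition-based accounting reads the counter at every
      boundary and charges exact deltas.

      ```c
      #include <assert.h>

      int main(void)
      {
          unsigned tick_user = 0, tick_sys = 0;   /* tick-sampling buckets */
          unsigned virt_user = 0, virt_sys = 0;   /* transition buckets    */

          for (unsigned t = 0; t < 100; t++) {
              unsigned in_user = (t % 10) < 7;    /* user 0..6, kernel 7..9 */

              if (t % 10 == 0) {                  /* tick fires at t = 0, 10, ... */
                  if (in_user)
                      tick_user += 10;            /* whole tick charged to user */
                  else
                      tick_sys += 10;
              }
              if (in_user)                        /* exact per-unit deltas */
                  virt_user++;
              else
                  virt_sys++;
          }

          assert(virt_user == 70 && virt_sys == 30);  /* the true 70/30 split */
          assert(tick_user == 100 && tick_sys == 0);  /* every tick lands in user
                                                         mode: kernel time vanishes */
          return 0;
      }
      ```

      Here the tick happens to be in phase with the workload, so
      tick-sampling misattributes all kernel time to user time; the
      state-transition scheme cannot be fooled this way.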
  16. 08 Feb, 2008 2 commits
  17. 31 Jul, 2007 1 commit
  18. 09 May, 2007 1 commit
  19. 08 May, 2007 1 commit
  20. 05 Feb, 2007 1 commit
  21. 13 Dec, 2006 1 commit
    • [PATCH] PM: Fix SMP races in the freezer · 8a102eed
      Rafael J. Wysocki authored
      Currently, to tell a task that it should go to the refrigerator, we set the
      PF_FREEZE flag for it and send a fake signal to it.  Unfortunately there
      are two SMP-related problems with this approach.  First, a task running on
      another CPU may be updating its flags while the freezer attempts to set
      PF_FREEZE for it and this may leave the task's flags in an inconsistent
      state.  Second, there is a potential race between freeze_process() and
      refrigerator() in which freeze_process() running on one CPU is reading a
      task's PF_FREEZE flag while refrigerator() running on another CPU has just
      set PF_FROZEN for the same task and attempts to reset PF_FREEZE for it.  If
      the refrigerator wins the race, freeze_process() will state that PF_FREEZE
      hasn't been set for the task and will set it unnecessarily, so the task
      will go to the refrigerator once again after it's been thawed.
      To solve the first of these problems we need to stop using PF_FREEZE to
      tell tasks that they should go to the refrigerator.  Instead, we can
      introduce a special TIF_*** flag and use it for this purpose, since it
      is allowed to change the other tasks' TIF_*** flags and there are
      special calls for it.
      To avoid the freeze_process()-refrigerator() race we can make
      freeze_process() always check the task's PF_FROZEN flag after it has
      read its "freeze" flag.  We should also make sure that refrigerator()
      always resets the task's "freeze" flag after it has set PF_FROZEN for it.
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Acked-by: Pavel Machek <pavel@ucw.cz>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Andi Kleen <ak@muc.de>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
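      The fixed protocol can be modelled single-threaded to show the ordering
      that closes the race (field names are illustrative stand-ins for the
      TIF-style "freeze" flag and PF_FROZEN): refrigerator() sets FROZEN
      *before* clearing the request, and freeze_process() checks FROZEN after
      looking at the request, so a task that has already frozen is never
      flagged a second time.

      ```c
      #include <assert.h>
      #include <stdbool.h>

      struct task {
          bool freeze;     /* stands in for the new TIF-style freeze request */
          bool frozen;     /* stands in for PF_FROZEN                        */
      };

      static void refrigerator(struct task *t)
      {
          t->frozen = true;    /* order matters: mark FROZEN first ...  */
          t->freeze = false;   /* ... then acknowledge the request      */
      }

      /* Returns true only when a new freeze request was issued. */
      static bool freeze_process(struct task *t)
      {
          if (t->freeze)
              return false;    /* request already pending               */
          if (t->frozen)
              return false;    /* already frozen: do NOT re-flag it     */
          t->freeze = true;
          return true;
      }

      int main(void)
      {
          struct task t = { 0 };

          assert(freeze_process(&t));   /* first request goes through       */
          refrigerator(&t);             /* task freezes, clears the request */
          assert(!freeze_process(&t));  /* the old bug: no second trip to
                                           the refrigerator after a thaw    */
          assert(t.frozen && !t.freeze);
          return 0;
      }
      ```

      With the original ordering, freeze_process() could observe the request
      flag already cleared and re-set it, which is exactly the spurious
      second freeze the commit describes.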
  22. 03 Jul, 2006 1 commit
  23. 26 Jun, 2006 1 commit
    • [PATCH] i386/x86-64/ia64: Move polling flag into thread_info_status · 495ab9c0
      Andi Kleen authored
      During some profiling I noticed that default_idle causes a lot of
      memory traffic. I think that is caused by the atomic operations
      to clear/set the polling flag in thread_info. There is actually
      no reason to make this atomic - only the idle thread does it
      to itself, other CPUs only read it. So I moved it into ti->status.
      Converted i386/x86-64/ia64 for now because that was the easiest
      way to fix ACPI, which also manipulates these flags in its idle
      function.
      Cc: Nick Piggin <npiggin@novell.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Len Brown <len.brown@intel.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
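      The shape of the change can be sketched as follows (the struct layout
      is simplified; only the names TS_POLLING and ti->status come from the
      commit).  Because the polling bit now lives in a word written only by
      the owning task, a plain read-modify-write suffices; other CPUs merely
      read it, so no locked bus cycle is needed in the idle loop.

      ```c
      #include <assert.h>

      #define TS_POLLING 1u

      struct thread_info {
          unsigned long flags;    /* shared word: remote writers exist, so
                                     changing it needs atomic bit ops      */
          unsigned int  status;   /* written only by the task itself       */
      };

      static void enter_idle_poll(struct thread_info *ti)
      {
          ti->status |= TS_POLLING;     /* plain store, no lock prefix */
      }

      static void leave_idle_poll(struct thread_info *ti)
      {
          ti->status &= ~TS_POLLING;
      }

      static int task_is_polling(const struct thread_info *ti)
      {
          return ti->status & TS_POLLING;   /* remote CPUs only read */
      }

      int main(void)
      {
          struct thread_info ti = { 0 };

          enter_idle_poll(&ti);
          assert(task_is_polling(&ti));
          leave_idle_poll(&ti);
          assert(!task_is_polling(&ti));
          return 0;
      }
      ```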
  24. 27 Apr, 2006 1 commit
    • [IA64] enable dumps to capture second page of kernel stack · 1df57c0c
      Cliff Wickman authored
      In SLES10 (2.6.16) crash dumping (in my experience, LKCD) is unable to
      capture the second page of the 2-page task/stack allocation.
      This is particularly troublesome for dump analysis, as the stack traceback
      cannot be done.
        (A similar convention is probably needed throughout the kernel to make
         kernel multi-page allocations detectable for dumping)
      Multi-page kernel allocations are represented by the single page
      structure associated with the first page of the allocation.  The page
      structures associated with the other pages are uninitialized.
      If the dumper is selecting only kernel pages it has no way to identify
      any but the first page of the allocation.
      The fix is to make the task/stack allocation a compound page.
      Signed-off-by: Cliff Wickman <cpw@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
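      A toy model of why the compound page helps the dumper (the struct
      fields here are heavily simplified, not the kernel's struct page): in a
      compound page every tail page carries a back-pointer to the head page,
      so a scanner that lands on any page of a multi-page allocation can
      recognise it as part of a kernel allocation, instead of skipping it as
      uninitialized.

      ```c
      #include <assert.h>
      #include <stddef.h>

      struct page {
          int head;                  /* 1 if head of a compound allocation */
          struct page *first_page;   /* tail -> head back-pointer          */
      };

      /* Mark a span of page structs as one compound allocation. */
      static void alloc_compound(struct page *pages, int npages)
      {
          pages[0].head = 1;
          pages[0].first_page = &pages[0];
          for (int i = 1; i < npages; i++) {
              pages[i].head = 0;
              pages[i].first_page = &pages[0];   /* identifies the whole span */
          }
      }

      int main(void)
      {
          struct page stack[2] = { { 0 } };      /* 2-page task/stack area */

          alloc_compound(stack, 2);

          /* A dumper landing on the second page can now find the head
           * and include the full allocation in the dump. */
          assert(stack[1].first_page == &stack[0]);
          assert(stack[1].first_page->head == 1);
          return 0;
      }
      ```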
  25. 26 Jan, 2006 2 commits
    • [IA64] hooks to wait for mmio writes to drain when migrating processes · e08e6c52
      Brent Casavant authored
      On SN2, MMIO writes which are issued from separate processors are not
      guaranteed to arrive in any particular order at the IO hardware.  When
      performing such writes from the kernel this is not a problem, as a
      kernel thread will not migrate to another CPU during execution, and
      mmiowb() calls can guarantee write ordering when control of the IO
      resource is allowed to move between threads.
      However, when MMIO writes can be performed from user space (e.g. DRM)
      there are no such guarantees and mechanisms, as the process may
      context-switch at any time, and may migrate to a different CPU as part
      of the switch.  For such programs/hardware to operate correctly, it is
      required that the MMIO writes from the old CPU be accepted by the IO
      hardware before subsequent writes from the new CPU can be issued.
      The following patch implements this behavior on SN2 by waiting for a
      Shub register to indicate that these writes have been accepted.  This
      is placed in the context switch-in path, and only performs the wait
      when the newly scheduled task changes CPUs.
      Signed-off-by: Prarit Bhargava <prarit@sgi.com>
      Signed-off-by: Brent Casavant <bcasavan@sgi.com>
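      The switch-in logic reduces to a small conditional (function and field
      names here are illustrative, not the SN2 code): the wait is performed
      only when the task is scheduled on a CPU different from the one it last
      ran on, since ordering is only at risk across a migration.

      ```c
      #include <assert.h>

      static int drains_performed;   /* counts how often we had to wait */

      static void wait_mmio_writes_drained(void)
      {
          /* The real code polls a Shub register here until the previous
           * CPU's outstanding MMIO writes are accepted by the IO hardware. */
          drains_performed++;
      }

      struct task {
          int last_cpu;
      };

      static void switch_in(struct task *t, int cpu)
      {
          if (t->last_cpu != cpu)            /* migrated: drain old CPU's writes */
              wait_mmio_writes_drained();
          t->last_cpu = cpu;
      }

      int main(void)
      {
          struct task t = { .last_cpu = 0 };

          switch_in(&t, 0);                  /* same CPU: no wait       */
          assert(drains_performed == 0);

          switch_in(&t, 2);                  /* migration: wait once    */
          assert(drains_performed == 1);

          switch_in(&t, 2);                  /* stayed put: still once  */
          assert(drains_performed == 1);
          return 0;
      }
      ```

      Keeping the wait off the common (non-migrating) path is what makes it
      cheap enough to live in the context-switch code.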
    • [IA64] Delete MCA/INIT sigdelayed code · b0a06623
      Keith Owens authored
      The only user of the MCA/INIT sigdelayed code (SGI's I/O probing) has
      moved from the kernel into SAL.  Delete the MCA/INIT sigdelayed code.
      Signed-off-by: Keith Owens <kaos@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  26. 13 Jan, 2006 1 commit
  27. 12 Jan, 2006 1 commit
  28. 13 Sep, 2005 1 commit
  29. 11 Sep, 2005 1 commit
  30. 09 Sep, 2005 1 commit
  31. 23 Jun, 2005 1 commit
    • [PATCH] streamline preempt_count type across archs · dcd497f9
      Jesper Juhl authored
      The preempt_count member of struct thread_info is currently either defined
      as int, unsigned int or __s32 depending on arch.  This patch makes the type
      of preempt_count an int on all archs.
      Having preempt_count be an unsigned type prevents the catching of
      preempt_count < 0 bugs, and using int on some archs and __s32 on others
      is not exactly "neat" - much nicer when it's just int all over.
      A previous version of this patch was already ACK'ed by Robert Love, and
      the only change in this version compared to the one he ACK'ed is that
      this one also makes sure the preempt_count member is consistently
      commented.
      Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
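      A two-line demonstration of why the signed type matters: with an
      unsigned count, one unbalanced decrement wraps to a huge value instead
      of going negative, so a "count < 0" sanity check can never fire.

      ```c
      #include <assert.h>

      int main(void)
      {
          unsigned int u = 0;
          int s = 0;

          u--;               /* unbalanced decrement wraps around   */
          s--;               /* signed count goes negative          */

          assert(u > 0);     /* unsigned: the bug hides as a huge value */
          assert(s < 0);     /* signed: the underflow is detectable     */
          return 0;
      }
      ```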
  32. 16 Apr, 2005 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      Let it rip!