1. 30 Jan, 2008 3 commits
  2. 19 Oct, 2007 1 commit
  3. 17 Oct, 2007 3 commits
    • x86: Create clflush() inline, remove hardcoded wbinvd · 6619a8fb
      H. Peter Anvin authored
      
      
      Create an inline function for clflush(), with the proper arguments,
      and use it instead of hard-coding the instruction.
      
      This also removes one instance of hard-coded wbinvd, based on a patch
      by Glauber de Oliveira Costa.
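      
      A minimal sketch of what such a helper might look like (illustrative,
      assuming the usual memory-operand form of the instruction, not
      necessarily the exact code added by this patch):
      
      	static inline void clflush(volatile void *p)
      	{
      		/* clflush takes a memory operand; "+m" ties the flush to
      		 * the addressed cache line instead of a fixed location */
      		asm volatile("clflush %0" : "+m" (*(volatile char *)p));
      	}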
      
      [ tglx: arch/x86 adaptation ]
      
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Glauber de Oliveira Costa <gcosta@redhat.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      6619a8fb
    • x86: mark read_crX() asm code as volatile · c1217a75
      Kirill Korotaev authored
      Some gcc versions (at least 4.1.1 from RHEL5 and 4.1.2 from Gentoo were
      checked) can generate incorrect code when read_crX()/write_crX() calls
      are mixed, because gcc reuses the cached result of an earlier read_crX().
      
      A small x86_64 test program compiled with -O2 demonstrates this
      (i686 behaves the same way).
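      
      Not the original test program, but a hedged illustration of the fix:
      without "volatile", gcc may treat the asm as a pure expression and
      reuse a cached result across a later write_crX().
      
      	static inline unsigned long read_cr3(void)
      	{
      		unsigned long val;
      		/* "volatile" stops gcc from caching/reusing the result */
      		asm volatile("mov %%cr3, %0" : "=r" (val));
      		return val;
      	}
      
      	static inline void write_cr3(unsigned long val)
      	{
      		asm volatile("mov %0, %%cr3" : : "r" (val) : "memory");
      	}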
      c1217a75
    • increase AT_VECTOR_SIZE to terminate saved_auxv properly · 4f9a58d7
      Olaf Hering authored
      
      
      include/asm-powerpc/elf.h has 6 entries in ARCH_DLINFO.  fs/binfmt_elf.c
      has 14 unconditional NEW_AUX_ENT entries and 2 conditional NEW_AUX_ENT
      entries.  So in the worst case, saved_auxv does not get an AT_NULL entry at
      the end.
      
      The saved_auxv array must be terminated with an AT_NULL entry.  Make the
      size of mm_struct->saved_auxv arch dependent, based on the number of
      ARCH_DLINFO entries.
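      
      A hedged sketch of the sizing idea (macro names and counts here are
      illustrative, taken from the numbers above rather than from the patch):
      
      	/* each NEW_AUX_ENT occupies two elf_addr_t slots (a_type, a_val),
      	 * so reserve room for the arch ARCH_DLINFO entries, the common
      	 * entries, and a terminating AT_NULL pair */
      	#define AT_VECTOR_SIZE_ARCH  6          /* e.g. powerpc ARCH_DLINFO */
      	#define AT_VECTOR_SIZE_BASE  (14 + 2)   /* unconditional + conditional */
      	#define AT_VECTOR_SIZE  (2 * (AT_VECTOR_SIZE_ARCH + AT_VECTOR_SIZE_BASE + 1))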
      Signed-off-by: Olaf Hering <olh@suse.de>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Jakub Jelinek <jakub@redhat.com>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4f9a58d7
  4. 12 Oct, 2007 2 commits
    • x86: optimise barriers · b6c7347f
      Nick Piggin authored
      According to latest memory ordering specification documents from Intel
      and AMD, both manufacturers are committed to in-order loads from
      cacheable memory for the x86 architecture.  Hence, smp_rmb() may be a
      simple barrier.
      
      Also according to those documents, and according to existing practice in
      Linux (eg.  spin_unlock doesn't enforce ordering), stores to cacheable
      memory are visible in program order too.  Special string stores are safe
      -- their constituent stores may be out of order, but they must complete
      in order WRT surrounding stores.  Nontemporal stores to WB memory can go
      out of order, and so they should be fenced explicitly to make them
      appear in-order WRT other stores.  Hence, smp_wmb() may be a simple
      barrier.
      
          http://developer.intel.com/products/processor/manuals/318147.pdf
          http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/24593.pdf
      
      In userspace microbenchmarks on a core2 system, fence instructions range
      anywhere from around 15 to 50 cycles, which may not be totally
      insignificant in performance critical paths (code size will go down
      too).
      
      However the primary motivation for this is to have the canonical barrier
      implementation for x86 architecture.
      
      smp_rmb on buggy pentium pros remains a locked op, which is apparently
      required.
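      
      A hedged sketch of the resulting choices (illustrative; the actual
      patch selects instructions via the kernel's alternatives machinery):
      
      	#define barrier()   asm volatile("" ::: "memory")
      
      	/* loads and cacheable stores are already ordered, so the SMP
      	 * barriers can collapse to compiler barriers on non-buggy CPUs;
      	 * buggy ppro still needs a locked op for smp_rmb() */
      	#define smp_rmb()   barrier()
      	#define smp_wmb()   barrier()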
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b6c7347f
    • x86: fix IO write barrier · 4071c718
      Nick Piggin authored
      
      
      wmb() on x86 must always include a barrier, because stores can go out of
      order in many cases when dealing with devices (eg. WC memory).
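      
      A hedged sketch of the distinction (illustrative; the real kernel picks
      the instruction via alternatives depending on SSE support):
      
      	/* wmb() must stay a real store fence for device/WC memory ... */
      	#define wmb()       asm volatile("sfence" ::: "memory")
      	/* ... even where smp_wmb() is only a compiler barrier */
      	#define smp_wmb()   asm volatile("" ::: "memory")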
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4071c718
  5. 11 Oct, 2007 1 commit
  6. 29 Sep, 2007 1 commit
    • i386: remove bogus comment about memory barrier · 4827bbb0
      Nick Piggin authored
      
      
      The comment being removed by this patch is incorrect and misleading.
      
      In the following situation:
      
      	1. load  ...
      	2. store 1 -> X
      	3. wmb
      	4. rmb
      	5. load  a <- Y
      	6. store ...
      
      4 will only ensure ordering of 1 with 5.
      3 will only ensure ordering of 2 with 6.
      
      Further, a CPU with strictly in-order stores will still only provide that
      2 and 6 are ordered (effectively, it is the same as a weakly ordered CPU
      with wmb after every store).
      
      In all cases, 5 may still be executed before 2 is visible to other CPUs!
      
      The additional piece of the puzzle that mb() provides is the store/load
      ordering, which fundamentally cannot be achieved with any combination of
      rmb()s and wmb()s.
      
      This can be an unexpected result if one expected any sort of global ordering
      guarantee from barriers (eg. that the barriers themselves are sequentially
      consistent with other types of barriers).  However, sfence or lfence barriers
      need only provide a partial ordering of memory operations -- consider
      that wmb may be implemented as nothing more than inserting a special barrier
      entry in the store queue, or, in the case of x86, it can be a noop as the store
      queue is in order.  And an rmb may be implemented as a directive to prevent
      subsequent loads only so long as there are no previous outstanding loads (while
      there could be stores still in store queues).
      
      I can actually see the occasional load/store being reordered around lfence on
      my core2.  That doesn't prove my above assertions, but it does show the comment
      is wrong (unless my program is wrong -- I can send it out on request).
      
      So:
         mb() and smp_mb() always have and always will require a full mfence
         or lock prefixed instruction on x86.  And we should remove this comment.
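      
      A hedged sketch of that conclusion (illustrative forms):
      
      	/* store/load ordering needs a full fence (or a locked op on CPUs
      	 * without mfence); no combination of lfence/sfence provides it */
      	#define mb()      asm volatile("mfence" ::: "memory")
      	#define smp_mb()  mb()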
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Paul McKenney <paulmck@us.ibm.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4827bbb0
  7. 19 Jul, 2007 1 commit
  8. 08 May, 2007 3 commits
  9. 02 May, 2007 1 commit
    • [PATCH] i386: rationalize paravirt wrappers · 90a0a06a
      Rusty Russell authored
      
      
      paravirt.c used to implement native versions of all low-level
      functions.  Far cleaner is to have the native versions exposed in the
      headers and as inline native_XXX, and if !CONFIG_PARAVIRT, then simply
      #define XXX native_XXX.
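      
      A hedged sketch of the pattern (the function shown is illustrative):
      
      	/* the native version lives in the header as an inline ... */
      	static inline unsigned long native_read_cr2(void)
      	{
      		unsigned long val;
      		asm volatile("mov %%cr2, %0" : "=r" (val));
      		return val;
      	}
      
      	/* ... and the generic name maps to it when paravirt is off */
      	#ifndef CONFIG_PARAVIRT
      	#define read_cr2() native_read_cr2()
      	#endif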
      
      There are several nice side effects:
      
      1) write_dt_entry() now takes the correct "struct Xgt_desc_struct *"
         not "void *".
      
      2) load_TLS() is reintroduced as a for loop rather than being manually
         unrolled, with a #error in case the bounds ever change.
      
      3) Macros become inlines, with type checking.
      
      4) Access to the native versions is trivial for KVM, lguest, Xen and
         others who might want it.
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Andi Kleen <ak@muc.de>
      Cc: Avi Kivity <avi@qumranet.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      90a0a06a
  10. 06 Dec, 2006 1 commit
    • [PATCH] paravirt: header and stubs for paravirtualisation · d3561b7f
      Rusty Russell authored
      
      
      Create a paravirt.h header for all the critical operations which need to be
      replaced with hypervisor calls, and include that instead of defining native
      operations, when CONFIG_PARAVIRT.
      
      This patch does the dumbest possible replacement of paravirtualized
      instructions: calls through a "paravirt_ops" structure.  Currently these are
      function implementations of native hardware: hypervisors will override the ops
      structure with their own variants.
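      
      A hedged sketch of that indirection (the fields shown are illustrative,
      not the full structure from the patch):
      
      	struct paravirt_ops {
      		void (*irq_disable)(void);
      		void (*irq_enable)(void);
      		unsigned long (*read_cr2)(void);
      		void (*write_cr3)(unsigned long);
      	};
      
      	/* filled with the native implementations by default; a hypervisor
      	 * overrides the entries with its own variants at boot */
      	extern struct paravirt_ops paravirt_ops;
      
      	#define raw_local_irq_disable()  paravirt_ops.irq_disable()
      	#define raw_local_irq_enable()   paravirt_ops.irq_enable()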
      
      All the pv-ops functions are declared "fastcall" so that a specific
      register-based ABI is used, to make inlining assembler easier.
      
      And:
      
      From: Andy Whitcroft <apw@shadowen.org>
      
      The paravirt ops introduce a 'weak' attribute onto memory_setup().
      Code ordering leads to the following warnings on x86:
      
          arch/i386/kernel/setup.c:651: warning: weak declaration of
                      `memory_setup' after first use results in unspecified behavior
      
      Move memory_setup() to avoid this.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Chris Wright <chrisw@sous-sol.org>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      d3561b7f
  11. 26 Sep, 2006 1 commit
  12. 18 Sep, 2006 1 commit
  13. 14 Jul, 2006 1 commit
    • [PATCH] remove set_wmb - arch removal · 52393ccc
      Steven Rostedt authored
      
      
      set_wmb should not be used in the kernel because it just confuses the
      code more and has no benefit.  Since it is not currently used in the
      kernel this patch removes it so that new code does not include it.
      
      All archs define set_wmb(var, value) to do { var = value; wmb(); }
      while(0) except ia64 and sparc which use a mb() instead.  But this is
      still moot since it is not used anyway.
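      
      For reference, the definition being removed, as described above:
      
      	#define set_wmb(var, value)  do { var = value; wmb(); } while (0)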
      
      Hasn't been tested on any archs but x86 and x86_64 (and only compile
      tested)
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      52393ccc
  14. 13 Jul, 2006 1 commit
  15. 03 Jul, 2006 1 commit
  16. 26 Jun, 2006 1 commit
  17. 26 Apr, 2006 1 commit
  18. 24 Mar, 2006 1 commit
  19. 23 Mar, 2006 1 commit
    • [PATCH] x86: SMP alternatives · 9a0b5817
      Gerd Hoffmann authored
      
      
      Implement SMP alternatives, i.e.  switching at runtime between different
      code versions for UP and SMP.  The code can patch both SMP->UP and UP->SMP.
      The UP->SMP case is useful for CPU hotplug.
      
      With CONFIG_CPU_HOTPLUG enabled the code switches to UP at boot time and
      when the number of CPUs goes down to 1, and switches to SMP when the number
      of CPUs goes up to 2.
      
      Without CONFIG_CPU_HOTPLUG or on non-SMP-capable systems the code is
      patched once at boot time (if needed) and the tables are released
      afterwards.
      
      The changes in detail:
      
        * The current alternatives bits are moved to a separate file,
          the SMP alternatives code is added there.
      
        * The patch adds some new elf sections to the kernel:
          .smp_altinstructions
      	like .altinstructions, also contains a list
      	of alt_instr structs (a sketch of this record follows the list).
          .smp_altinstr_replacement
      	like .altinstr_replacement, but also has some space to
      	save the original instruction before replacing it.
          .smp_locks
      	list of pointers to lock prefixes which can be nop'ed
      	out on UP.
          The first two are used to replace more complex instruction
          sequences such as spinlocks and semaphores.  It would be possible
          to deal with the lock prefixes with that as well, but by handling
          them as a special case the table sizes become much smaller.
      
       * The sections are page-aligned and padded up to page size, so they
         can be freed if they are not needed.
      
       * Split the code to release init pages into a separate function and
         use it to release the elf sections if they are unused.
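      
      A hedged sketch of the alt_instr record mentioned in the list above
      (the field layout is illustrative):
      
      	struct alt_instr {
      		unsigned char *instr;          /* original instruction */
      		unsigned char *replacement;    /* replacement sequence */
      		unsigned char  cpuid;          /* feature bit to test */
      		unsigned char  instrlen;       /* length of original */
      		unsigned char  replacementlen; /* length of replacement */
      		unsigned char  pad;
      	};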
      Signed-off-by: Gerd Hoffmann <kraxel@suse.de>
      Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      9a0b5817
  20. 05 Feb, 2006 1 commit
  21. 12 Jan, 2006 1 commit
  22. 06 Jan, 2006 3 commits
  23. 13 Nov, 2005 1 commit
  24. 30 Oct, 2005 1 commit
  25. 05 Sep, 2005 3 commits
    • [PATCH] x86: make IOPL explicit · a5201129
      Zachary Amsden authored
      
      
      The pushf/popf in switch_to are ONLY used to switch IOPL.  Making this
      explicit in C code is more clear.  This pushf/popf pair was added as a
      bugfix for leaking IOPL to unprivileged processes when using
      sysenter/sysexit based system calls (sysexit does not restore flags).
      
      When requesting an IOPL change in sys_iopl(), it is just as easy to change
      the current flags and the flags in the stack image (in case an IRET is
      required), but there is no reason to force an IRET if we came in from the
      SYSENTER path.
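      
      A hedged sketch of that idea (the types here are stand-ins, not the
      kernel's):
      
      	#define IOPL_SHIFT 12
      	#define IOPL_MASK  (3UL << IOPL_SHIFT)     /* EFLAGS bits 12-13 */
      
      	struct regs_stub   { unsigned long eflags; }; /* stand-in for pt_regs */
      	struct thread_stub { unsigned long iopl;   }; /* the new thread field */
      
      	static void set_iopl(struct regs_stub *regs, struct thread_stub *t,
      			     unsigned int level)
      	{
      		unsigned long bits = (unsigned long)level << IOPL_SHIFT;
      
      		t->iopl = bits;               /* picked up by switch_to() */
      		regs->eflags = (regs->eflags & ~IOPL_MASK) | bits; /* IRET path */
      	}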
      
      This change is the minimal solution for supporting a paravirtualized Linux
      kernel that allows user processes to run with I/O privilege.  Other
      solutions require radical rewrites of part of the low level fault / system
      call handling code, or do not fully support sysenter based system calls.
      
      Unfortunately, this added one field to the thread_struct.  But as a bonus,
      on P4, the fastest time measured for switch_to() went from 312 to 260
      cycles, a win of about 17% in the fast case through this performance
      critical path.
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a5201129
    • [PATCH] i386: inline assembler: cleanup and encapsulate descriptor and task register management · 4d37e7e3
      Zachary Amsden authored
      
      
      i386 inline assembler cleanup.
      
      This change encapsulates descriptor and task register management.  Also,
      it is possible to improve assembler generation in two cases: savesegment
      may store the value in a register instead of a memory location, which
      allows GCC to optimize stack variables into registers; and MOV MEM, SEG
      is always a 16-bit write to memory, which makes the casting in math-emu
      unnecessary.
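      
      A hedged sketch of the savesegment() form described above (illustrative):
      
      	/* a register output lets GCC keep the value in a register instead
      	 * of forcing a stack slot; if memory is chosen, MOV MEM, SEG is a
      	 * 16-bit store, so no casting is needed at the use sites */
      	#define savesegment(seg, value) \
      		asm("mov %%" #seg ",%0" : "=rm" (value))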
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      4d37e7e3
    • [PATCH] i386: inline asm cleanup · 4bb0d3ec
      Zachary Amsden authored
      
      
      i386 Inline asm cleanup.  Use cr/dr accessor functions.
      
      Also, a potential bugfix: some CR accessors really should be volatile.
      Reads from CR0 (numeric state may change in an exception handler), writes to
      CR4 (flipping CR4.TSD) and reads from CR2 (page fault) prevent instruction
      re-ordering.  I did not add memory clobber to CR3 / CR4 / CR0 updates, as it
      was not there to begin with, and in no case should kernel memory be clobbered,
      except when doing a TLB flush, which already has memory clobber.
      
      I noticed that page invalidation does not have a memory clobber.  I can't find
      a bug as a result, but there is definitely a potential for a bug here:
      
      #define __flush_tlb_single(addr) \
      	__asm__ __volatile__("invlpg %0": :"m" (*(char *) addr))
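      
      A hedged sketch of the variant with an explicit memory clobber, for
      comparison (not part of this patch):
      
      	#define __flush_tlb_single(addr) \
      		__asm__ __volatile__("invlpg %0" : : "m" (*(char *)(addr)) : "memory")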
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      4bb0d3ec
  26. 01 May, 2005 1 commit
    • [PATCH] i386/x86_64 segment register access update · fd51f666
      H. J. Lu authored
      
      
      The new i386/x86_64 assemblers no longer accept instructions for moving
      between a segment register and a 32bit memory location, i.e.,
      
              movl (%eax),%ds
              movl %ds,(%eax)
      
      To generate instructions for moving between a segment register and a
      16bit memory location without the 16bit operand size prefix, 0x66,
      
              mov (%eax),%ds
              mov %ds,(%eax)
      
      should be used. It will work with both new and old assemblers. The
      assembler starting from 2.16.90.0.1 will also support
      
              movw (%eax),%ds
              movw %ds,(%eax)
      
      without the 0x66 prefix.  I am enclosing patches for the 2.4 and 2.6
      kernels here.  The resulting kernel binaries should be unchanged, with
      both old and new assemblers, provided gcc never generates a memory access for
      
                     unsigned gsindex;
                     asm volatile("movl %%gs,%0" : "=g" (gsindex));
      
      If gcc does generate a memory access for the code above, the upper bits
      in gsindex are undefined, and the new assembler doesn't allow it.
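      
      A hedged sketch of a form that both old and new assemblers accept
      (illustrative):
      
      	static inline unsigned int read_gs_index(void)
      	{
      		unsigned int gsindex;
      		/* no size suffix plus a register output: the assembler emits
      		 * the register form, avoiding the rejected memory operand */
      		asm volatile("mov %%gs,%0" : "=r" (gsindex));
      		return gsindex;
      	}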
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fd51f666
  27. 16 Apr, 2005 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4