1. 24 Mar, 2012 1 commit
  2. 06 Dec, 2011 1 commit
  3. 09 Mar, 2011 1 commit
  4. 23 Feb, 2011 1 commit
    • ARM: 6668/1: ptrace: remove single-step emulation code · 425fc47a
      Will Deacon authored
      
      
      PTRACE_SINGLESTEP is a ptrace request designed to offer single-stepping
      support to userspace when the underlying architecture has hardware
      support for this operation.
      
      On ARM, we set arch_has_single_step() to 1 and attempt to emulate hardware
      single-stepping by disassembling the current instruction to determine the
      next pc and placing a software breakpoint on that location.
      
      Unfortunately this has the following problems:
      
      1.) Only a subset of ARMv7 instructions are supported
      2.) Thumb-2 is unsupported
      3.) The code is not SMP safe
      
      We could try to fix this code, but it turns out that because of the above
      issues it is rarely used in practice.  GDB, for example, uses PTRACE_POKETEXT
      and PTRACE_PEEKTEXT to manage breakpoints itself and does not require any
      kernel assistance.
      
      This patch removes the single-step emulation code from ptrace meaning that
      the PTRACE_SINGLESTEP request will return -EIO on ARM. Portable code must
      check the return value from a ptrace call and handle the failure gracefully.
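      
      A minimal userspace sketch of that check, assuming the debugger supplies
      its own fallback routine (the one named below is hypothetical):
      
      #include <errno.h>
      #include <stdio.h>
      #include <sys/ptrace.h>
      #include <sys/types.h>
      
      /* Hypothetical fallback, e.g. plant a breakpoint via PTRACE_POKETEXT. */
      extern int single_step_via_breakpoint(pid_t pid);
      
      int single_step(pid_t pid)
      {
      	if (ptrace(PTRACE_SINGLESTEP, pid, NULL, NULL) == 0)
      		return 0;			/* kernel-assisted step worked */
      	if (errno == EIO)			/* e.g. ARM after this patch */
      		return single_step_via_breakpoint(pid);
      	perror("ptrace(PTRACE_SINGLESTEP)");
      	return -1;
      }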
      Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      425fc47a
  5. 08 Sep, 2010 1 commit
  6. 01 Jul, 2010 1 commit
    • ARM: 6194/1: change definition of cpu_relax() for ARM11MPCore · 534be1d5
      Will Deacon authored
      
      
      Linux expects that if a CPU modifies a memory location, then that
      modification will eventually become visible to other CPUs in the system.
      
      On an ARM11MPCore processor, loads are prioritised over stores so it is
      possible for a store operation to be postponed if a polling loop immediately
      follows it. If the variable being polled indirectly depends on the outstanding
      store [for example, another CPU may be polling the variable that is pending
      modification] then there is the potential for deadlock if interrupts are
disabled. This deadlock occurs in the KGDB testsuite when executing on an
      SMP ARM11MPCore configuration.
      
      This patch changes the definition of cpu_relax() to smp_mb() for ARMv6 cores,
      forcing a flushing of the write buffer on SMP systems before the next load
takes place. If the kernel is not compiled for SMP support, this will expand
      to a barrier() as before.
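      
      A hedged sketch of the resulting definition (simplified; smp_mb() itself
      reduces to barrier() on non-SMP builds, which is what the sentence above
      relies on):
      
      #if __LINUX_ARM_ARCH__ == 6
      #define cpu_relax()	smp_mb()	/* drain the write buffer before re-polling */
      #else
      #define cpu_relax()	barrier()	/* compiler barrier only, as before */
      #endif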
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      534be1d5
  7. 30 May, 2009 1 commit
    • Add core support for ARMv6/v7 big-endian · 26584853
      Catalin Marinas authored
      
      
      Starting with ARMv6, the CPUs support the BE-8 variant of big-endian
      (byte-invariant). This patch adds the core support:
      
      - setting of the BE-8 mode via the CPSR.E register for both kernel and
        user threads
      - big-endian page table walking
      - REV used to rotate instructions read from memory during fault
        processing as they are still in little-endian format (sketched after
        this list)
      - Kconfig and Makefile support for BE-8. The --be8 option must be passed
        to the final linking stage to convert the instructions to
        little-endian
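      
      An illustrative sketch of the instruction fixup in the third bullet; the
      helper and the config symbol here are written for illustration rather
      than copied from the patch:
      
      #include <stdint.h>
      
      /* In BE-8 mode data accesses are big-endian but instructions remain
       * little-endian, so an opcode fetched through the data path has to be
       * byte-reversed before the fault handler can decode it. */
      static inline uint32_t be8_read_insn(const uint32_t *pc)
      {
      #ifdef CONFIG_CPU_ENDIAN_BE8
      	return __builtin_bswap32(*pc);	/* what a single REV instruction does */
      #else
      	return *pc;
      #endif
      }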
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      26584853
  8. 06 Dec, 2008 1 commit
    • [ARM] 5340/1: fix stack placement after noexecstack changes · 794baba6
      Lennert Buytenhek authored
      Commit 8ec53663 ("[ARM] Improve non-executable support") added support
      for detecting non-executable
      stack binaries.  One of the things it does is to make READ_IMPLIES_EXEC
      be set in ->personality if we are running on a CPU that doesn't support
      the XN ("Execute Never") page table bit or if we are running a binary
      that needs an executable stack.
      
      This exposed a latent bug in ARM's asm/processor.h due to which we'll
      end up placing the stack at a very low address, where it will bump into
the heap on any application that uses a significant amount of stack or
      heap or both, causing many interesting crashes.
      
      Fix this by testing the ADDR_LIMIT_32BIT bit in ->personality instead
      of testing for equality against PER_LINUX_32BIT.
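      
      A hedged before/after sketch of that test in asm/processor.h (simplified;
      the surrounding macro may differ in detail):
      
      /* before: an equality test, broken once READ_IMPLIES_EXEC is OR-ed in */
      #define STACK_TOP	((current->personality == PER_LINUX_32BIT) ? \
      			 TASK_SIZE : TASK_SIZE_26)
      
      /* after: test only the address-limit bit of ->personality */
      #define STACK_TOP	((current->personality & ADDR_LIMIT_32BIT) ? \
      			 TASK_SIZE : TASK_SIZE_26)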
      Reviewed-by: Nicolas Pitre <nico@marvell.com>
      Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      794baba6
  9. 27 Nov, 2008 1 commit
    • [ARM] remove memzero() · 59f0cb0f
      Russell King authored
      
      
      As suggested by Andrew Morton, remove memzero() - it's not supported
      on other architectures so use of it is a potential build breaking bug.
      Since the compiler optimizes memset(x,0,n) to __memzero() perfectly
      well, we don't miss out on the underlying benefits of memzero().
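      
      The portable pattern this relies on, in a trivial example (the structure
      is made up for illustration):
      
      #include <string.h>
      
      struct foo {				/* hypothetical */
      	int count;
      	char buf[64];
      };
      
      void clear_foo(struct foo *f)
      {
      	/* portable everywhere; on ARM a constant-size memset to zero is
      	 * turned into a call to the optimized __memzero() anyway */
      	memset(f, 0, sizeof(*f));
      }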
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      59f0cb0f
  10. 16 Aug, 2008 1 commit
    • [ARM] 5196/1: fix inline asm constraints for preload · 16f719de
      Nicolas Pitre authored
      
      
      With gcc 4.3 and later, a pointer that has already been dereferenced is
      assumed not to be null since it should have caused a segmentation fault
      otherwise, hence any subsequent test against NULL is optimized away.
      
      Current inline asm constraint used in the implementation of prefetch()
      makes gcc believe that the pointer is dereferenced even though the PLD
      instruction does not load any data and does not cause a segmentation
      fault on null pointers, which causes all sorts of interesting results
when reaching the end of a linked list, for example.
      
      Let's use a better constraint to properly represent the actual usage of
      the pointer value.
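      
      A hedged before/after sketch of that constraint change (simplified from
      the actual patch):
      
      /* before: a memory operand, so gcc assumes *ptr really is dereferenced */
      static inline void prefetch(const void *ptr)
      {
      	__asm__ __volatile__("pld\t%0" :: "o" (*(const char *)ptr));
      }
      
      /* after: only the address is passed; PLD never faults, even on NULL */
      static inline void prefetch(const void *ptr)
      {
      	__asm__ __volatile__("pld\t%a0" :: "p" (ptr));
      }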
      
      Problem reported by Chris Steel.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      16f719de
  11. 02 Aug, 2008 1 commit
  12. 08 Feb, 2008 1 commit
  13. 13 Dec, 2006 1 commit
    • [ARM] 4016/1: prefetch macro is wrong wrt gcc's "delete-null-pointer-checks" optimization · 02828845
      Nicolas Pitre authored
      
      
      The gcc manual says:
      
      |`-fdelete-null-pointer-checks'
      |     Use global dataflow analysis to identify and eliminate useless
      |     checks for null pointers.  The compiler assumes that dereferencing
      |     a null pointer would have halted the program.  If a pointer is
      |     checked after it has already been dereferenced, it cannot be null.
      |     Enabled at levels `-O2', `-O3', `-Os'.
      
      Now the problem can be seen with this test case:
      
      #include <linux/prefetch.h>
      extern void bar(char *x);
      void foo(char *x)
      {
      	prefetch(x);
      	if (x)
      		bar(x);
      }
      
      Because the constraint to the inline asm used in the prefetch() macro is
      a memory operand, gcc assumes that the asm code does dereference the
      pointer and the delete-null-pointer-checks optimization kicks in.
      Inspection of generated assembly for the above example shows that bar()
      is indeed called unconditionally without any test on the value of x.
      
      Of course in the prefetch case there is no real dereference and it
      cannot be assumed that a null pointer would have been caught at that
      point. This causes kernel oopses with constructs like
      hlist_for_each_entry() where the list's 'next' content is prefetched
      before the pointer is tested against NULL, and only when gcc feels like
      applying this optimization which doesn't happen all the time with more
      complex code.
      
It appears that the way to prevent the delete-null-pointer-checks
optimization from occurring in this case is to make prefetch() into a static
      inline function instead of a macro. At least this is what is done on
      x86_64 where a similar inline asm memory operand is used (I presume they
      would have seen the same problem if it didn't work) and resulting code
      for the above example confirms that.
      
      An alternative would consist of replacing the memory operand by a
      register operand containing the pointer, and use the addressing mode
      explicitly in the asm template. But that would be less optimal than an
      offsettable memory reference.
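      
      A hedged sketch of the macro-to-inline conversion, keeping the memory
      operand as on x86_64 (simplified; the constraint itself was reworked
      again in 16f719de, listed above):
      
      /* before: a macro, so the "o" operand applies straight to the caller's
       * pointer and its NULL check gets deleted */
      #define prefetch(x)	__asm__ __volatile__("pld\t%0" :: "o" (*(char *)(x)))
      
      /* after: a static inline function, as on x86_64; gcc no longer deletes
       * the caller's NULL test for the example above */
      static inline void prefetch(const void *ptr)
      {
      	__asm__ __volatile__("pld\t%0" :: "o" (*(const char *)ptr));
      }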
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      02828845
  14. 30 Nov, 2006 1 commit
  15. 13 Jan, 2006 1 commit
  16. 12 Jan, 2006 2 commits
  17. 05 May, 2005 1 commit
    • [PATCH] ARM: Fix kernel stack offset calculations · 4f7a1812
      Russell King authored
      
      
      Various places in the ARM kernel implicitly assumed that kernel
      stacks are always 8K due to hard coded constants.  Replace these
      constants with definitions.
      
      Correct the allowable range of kernel stack pointer values within
      the allocation.  Arrange for the entire kernel stack to be zeroed,
      not just the upper 4K if CONFIG_DEBUG_STACK_USAGE is set.
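      
      A hedged sketch of the kind of definitions that replace the hard-coded
      constants (values illustrative, per the 8K stacks mentioned above):
      
      #define THREAD_SIZE_ORDER	1
      #define THREAD_SIZE		8192			/* one 8K kernel stack */
      #define THREAD_START_SP	(THREAD_SIZE - 8)	/* initial SP offset within the allocation */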
      Signed-off-by: Russell King <rmk@arm.linux.org.uk>
      4f7a1812
  18. 16 Apr, 2005 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4