1. 15 Aug, 2009 1 commit
    • ARM: Fix broken highmem support · dde5828f
      Russell King authored
      Currently, highmem is selectable, and you can request an increased
      vmalloc area.  However, none of this has any effect on the memory
      layout since a patch in the highmem series was accidentally dropped.
      Moreover, even if you did want highmem, all memory would still be
      registered as lowmem, possibly resulting in overflow of the available
      virtual mapping space.
      
      The highmem boundary is determined by the highest allowed beginning
      of the vmalloc area, which depends on its configurable minimum size
      (see commit 60296c71 for details on
      this).
      
      We should create mappings and initialize bootmem only for low memory,
      while the zone allocator must still be told about highmem.
      
      Currently, memory nodes which are completely located in high memory
      are not supported.  This is not a huge limitation since systems
      relying on highmem support are unlikely to have discontiguous memory
      with large holes.
      
      [ A similar patch was meant to be merged before commit 5f0fbf9e
        and be available in Linux v2.6.30, however some git rebase screw-up
        of mine dropped the first commit of the series, and that goofage
        escaped testing somehow as well. -- Nico ]
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Reviewed-by: Nicolas Pitre <nico@marvell.com>
      dde5828f
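
      A minimal sketch of the lowmem/highmem split described in dde5828f above,
      assuming a single physical 'lowmem_limit' derived from the highest allowed
      start of the vmalloc area; the bank structure is illustrative, not the
      actual ARM meminfo code.

        /* Split one memory bank at the lowmem/highmem boundary: only the low
         * part gets kernel mappings and bootmem, while the high part is still
         * handed to the zone allocator as highmem. */
        struct bank { unsigned long start, size; int highmem; };

        static void split_bank(struct bank *bank, struct bank *high,
                               unsigned long lowmem_limit)
        {
            unsigned long end = bank->start + bank->size;

            high->size = 0;
            if (bank->start >= lowmem_limit) {
                bank->highmem = 1;                   /* bank is entirely highmem */
            } else if (end > lowmem_limit) {
                high->start   = lowmem_limit;        /* upper part: highmem only */
                high->size    = end - lowmem_limit;
                high->highmem = 1;
                bank->size    = lowmem_limit - bank->start; /* mapped lowmem part */
            }
        }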
  2. 27 Jul, 2009 1 commit
    • mm: Pass virtual address to [__]p{te,ud,md}_free_tlb() · 9e1b32ca
      Benjamin Herrenschmidt authored
      
      
      mm: Pass virtual address to [__]p{te,ud,md}_free_tlb()
      
      Upcoming patches to support the new 64-bit "BookE" powerpc architecture
      will need to have the virtual address corresponding to the PTE page when
      freeing it, due to the way the HW table walker works.
      
      Basically, the TLB can be loaded with "large" pages that cover the whole
      virtual space (well, sort-of, half of it actually) represented by a PTE
      page, and which contain an "indirect" bit indicating that this TLB entry
      RPN points to an array of PTEs from which the TLB can then create direct
      entries. Thus, in order to invalidate those when PTE pages are deleted,
      we need the virtual address to pass to tlbilx or tlbivax instructions.
      
      The old trick of sticking it somewhere in the PTE page's struct page sucks
      too much; the address is almost readily available in all call sites and
      almost everybody implements these as macros, so we may as well add the
      argument everywhere. I added it to the pmd and pud variants for consistency.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: David Howells <dhowells@redhat.com> [MN10300 & FRV]
      Acked-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [s390]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9e1b32ca
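
      Roughly, the interface change in 9e1b32ca threads an address through the
      free-side helpers; a sketch of the generic wrapper shape (details in
      asm-generic/tlb.h may differ):

        /* The virtual address covered by the PTE page is now passed down, so
         * an architecture such as 64-bit BookE can invalidate the indirect
         * TLB entry pointing at that PTE page. */
        #define pte_free_tlb(tlb, ptep, address)            \
                do {                                        \
                        __pte_free_tlb(tlb, ptep, address); \
                } while (0)

        /* The pmd and pud variants gain the same third argument for
         * consistency: pmd_free_tlb(tlb, pmdp, address) and
         * pud_free_tlb(tlb, pudp, address). */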
  3. 25 Jul, 2009 1 commit
  4. 10 Jul, 2009 1 commit
  5. 05 Jul, 2009 2 commits
  6. 25 Jun, 2009 1 commit
  7. 20 Jun, 2009 1 commit
  8. 17 Jun, 2009 1 commit
  9. 14 Jun, 2009 1 commit
  10. 12 Jun, 2009 1 commit
  11. 11 Jun, 2009 5 commits
  12. 08 Jun, 2009 1 commit
  13. 02 Jun, 2009 1 commit
  14. 30 May, 2009 4 commits
  15. 29 May, 2009 2 commits
    • [ARM] allow for alternative __copy_to_user/__clear_user implementations · a1f98849
      Nicolas Pitre authored
      
      
      This allows for optional alternative implementations of __copy_to_user
      and __clear_user, with a possible runtime fallback to the standard
      version when the alternative provides no gain over that standard
      version. This is done by making the standard __copy_to_user into a weak
      alias for the symbol __copy_to_user_std.  Same thing for __clear_user.
      
      Those two functions are particularly good candidates to have alternative
      implementations for, since they rely on the STRT instruction which has
      lower performance than STM instructions on some CPU cores such as
      the ARM1176 and Marvell Feroceon.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      a1f98849
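
      The weak-alias technique in a1f98849 generalises beyond the ARM uaccess
      code; a minimal user-space sketch (GCC attribute syntax, names
      hypothetical) showing how a '_std' symbol plus a weak alias lets an
      optimised variant override the default at link time:

        #include <stdio.h>
        #include <string.h>

        /* the "standard" implementation keeps its own symbol name */
        size_t my_copy_std(void *to, const void *from, size_t n)
        {
            memcpy(to, from, n);
            return 0;            /* bytes not copied, kernel-style return value */
        }

        /* weak alias: resolves to the _std version unless a strong my_copy()
         * (an optimised alternative) is linked into the image */
        size_t my_copy(void *to, const void *from, size_t n)
                __attribute__((weak, alias("my_copy_std")));

        int main(void)
        {
            char buf[4] = "";
            my_copy(buf, "hi", 3);   /* falls back to my_copy_std here */
            puts(buf);
            return 0;
        }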
    • flat: fix data sections alignment · c3dc5bec
      Oskar Schirmer authored
      
      
      The flat loader uses an architecture's flat_stack_align() to align the
      stack but assumes word-alignment is enough for the data sections.
      
      However, on the Xtensa S6000 we have registers up to 128 bits wide
      which can be used from userspace and therefore need userspace stack and
      data-section alignment of at least this size.
      
      This patch drops flat_stack_align() and uses the same alignment that
      is required for slab caches, ARCH_SLAB_MINALIGN, or wordsize if it's
      not defined by the architecture.
      
      It also fixes m32r which was obviously kaput, aligning an
      uninitialized stack entry instead of the stack pointer.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Oskar Schirmer <os@emlix.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Bryan Wu <cooloney@kernel.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Cc: Greg Ungerer <gerg@uclinux.org>
      Signed-off-by: Johannes Weiner <jw@emlix.com>
      Acked-by: Mike Frysinger <vapier.adi@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c3dc5bec
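
      A sketch of the alignment choice described in c3dc5bec, assuming the
      ARCH_SLAB_MINALIGN fallback named in the message; the actual binfmt_flat
      code differs in detail:

        /* Use the arch's minimum slab alignment for flat data sections and the
         * stack; fall back to word size when the arch does not define it. */
        #ifndef ARCH_SLAB_MINALIGN
        #define FLAT_DATA_ALIGN (sizeof(void *))
        #else
        #define FLAT_DATA_ALIGN (ARCH_SLAB_MINALIGN)
        #endif

        /* data/bss sizes are rounded up, the stack pointer is aligned down */
        data_len = (data_len + FLAT_DATA_ALIGN - 1) & ~(FLAT_DATA_ALIGN - 1);
        sp       = (unsigned long *)((unsigned long)sp & ~(FLAT_DATA_ALIGN - 1));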
  16. 28 May, 2009 3 commits
    • [ARM] Add cmpxchg support for ARMv6+ systems (v5) · ecd322c9
      Mathieu Desnoyers authored
      
      
      Add cmpxchg/cmpxchg64 support for ARMv6K and ARMv7 systems
      (original patch from Catalin Marinas <catalin.marinas@arm.com>)
      
      The cmpxchg and cmpxchg64 functions can be implemented using the
      LDREX*/STREX* instructions. Since operand lengths other than 32bit are
      required, the full implementations are only available if the ARMv6K
      extensions are present (for the LDREXB, LDREXH and LDREXD instructions).
      
      For ARMv6, only 32-bit cmpxchg is available.
      
      Mathieu:
      
      Make cmpxchg_local always available with the best implementation for all
      type sizes (1, 2, 4 bytes).
      Make cmpxchg64_local always available.
      
      Use the "Ir" constraint for the "old" operand, like atomic.h atomic_cmpxchg does.
      
      Changes since v3:
      - Added "memory" clobbers (thanks to Nicolas Pitre)
      - Removed __asmeq(); it is only needed for old compilers, which are very
        unlikely on ARMv6+.
      
      Note : ARMv7-M should eventually be ifdefed-out of cmpxchg64. But it's not
      supported by the Linux kernel currently.
      
      Put back arm < v6 cmpxchg support.
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      CC: Catalin Marinas <catalin.marinas@arm.com>
      CC: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      ecd322c9
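
      A sketch of the 32-bit case from ecd322c9, using an LDREX/STREX loop; the
      constraints follow the era's arch/arm/include/asm/system.h pattern, but
      treat this as illustrative rather than the exact kernel code:

        /* 32-bit compare-and-exchange with exclusive monitors (ARMv6+);
         * returns the value found in *ptr before the operation. */
        static inline unsigned long __cmpxchg4(volatile unsigned long *ptr,
                                               unsigned long old,
                                               unsigned long new)
        {
            unsigned long oldval, res;

            do {
                asm volatile(
                "       ldrex   %1, [%2]\n"       /* oldval = *ptr (exclusive)  */
                "       mov     %0, #0\n"
                "       teq     %1, %3\n"         /* does it match 'old'?       */
                "       strexeq %0, %4, [%2]\n"   /* if so, try to store 'new'  */
                    : "=&r" (res), "=&r" (oldval)
                    : "r" (ptr), "Ir" (old), "r" (new)
                    : "memory", "cc");
            } while (res);                        /* retry if the store failed  */

            return oldval;
        }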
    • [ARM] barriers: improve xchg, bitops and atomic SMP barriers · bac4e960
      Russell King authored
      
      
      Mathieu Desnoyers pointed out that the ARM barriers were lacking:
      
      - cmpxchg, xchg and atomic add return need memory barriers on
        architectures which can reorder the order in which memory reads/writes
        are seen between CPUs, which seems to include recent ARM architectures.
        Those barriers are currently missing on ARM.
      
      - test_and_xxx_bit were missing SMP barriers.
      
      So put these barriers in.  Provide separate atomic_add/atomic_sub
      operations which do not require barriers.
      Reported-Reviewed-and-Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      bac4e960
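
      A sketch of the barrier placement bac4e960 introduces, assuming the ARMv6+
      LDREX/STREX implementation; the barrier-free atomic_add() is the same loop
      without the smp_mb() pair and without a return value:

        static inline int atomic_add_return(int i, atomic_t *v)
        {
            unsigned long tmp;
            int result;

            smp_mb();                  /* order earlier accesses before the RMW */

            __asm__ __volatile__(
            "1:     ldrex   %0, [%2]\n"
            "       add     %0, %0, %3\n"
            "       strex   %1, %0, [%2]\n"
            "       teq     %1, #0\n"
            "       bne     1b"
                : "=&r" (result), "=&r" (tmp)
                : "r" (&v->counter), "Ir" (i)
                : "cc");

            smp_mb();                  /* order the RMW before later accesses   */

            return result;
        }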
    • [ARM] smp: use new cpumask functions · e03cdade
      Russell King authored
      
      
      Convert cpu_*_mask bit twiddling to the new set_cpu_*() API.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      e03cdade
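
      The conversion in e03cdade is mechanical; a couple of hedged before/after
      examples of the kind of change it covers:

        /* old open-coded bit twiddling on the global cpu maps */
        cpu_set(cpu, cpu_present_map);
        cpu_clear(cpu, cpu_online_map);

        /* new accessor API */
        set_cpu_present(cpu, true);
        set_cpu_online(cpu, false);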
  17. 18 May, 2009 2 commits
    • omap iommu: simple virtual address space management · 69d3a84a
      Hiroshi DOYU authored
      
      
      This patch provides device drivers that have an OMAP IOMMU with
      address-mapping APIs between the device virtual address (IOMMU), the
      physical address and the MPU virtual address.
      
      There are 4 possible patterns for iommu virtual address (iova/da) mapping.
      
            | iova/da   pa     va    mapping        iommu_                  page
            |                        (d)-(p)-(v)    function                type
          ---------------------------------------------------------------------
          1 | c         c      c     1 - 1 - 1      _kmap()   / _kunmap()    s
          2 | c         c,a    c     1 - 1 - 1      _kmalloc()/ _kfree()     s
          3 | c         d      c     1 - n - 1      _vmap()   / _vunmap()    s
          4 | c         d,a    c     1 - n - 1      _vmalloc()/ _vfree()     n*
      
          'iova':	device iommu virtual address
          'da':	alias of 'iova'
          'pa':	physical address
          'va':	mpu virtual address
      
          'c':	contiguous memory area
          'd':	discontiguous memory area
          'a':	anonymous memory allocation
          '()':	optional feature
      
          'n':	a normal page (4KB) size is used.
          's':	multiple iommu superpage (16MB, 1MB, 64KB, 4KB) sizes are used.
      
          '*':	not yet, but feasible.
      Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
      69d3a84a
    • [ARM] S3C64XX: DMA support · fa7a7883
      Ben Dooks authored
      
      
      Add support for the DMA blocks in the S3C64XX series of CPUs,
      which are based on the ARM PL080 PrimeCell system.
      
      Unfortunately, these DMA controllers diverge from the PL080
      design by adding another DMA controller register and
      configuration for OneNAND.
      Signed-off-by: Ben Dooks <ben@simtec.co.uk>
      Signed-off-by: Ben Dooks <ben-linux@fluff.org>
      fa7a7883
  18. 17 May, 2009 5 commits
  19. 07 May, 2009 2 commits
    • [ARM] 5507/1: support R_ARM_MOVW_ABS_NC and MOVT_ABS relocation types · ae51e609
      Paul Gortmaker authored
      
      
      From: Bruce Ashfield <bruce.ashfield@windriver.com>
      
      To fully support the armv7-a instruction set/optimizations, support
      for the R_ARM_MOVW_ABS_NC and R_ARM_MOVT_ABS relocation types is
      required.
      
      The MOVW and MOVT are both load-immediate instructions: MOVW loads 16
      bits into the bottom half of a register, and MOVT loads 16 bits into the
      top half of a register.
      
      The relocation information for these instructions has a full 32 bit
      value, plus an addend which is stored in the 16 immediate bits in the
      instruction itself.  The immediate bits in the instruction are not
      contiguous (the register # splits it into a 4-bit and a 12-bit value),
      so the addend has to be extracted accordingly and added to the value.
      The value is then split and put into the instruction; a MOVW uses the
      bottom 16 bits of the value, and a MOVT uses the top 16 bits.
      Signed-off-by: David Borman <david.borman@windriver.com>
      Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      ae51e609
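
      A sketch of the bit handling described in ae51e609: recover the split
      imm4:imm12 addend, sign-extend it, add the symbol value, pick the low or
      high half for MOVW/MOVT, and write it back into the non-contiguous
      immediate fields. Constants follow the ARM encoding, but this is
      illustrative rather than the exact module-loader code:

        #include <stdint.h>

        /* instruction layout: imm4 in bits [19:16], imm12 in bits [11:0] */
        static uint32_t apply_movw_movt(uint32_t insn, uint32_t sym_value,
                                        int is_movt)
        {
            int32_t offset;

            /* extract the 16-bit addend stored in the instruction ...         */
            offset = ((insn & 0x000f0000) >> 4) | (insn & 0x00000fff);
            offset = (offset ^ 0x8000) - 0x8000;     /* sign extend             */

            /* relocate against the symbol value */
            offset = (int32_t)((uint32_t)offset + sym_value);
            if (is_movt)
                offset >>= 16;                       /* MOVT takes the top half */

            /* ... and put the selected 16 bits back into imm4:imm12           */
            insn &= 0xfff0f000;
            insn |= ((offset & 0xf000) << 4) | (offset & 0x0fff);
            return insn;
        }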
    • [ARM] VIC: Add power management device · c07f87f2
      Ben Dooks authored
      
      
      Add power management support to the VIC by registering
      each VIC as a system device to get suspend/resume
      events going.
      
      Since the VIC registration is done early, we need to
      record the VICs in a static array which is used to add
      the system devices later once the initcalls are run. This
      means there is now a configuration value for the number
      of VICs in the system.
      Signed-off-by: Ben Dooks <ben-linux@fluff.org>
      c07f87f2
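
      A minimal sketch of the bookkeeping described in c07f87f2 (structure and
      names are assumptions, not the driver's actual layout): early init stashes
      each VIC in a fixed-size table, and a late initcall walks that table to
      register the system devices once the device model is up.

        #define VIC_MAX_NR 2                 /* assumed configurable maximum */

        struct vic_state {
            void __iomem *base;              /* register window, saved and
                                              * restored around suspend/resume */
        };

        static struct vic_state vic_table[VIC_MAX_NR];
        static int vic_count;

        /* called from the early IRQ init path, before sysdevs can exist */
        void vic_record(void __iomem *base)
        {
            if (vic_count < VIC_MAX_NR)
                vic_table[vic_count++].base = base;
        }

        /* runs after the core initcalls: turn each recorded VIC into a system
         * device so it receives suspend/resume callbacks */
        static int __init vic_pm_init(void)
        {
            int i;

            for (i = 0; i < vic_count; i++)
                register_vic_sysdev(&vic_table[i]);  /* hypothetical helper */
            return 0;
        }
        late_initcall(vic_pm_init);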
  20. 26 Apr, 2009 1 commit
  21. 20 Apr, 2009 1 commit
    • [ARM] 5456/1: add sys_preadv and sys_pwritev · eb8f3142
      Mikael Pettersson authored
      
      
      Kernel 2.6.30-rc1 added sys_preadv and sys_pwritev to most archs
      but not ARM, resulting in
      
      <stdin>:1421:2: warning: #warning syscall preadv not implemented
      <stdin>:1425:2: warning: #warning syscall pwritev not implemented
      
      This patch adds sys_preadv and sys_pwritev to ARM.
      
      These syscalls simply take five long-sized parameters, so they
      should have no calling-convention/ABI issues in the kernel.
      
      Tested on armv5tel eabi using a preadv/pwritev test program posted
      on linuxppc-dev earlier this month.
      
      It would be nice to get this into the kernel before 2.6.30 final,
      so that glibc's kernel version feature test for these syscalls
      doesn't have to special-case ARM.
      Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      eb8f3142
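
      For context, wiring new syscalls into ARM touches the unistd numbering and
      the call table; a hedged sketch (the slot numbers shown are what the
      2.6.30-era table is believed to use, verify against
      arch/arm/include/asm/unistd.h):

        /* arch/arm/include/asm/unistd.h: assign the next free numbers */
        #define __NR_preadv   (__NR_SYSCALL_BASE + 361)
        #define __NR_pwritev  (__NR_SYSCALL_BASE + 362)

        /*
         * arch/arm/kernel/calls.S then gains matching entries at those slots:
         *     CALL(sys_preadv)
         *     CALL(sys_pwritev)
         * The generic sys_preadv/sys_pwritev implementations are reused as-is.
         */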
  22. 15 Apr, 2009 1 commit
    • [ARM] 5450/1: Flush only the needed range when unmapping a VMA · 7fccfc00
      Aaro Koskinen authored
      
      
      When unmapping N pages (e.g. shared memory), the number of TLB flushes
      done can be (N*PAGE_SIZE/ZAP_BLOCK_SIZE)*N although it should be N at
      maximum. With a PREEMPT kernel ZAP_BLOCK_SIZE is 8 pages, so there is a
      noticeable performance penalty when unmapping a large VMA and the system
      is spending its time in flush_tlb_range().
      
      The problem is that tlb_end_vma() is always flushing the full VMA
      range. The subrange that needs to be flushed can be calculated by
      tlb_remove_tlb_entry(). This approach was suggested by Hugh Dickins,
      and is also used by other arches.
      
      The speed increase is roughly 3x for 8M mappings and for larger mappings
      even more.
      Signed-off-by: Aaro Koskinen <Aaro.Koskinen@nokia.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      7fccfc00
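
      A sketch of the range tracking described in 7fccfc00, following the shape
      of the ARM mmu_gather of this era; initialisation and the fast-mode path
      are omitted for brevity:

        /* Each removed PTE records its address; tlb_end_vma() then flushes only
         * the accumulated [range_start, range_end) instead of the whole VMA. */
        struct mmu_gather_sketch {
            unsigned long range_start;
            unsigned long range_end;
        };

        static inline void tlb_add_flush(struct mmu_gather_sketch *tlb,
                                         unsigned long addr)
        {
            if (addr < tlb->range_start)
                tlb->range_start = addr;
            if (addr + PAGE_SIZE > tlb->range_end)
                tlb->range_end = addr + PAGE_SIZE;
        }

        #define tlb_remove_tlb_entry(tlb, ptep, addr)  tlb_add_flush(tlb, addr)

        static inline void tlb_end_vma(struct mmu_gather_sketch *tlb,
                                       struct vm_area_struct *vma)
        {
            if (tlb->range_end > tlb->range_start)
                flush_tlb_range(vma, tlb->range_start, tlb->range_end);
        }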
  23. 08 Apr, 2009 1 commit