1. 02 Jul, 2009 1 commit
  2. 16 Jun, 2009 1 commit
    • [ARM] S3C64XX: Initial support for DVFS · b3748ddd
      Mark Brown authored
      This patch provides initial support for CPU frequency scaling on the
      Samsung S3C ARM processors. Currently only S3C6410 processors are
      supported, though addition of another data table with supported clock
      rates should be sufficient to enable support for further CPUs.
      Use the regulator framework to provide optional support for DVFS in
      the S3C cpufreq driver. When a software controllable regulator is
      configured the driver will use it to lower the supply voltage when
      running at a lower frequency, giving improved power savings.
      When regulator support is disabled or no regulator can be obtained
      for VDDARM the driver will fall back to scaling only the frequency.
      Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Signed-off-by: Ben Dooks <ben-linux@fluff.org>
  3. 11 Jun, 2009 2 commits
  4. 09 Jun, 2009 1 commit
  5. 08 Jun, 2009 1 commit
  6. 31 May, 2009 1 commit
  7. 30 May, 2009 3 commits
  8. 29 May, 2009 1 commit
    • [ARM] alternative copy_to_user/clear_user implementation · 39ec58f3
      Lennert Buytenhek authored
      This implements {copy_to,clear}_user() by faulting in the userland
      pages and then using the regular kernel mem{cpy,set}() to copy the
      data (while holding the page table lock).  This is a win if the regular
      mem{cpy,set}() implementations are faster than the user copy functions,
      which is the case e.g. on Feroceon, where 8-word STMs (which memcpy()
      uses under the right conditions) give significantly higher memory write
      throughput than a sequence of individual 32-bit stores.
      Here are numbers for page sized buffers on some Feroceon cores:
       - copy_to_user on Orion5x goes from 51 MB/s to 83 MB/s
       - clear_user on Orion5x goes from 89 MB/s to 314 MB/s
       - copy_to_user on Kirkwood goes from 240 MB/s to 356 MB/s
       - clear_user on Kirkwood goes from 367 MB/s to 1108 MB/s
       - copy_to_user on Disco-Duo goes from 248 MB/s to 398 MB/s
       - clear_user on Disco-Duo goes from 328 MB/s to 1741 MB/s
      Because the setup cost is non-negligible, this is worthwhile only if
      the amount of data to copy is large enough.  The operation falls back
      to the standard implementation when the amount of data is below a certain
      threshold.  This threshold was determined empirically; however, some targets
      could benefit from a lower, runtime-determined value for optimal results.
      In the copy_from_user() case, this technique does not provide any
      worthwhile performance gain because any kind of read access
      allocates the cache, and subsequent 32-bit loads are just as fast as the
      equivalent 8-word LDM.
      Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      Tested-by: Martin Michlmayr <tbm@cyrius.com>
  9. 28 May, 2009 1 commit
    • davinci: add SRAM allocator · 20e9969b
      David Brownell authored
      Provide a generic SRAM allocator using genalloc, and vaguely
      modeled after what AVR32 uses.  This builds on top of the
      static CPU mapping set up in the previous patch, and returns
      DMA mappings as requested (if possible).
      Compared to its OMAP cousin, there's no current support for
      (currently non-existent) DaVinci power management code running
      in SRAM; and this has ways to deallocate, instead of being allocate-only.
      The initial user of this should probably be the audio code,
      because EDMA from DDR is subject to various dropouts on at
      least DM355 and DM6446 chips.
      Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
      Signed-off-by: Kevin Hilman <khilman@deeprootsystems.com>
  10. 18 May, 2009 1 commit
    • [ARM] Double check memmap is actually valid with a memmap has unexpected holes V2 · eb33575c
      Mel Gorman authored
      pfn_valid() is meant to be able to tell if a given PFN has valid memmap
      associated with it or not. In FLATMEM, it is expected that holes always
      have valid memmap as long as there is valid PFNs either side of the hole.
      In SPARSEMEM, it is assumed that a valid section has a memmap for the
      entire section.
      However, ARM and maybe other embedded architectures in the future free
      memmap backing holes to save memory on the assumption the memmap is never
      used. The page_zone linkages are then broken even though pfn_valid()
      returns true. A walker of the full memmap must then do this additional
      check to ensure the memmap they are looking at is sane by making sure the
      zone and PFN linkages are still valid. This is expensive, but walkers of
      the full memmap are extremely rare.
      This was caught before for FLATMEM and hacked around but it hits again for
      SPARSEMEM because the page_zone linkages can look ok where the PFN linkages
      are totally screwed. This looks like a hatchet job but the reality is that
      any clean solution would end up consuming all the memory saved by punching
      these unexpected holes in the memmap. For example, we tried marking the
      memmap within the section invalid but the section size exceeds the size of
      the hole in most cases so pfn_valid() starts returning false where valid
      memmap exists. Shrinking the size of the section would increase memory
      consumption offsetting the gains.
      This patch identifies when an architecture is punching unexpected holes
      in the memmap that the memory model cannot automatically detect and sets
      ARCH_HAS_HOLES_MEMORYMODEL. At the moment, this is restricted to EP93xx
      which is the model sub-architecture this has been reported on but may expand
      later. When set, walkers of the full memmap must call memmap_valid_within()
      for each PFN, passing in what they expect the page and zone to be for
      that PFN. If it finds the linkages to be broken, it assumes the memmap is
      invalid for that PFN.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  11. 17 May, 2009 3 commits
  12. 07 May, 2009 1 commit
  13. 05 May, 2009 1 commit
  14. 30 Apr, 2009 4 commits
  15. 28 Apr, 2009 1 commit
  16. 27 Apr, 2009 2 commits
  17. 26 Apr, 2009 2 commits
  18. 23 Apr, 2009 2 commits
  19. 26 Mar, 2009 2 commits
  20. 22 Mar, 2009 5 commits
  21. 15 Mar, 2009 1 commit
    • [ARM] add CONFIG_HIGHMEM option · 053a96ca
      Nicolas Pitre authored
      Here it is... HIGHMEM for the ARM architecture.  :-)
      If you don't have enough RAM for highmem pages to be allocated and still
      want to test this, then the cmdline option "vmalloc=" can be used with
      a value large enough to force the highmem threshold down.
      Successfully tested on a Marvell DB-78x00-BP Development Board with
      2 GB of RAM.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
  22. 21 Feb, 2009 1 commit
  23. 12 Feb, 2009 1 commit
  24. 06 Jan, 2009 1 commit