1. 06 Aug, 2014 1 commit
    • mm/vmalloc.c: clean up map_vm_area third argument · f6f8ed47
      WANG Chao authored
      Currently map_vm_area() takes (struct page *** pages) as its third
      argument, and after mapping it advances (*pages) past the pages it has
      mapped.  This kind of increment is useless to its callers these days:
      they don't care about the increment and actually try to avoid it by
      passing a copy of the pointer to map_vm_area().
      The caller can always guarantee that all the pages can be mapped into
      the vm_area specified in the first argument; the caller only cares
      about whether map_vm_area() fails or not.
      This patch cleans up the pointer movement in map_vm_area() and updates
      its callers accordingly.
      Signed-off-by: WANG Chao <chaowang@redhat.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 04 Jun, 2014 2 commits
  3. 20 Mar, 2014 1 commit
    • zsmalloc: Fix CPU hotplug callback registration · f0e71fcd
      Srivatsa S. Bhat authored
      Subsystems that want to register CPU hotplug callbacks, as well as
      perform initialization for the CPUs that are already online, often do
      it by looping over the online CPUs and calling register_cpu_notifier()
      under get_online_cpus()/put_online_cpus().
      This is wrong, since it is prone to ABBA deadlocks involving the
      cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
      with CPU hotplug operations).
      Instead, the correct and race-free way of performing the callback
      registration is to do the per-CPU initialization and the registration
      between cpu_notifier_register_begin() and cpu_notifier_register_done(),
      using the double-underscored version of the registration API.
      Fix the zsmalloc code by using this latter form of callback registration.
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
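The registration snippet elided above presumably follows the pattern this fix introduced; a sketch using that era's CPU-notifier API (kernel code, not runnable in userspace; `foobar_cpu_notifier` and `init_cpu` are hypothetical subsystem names):

```c
/* Race-free form: cpu_notifier_register_begin/done hold the hotplug
 * registration lock, so the loop over online CPUs and the notifier
 * registration happen atomically with respect to CPU hotplug. */
cpu_notifier_register_begin();

for_each_online_cpu(cpu)
        init_cpu(cpu);                  /* subsystem's per-CPU setup */

/* Note the use of the double underscored version of the API */
__register_cpu_notifier(&foobar_cpu_notifier);

cpu_notifier_register_done();
```

The double-underscored variant skips taking the lock internally, since the begin/done pair already holds it.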
  4. 30 Jan, 2014 2 commits
    • zsmalloc: add copyright · 31fc00bb
      Minchan Kim authored
      Add my copyright to the zsmalloc source code which I maintain.
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • zsmalloc: move it under mm · bcf1647d
      Minchan Kim authored
      This patch moves zsmalloc under the mm directory.  Before that, a
      description of why we have needed a custom allocator:
      Zsmalloc is a new slab-based memory allocator for storing compressed
      pages.  It is designed for low fragmentation and a high allocation
      success rate for large objects, up to PAGE_SIZE in size.
      zsmalloc differs from the kernel slab allocator in two primary ways to
      achieve these design goals.
      zsmalloc never requires high-order page allocations to back slabs, or
      "size classes" in zsmalloc terms.  Instead it allows multiple
      single-order pages to be stitched together into a "zspage" which backs
      the slab.  This allows for a higher allocation success rate under
      memory pressure.
      Also, zsmalloc allows objects to span page boundaries within the zspage.
      This allows for lower fragmentation than could be had with the kernel
      slab allocator for objects between PAGE_SIZE/2 and PAGE_SIZE.  With the
      kernel slab allocator, if a page compresses to 60% of its original size,
      the memory savings gained through compression are lost in fragmentation
      because another object of the same size can't be stored in the leftover
      space.
      This ability to span pages results in zsmalloc allocations not being
      directly addressable by the user.  The user is given a
      non-dereferenceable handle in response to an allocation request.  That
      handle must be mapped, using zs_map_object(), which returns a pointer to
      the mapped region that can be used.  The mapping is necessary since the
      object data may reside in two different noncontiguous pages.
      zsmalloc fulfills the allocation needs of zram perfectly.
      [sjenning@linux.vnet.ibm.com: borrow Seth's quote]
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Acked-by: Nitin Gupta <ngupta@vflare.org>
      Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 17 Dec, 2013 1 commit
  6. 10 Dec, 2013 2 commits
  7. 25 Nov, 2013 1 commit
    • staging: zsmalloc: Ensure handle is never 0 on success · 67296874
      Olav Haugan authored
      zsmalloc encodes a handle using the pfn and an object
      index. On hardware platforms with physical memory starting
      at 0x0 the pfn can be 0. This causes the encoded handle to be
      0 and is incorrectly interpreted as an allocation failure.
      This issue affects all current and future SoCs with physical
      memory starting at 0x0.  All MSM8974 SoCs, which include
      Google Nexus 5 devices, are affected.
      To prevent this false error we ensure that the encoded handle
      will not be 0 when allocation succeeds.
      Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
      Cc: stable <stable@vger.kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  8. 23 Jul, 2013 1 commit
  9. 21 May, 2013 1 commit
  10. 20 May, 2013 1 commit
  11. 23 Apr, 2013 1 commit
    • staging/zsmalloc: don't use pgtable-mapping from modules · 796ce5a7
      Arnd Bergmann authored
      Building zsmalloc as a module does not work on ARM because it uses
      an interface that is not exported:
      ERROR: "flush_tlb_kernel_range" [drivers/staging/zsmalloc/zsmalloc.ko] undefined!
      Since this is only used as a performance optimization, and only on ARM,
      we can avoid the problem simply by not using that optimization when
      building zsmalloc as a loadable module.
      flush_tlb_kernel_range is often an inline function, but of the
      architectures that use an extern function, only powerpc exports it.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  12. 28 Mar, 2013 1 commit
  13. 23 Feb, 2013 1 commit
  14. 30 Jan, 2013 1 commit
  15. 29 Jan, 2013 2 commits
    • staging: zsmalloc: Fix TLB coherency and build problem · 99155188
      Minchan Kim authored
      Recently, Matt Sealey reported a zsmalloc build failure caused by the
      use of local_flush_tlb_kernel_range, an architecture-dependent function
      that ARM does not implement for !CONFIG_SMP, so the build ends with the
      following error:
        MODPOST 216 modules
        LZMA    arch/arm/boot/compressed/piggy.lzma
        AS      arch/arm/boot/compressed/lib1funcs.o
      ERROR: "v7wbi_flush_kern_tlb_range"
      [drivers/staging/zsmalloc/zsmalloc.ko] undefined!
      make[1]: *** [__modpost] Error 1
      make: *** [modules] Error 2
      make: *** Waiting for unfinished jobs....
      The reason we used that function is that the copy method introduced by
      [1] was really slow on ARM at the time.
      A more severe problem is that ARM CPUs can prefetch speculatively, so
      other CPUs' TLBs can hold stale entries if we only flush the local
      CPU's TLB.  Russell King pointed this out.  Thanks!
      We don't have many choices except using flush_tlb_kernel_range.
      My experiment on a 4-core ARMv7 processor showed no difference in
      zsmapbench [2] between local_flush_tlb_kernel_range and
      flush_tlb_kernel_range, and the page-table-based method is still much
      better than the copy-based one.
      * bigger is better.
      1. local_flush_tlb_kernel_range: 3918795 mappings
      2. flush_tlb_kernel_range : 3989538 mappings
      3. copy-based: 635158 mappings
      This patch replaces local_flush_tlb_kernel_range with
      flush_tlb_kernel_range, which is available on all architectures
      because it is already used by the generic vmalloc allocator, so the
      build problem goes away and the performance loss should be negligible.
      [1] f553646a, zsmalloc: add page table mapping method
      [2] https://github.com/spartacus06/zsmapbench
      Cc: stable@vger.kernel.org
      Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
      Reported-by: Matt Sealey <matt@genesi-usa.com>
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • staging: zsmalloc: make CLASS_DELTA relative to PAGE_SIZE · d662b8eb
      Seth Jennings authored
      Right now ZS_SIZE_CLASS_DELTA is hardcoded to be 16.  This
      creates 254 classes for systems with 4k pages. However, on
      PPC64 with 64k pages, it creates 4095 classes, which is far
      too many.
      This patch makes ZS_SIZE_CLASS_DELTA relative to PAGE_SIZE
      so that regardless of the page size, there will be the same
      number of classes.
      Acked-by: Nitin Gupta <ngupta@vflare.org>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
      Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  16. 16 Jan, 2013 1 commit
  17. 13 Aug, 2012 4 commits
  18. 09 Jul, 2012 4 commits
  19. 20 Jun, 2012 1 commit
  20. 13 Jun, 2012 1 commit
  21. 11 Jun, 2012 2 commits
  22. 09 May, 2012 2 commits
  23. 25 Apr, 2012 1 commit
  24. 10 Apr, 2012 1 commit
  25. 07 Mar, 2012 2 commits
  26. 13 Feb, 2012 1 commit
  27. 08 Feb, 2012 1 commit