1. 22 Mar, 2011 2 commits
    • vmalloc: remove confusing comment on vwrite() · a42931bf
      Namhyung Kim authored
      KM_USER1 is never used on the vwrite() path, so the caller doesn't need
      to guarantee it is unused.  KM_USER0 is the only slot the caller must
      guarantee, and that is already commented.
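      As a reminder of the API involved, here is a minimal sketch of the old
      two-argument kmap_atomic() convention this comment was about;
      copy_into_page() is an illustrative helper, not kernel code:

          #include <linux/highmem.h>
          #include <linux/string.h>

          /* Illustrative only: the pre-2.6.37 kmap_atomic() took an explicit
           * slot.  The caller of vwrite() had to guarantee KM_USER0 was free;
           * KM_USER1 was never used on this path, hence the stale comment. */
          static void copy_into_page(struct page *page, const char *buf, size_t len)
          {
                  void *addr = kmap_atomic(page, KM_USER0);  /* KM_USER0 only */

                  memcpy(addr, buf, len);
                  kunmap_atomic(addr, KM_USER0);
          }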
      Signed-off-by: Namhyung Kim <namhyung@gmail.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: vmap area cache · 89699605
      Nick Piggin authored
      Provide a free area cache for the vmalloc virtual address allocator, based
      on the algorithm used by the user virtual memory allocator.
      This reduces the number of rbtree operations and linear traversals over
      the vmap extents in order to find a free area, by starting off at the last
      point that a free area was found.
      The free area cache is reset if areas are freed behind it, or if we are
      searching for a smaller area or alignment than last time.  So allocation
      patterns are not changed (verified by corner-case and random test cases in
      userspace testing).
      This solves a regression caused by the lazy vunmap TLB purging introduced
      in db64fe02 (mm: rewrite vmap layer).  That patch leaves extents in the
      vmap allocator after they are vunmapped, until a significant number
      accumulate and can be flushed in a single batch.  So in a workload that
      vmallocs and vfrees frequently, a chain of extents builds up from the
      VMALLOC_START address and has to be iterated over on every allocation
      (giving O(n) behaviour).
      After this patch, the search will start from where it left off, giving
      closer to an amortized O(1).
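      A minimal userspace sketch of the caching idea (structure and names are
      illustrative; the kernel's version lives in alloc_vmap_area()):

          #include <stddef.h>

          struct area { unsigned long start, end; struct area *next; };

          static struct area *free_cache;        /* where the last search ended */
          static unsigned long cached_hole_size; /* largest hole seen below it  */
          static unsigned long cached_align;     /* alignment used last time    */

          static struct area *find_free(struct area *head, unsigned long size,
                                        unsigned long align)
          {
                  struct area *a = head;

                  /* Reuse the cache only when no hole we skipped could satisfy
                   * this request and the alignment is no looser than last time;
                   * otherwise reset and scan from the start. */
                  if (free_cache && size >= cached_hole_size && align >= cached_align) {
                          a = free_cache;
                  } else {
                          free_cache = NULL;
                          cached_hole_size = 0;
                  }
                  cached_align = align;

                  for (; a && a->next; a = a->next) {
                          unsigned long hole = a->next->start - a->end;

                          if (hole >= size) {
                                  free_cache = a;  /* start here next time */
                                  return a;
                          }
                          if (hole > cached_hole_size)
                                  cached_hole_size = hole;
                  }
                  return NULL;
          }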
      This is verified to solve regressions reported by Steven in GFS2 and by
      Avi in KVM.
      Hugh's update:
      : I tried out the recent mmotm, and on one machine was fortunate to hit
      : the BUG_ON(first->va_start < addr) which seems to have been stalling
      : your vmap area cache patch ever since May.
      : I can get you addresses etc, I did dump a few out; but once I stared
      : at them, it was easier just to look at the code: and I cannot see how
      : you would be so sure that first->va_start < addr, once you've done
      : that addr = ALIGN(max(...), align) above, if align is over 0x1000
      : (align was 0x8000 or 0x4000 in the cases I hit: ioremaps like Steve).
      : I originally got around it by just changing the
      : 		if (first->va_start < addr) {
      : to
      : 		while (first->va_start < addr) {
      : without thinking about it any further; but that seemed unsatisfactory,
      : why would we want to loop here when we've got another very similar
      : loop just below it?
      : I am never going to admit how long I've spent trying to grasp your
      : "while (n)" rbtree loop just above this, the one with the peculiar
      : 		if (!first && tmp->va_start < addr + size)
      : in.  That's unfamiliar to me, I'm guessing it's designed to save a
      : subsequent rb_next() in a few circumstances (at risk of then setting
      : a wrong cached_hole_size?); but they did appear few to me, and I didn't
      : feel I could sign off something with that in when I don't grasp it,
      : and it seems responsible for extra code and mistaken BUG_ON below it.
      : I've reverted to the familiar rbtree loop that find_vma() does (but
      : with va_end >= addr as you had, to respect the additional guard page):
      : and then (given that cached_hole_size starts out 0) I don't see the
      : need for any complications below it.  If you do want to keep that loop
      : as you had it, please add a comment to explain what it's trying to do,
      : and where addr is relative to first when you emerge from it.
      : Aren't your tests "size <= cached_hole_size" and
      : "addr + size > first->va_start" forgetting the guard page we want
      : before the next area?  I've changed those.
      : I have not changed your many "addr + size - 1 < addr" overflow tests,
      : but have since come to wonder, shouldn't they be "addr + size < addr"
      : tests - won't the vend checks go wrong if addr + size is 0?
      : I have added a few comments - Wolfgang Wander's 2.6.13 description of
      : 1363c3cd ("Avoiding mmap fragmentation")
      : helped me a lot, perhaps a pointer to that would be good too.  And I found
      : it easier to understand when I renamed cached_start slightly and moved the
      : overflow label down.
      : This patch would go after your mm-vmap-area-cache.patch in mmotm.
      : Trivially, nobody is going to get that BUG_ON with this patch, and it
      : appears to work fine on my machines; but I have not given it anything like
      : the testing you did on your original, and may have broken all the
      : performance you were aiming for.  Please take a look, test it out and
      : integrate with yours if you're satisfied - thanks.
      [akpm@linux-foundation.org: add locking comment]
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Reported-and-tested-by: Steven Whitehouse <swhiteho@redhat.com>
      Reported-and-tested-by: Avi Kivity <avi@redhat.com>
      Tested-by: "Barry J. Marson" <bmarson@redhat.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 13 Jan, 2011 5 commits
  3. 12 Jan, 2011 1 commit
    • ACPI, APEI, Generic Hardware Error Source POLL/IRQ/NMI notification type support · 81e88fdc
      Huang Ying authored
      Generic Hardware Error Source provides a way to report platform hardware
      errors (such as those from the chipset).  It works in so-called
      "Firmware First" mode: hardware errors are reported to the firmware
      first, then forwarded to Linux by the firmware.  This way, some
      non-standard hardware error registers or non-standard hardware links can
      be checked by the firmware to produce more valuable hardware error
      information for Linux.
      This patch adds POLL/IRQ/NMI notification types support.
      The memory area used to transfer hardware error information from the
      BIOS to Linux can only be determined in the NMI, IRQ or timer handler,
      where the general ioremap() cannot be used because it is not safe in
      atomic context, so a special atomic version of ioremap is implemented
      for that.
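      A hedged sketch of such an atomic mapping; this uses the fixmap-style
      slot later kernels adopted (FIX_APEI_GHES_NMI), while the original patch
      reserved a vmalloc area at init instead:

          #include <linux/mm.h>
          #include <linux/pfn.h>
          #include <asm/fixmap.h>
          #include <asm/tlbflush.h>

          /* Sketch: map one page for error-record access from NMI/IRQ context
           * using a virtual slot reserved at build time.  No memory is
           * allocated, so this is safe where a regular ioremap() is not. */
          static void __iomem *ghes_ioremap_pfn_atomic(unsigned long pfn)
          {
                  __set_fixmap(FIX_APEI_GHES_NMI, PFN_PHYS(pfn), PAGE_KERNEL);
                  return (void __iomem *)fix_to_virt(FIX_APEI_GHES_NMI);
          }

          static void ghes_iounmap_atomic(void)
          {
                  unsigned long vaddr = fix_to_virt(FIX_APEI_GHES_NMI);

                  clear_fixmap(FIX_APEI_GHES_NMI);
                  flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE);
          }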
      Known issue:
      - Error information cannot be printed for recoverable errors notified
        via NMI, because printk is not NMI-safe.  This will be fixed by
        delaying the printing to IRQ context via irq_work, or by making
        printk NMI-safe.
      - adjust printk format per comments.
      Signed-off-by: Huang Ying <ying.huang@intel.com>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
  4. 02 Dec, 2010 1 commit
    • vmalloc: eagerly clear ptes on vunmap · 64141da5
      Jeremy Fitzhardinge authored
      On stock 2.6.37-rc4, running:
        # mount lilith:/export /mnt/lilith
        # find  /mnt/lilith/ -type f -print0 | xargs -0 file
      crashes the machine fairly quickly under Xen.  Often it results in oops
      messages, but the couple of times I tried just now, it just hung quietly
      and made Xen print some rude messages:
          (XEN) mm.c:2389:d80 Bad type (saw 7400000000000001 != exp
          3000000000000000) for mfn 1d7058 (pfn 18fa7)
          (XEN) mm.c:964:d80 Attempt to create linear p.t. with write perms
          (XEN) mm.c:2389:d80 Bad type (saw 7400000000000010 != exp
          1000000000000000) for mfn 1d2e04 (pfn 1d1fb)
          (XEN) mm.c:2965:d80 Error while pinning mfn 1d2e04
      Which means the domain tried to map a pagetable page RW, which would
      allow it to map arbitrary memory, so Xen stopped it.  This is because
      vm_unmap_ram() left some pages mapped in the vmalloc area after NFS had
      finished with them, and those pages got recycled as pagetable pages
      while still having these RW aliases.
      Removing those mappings immediately removes the Xen-visible aliases, and
      so it has no problem with those pages being reused as pagetable pages.
      Deferring the TLB flush doesn't upset Xen because it can flush the TLB
      itself as needed to maintain its invariants.
      When unmapping a region in the vmalloc space, clear the ptes
      immediately.  There's no point in deferring this because there's no
      amortization benefit.
      The TLBs are left dirty, and they are flushed lazily to amortize the
      cost of the IPIs.
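      A sketch of the resulting teardown split (not a verbatim copy of
      mm/vmalloc.c):

          /* Clear the ptes eagerly, defer only the TLB flush. */
          static void free_unmap_vmap_area_noflush(struct vmap_area *va)
          {
                  unmap_vmap_area(va);        /* clear ptes immediately */
                  free_vmap_area_noflush(va); /* queue lazy TLB flush/IPIs */
          }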
      The specific motivation for this patch is an oops-causing regression
      since 2.6.36 when using NFS under Xen, triggered by the NFS client's use
      of vm_map_ram() introduced in 56e4ebf8 ("NFS: readdir with vmapped
      pages").  XFS also uses vm_map_ram() and could cause similar problems.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Bryan Schumaker <bjschuma@netapp.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: Alex Elder <aelder@sgi.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 26 Oct, 2010 3 commits
  6. 02 Oct, 2010 1 commit
  7. 17 Sep, 2010 1 commit
    • mm, x86: Saving vmcore with non-lazy freeing of vmas · 3ee48b6a
      Cliff Wickman authored
      During the reading of /proc/vmcore the kernel does ioremap()/iounmap()
      repeatedly, and the buildup of unflushed vm_area_structs causes a great
      deal of overhead (rb_next() chews up most of that time).
      The solution is to provide a function, set_iounmap_nonlazy(), which
      causes a subsequent call to iounmap() to purge the vma area immediately
      (with try_purge_vmap_area_lazy()).
      With this patch we have seen the time for writing a 250MB
      compressed dump drop from 71 seconds to 44 seconds.
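      A hedged sketch of the read path with the new hook; copy_oldmem_chunk()
      is an illustrative name for the x86 copy_oldmem_page() logic:

          #include <linux/errno.h>
          #include <linux/io.h>
          #include <linux/pfn.h>
          #include <linux/string.h>

          /* Flag the next iounmap() so the vma is purged right away instead
           * of joining thousands of stale lazy-freed areas. */
          static ssize_t copy_oldmem_chunk(unsigned long pfn, char *buf,
                                           size_t csize)
          {
                  void *vaddr = ioremap_cache(PFN_PHYS(pfn), PAGE_SIZE);

                  if (!vaddr)
                          return -ENOMEM;
                  memcpy(buf, vaddr, csize);
                  set_iounmap_nonlazy();  /* purge on the next iounmap() */
                  iounmap(vaddr);
                  return csize;
          }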
      Signed-off-by: Cliff Wickman <cpw@sgi.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: kexec@lists.infradead.org
      Cc: <stable@kernel.org>
      LKML-Reference: <E1OwHZ4-0005WK-Tw@eag09.americas.sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  8. 08 Sep, 2010 1 commit
  9. 09 Aug, 2010 2 commits
  10. 27 Jul, 2010 1 commit
  11. 09 Jul, 2010 1 commit
    • x86, ioremap: Fix incorrect physical address handling in PAE mode · ffa71f33
      Kenji Kaneshige authored
      The current x86 ioremap() doesn't properly handle physical addresses
      above 32 bits in X86_32 PAE mode.  When such an address is passed to
      ioremap(), the upper 32 bits of the physical address are wrongly
      cleared.  Due to this bug, ioremap() can map the wrong address into the
      linear address space.
      In my case, a 64-bit MMIO region was assigned to a PCI device (an ioat
      device) on my system.  Because of the ioremap() bug, the wrong physical
      address (instead of the MMIO region) was mapped into the linear address
      space.  Because of this, loading the ioatdma driver caused unexpected
      behavior (kernel panic, kernel hangup, ...).
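      A userspace model of the truncation (the kernel fix keeps the address in
      resource_size_t/phys_addr_t, which is 64-bit with PAE):

          #include <stdint.h>
          #include <stdio.h>

          /* On 32-bit PAE, physical addresses are 64-bit, but storing one in
           * a 32-bit unsigned long (modelled with uint32_t) silently drops
           * the upper bits. */
          int main(void)
          {
                  uint64_t phys_addr = 0x380000000ULL;  /* 14 GB MMIO region */
                  uint32_t truncated = (uint32_t)phys_addr;

                  printf("requested %#llx, mapped %#x\n",
                         (unsigned long long)phys_addr, truncated);
                  return 0;
          }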
      Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
      LKML-Reference: <4C1AE680.7090408@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  12. 02 Feb, 2010 2 commits
    • mm: purge fragmented percpu vmap blocks · 02b709df
      Nick Piggin authored
      Improve handling of fragmented per-CPU vmaps.  Previously we didn't free
      per-CPU vmap blocks until all of their addresses had been used and
      freed, so fragmented blocks could fill up vmalloc space even when they
      actually contained no active vmap regions.
      Add some logic to allow all CPUs' blocks to be purged when allocating a
      new vm area fails, and also to trim such blocks on the current CPU if we
      hit them in the allocation path (so as to avoid a large build-up of
      them).
      Christoph reported some vmap allocation failures when using the per-CPU
      vmap APIs in XFS; they cannot be reproduced after this patch and the
      previous bug fix.
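      The purge test amounts to a condition along these lines (a hedged
      sketch; VMAP_BBMAP_BITS is the per-block allocation-map size in
      mm/vmalloc.c):

          /* A block is worth purging when free + dirty space covers the
           * whole block (nothing live remains in it) but the block itself
           * has not already been freed outright. */
          static bool vmap_block_purgeable(const struct vmap_block *vb)
          {
                  return vb->free + vb->dirty == VMAP_BBMAP_BITS &&
                         vb->dirty != VMAP_BBMAP_BITS;
          }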
      Cc: linux-mm@kvack.org
      Cc: stable@kernel.org
      Tested-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: percpu-vmap fix RCU list walking · de560423
      Nick Piggin authored
      RCU list walking of the per-cpu vmap block cache was broken: it did not
      use RCU primitives, and the union of free_list and rcu_head is obviously
      wrong (because free_list is the very list we are RCU-traversing).
      While we are there, remove a couple of unused fields left over from an
      earlier version.
      These APIs aren't actually used anywhere yet, because of problems with
      the XFS conversion.  Christoph has now verified that the problems are
      solved with these patches.  Also it is an exported interface, so I think
      it will be good to have it merged now (and Christoph wants to get the
      XFS changes into their local tree).
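      A sketch of the corrected pattern (illustrative; field and function
      names are simplified from mm/vmalloc.c):

          #include <linux/list.h>
          #include <linux/rcupdate.h>

          /* rcu_head kept separate from the list linkage readers still
           * traverse -- the original union let the RCU callback overwrite
           * free_list while walkers were on it. */
          struct vmap_block {
                  struct list_head free_list;  /* walked under RCU */
                  struct rcu_head rcu_head;    /* must not alias free_list */
          };

          static void walk_block_cache(struct list_head *free_block_list)
          {
                  struct vmap_block *vb;

                  rcu_read_lock();
                  list_for_each_entry_rcu(vb, free_block_list, free_list) {
                          /* ... try to allocate from vb ... */
                  }
                  rcu_read_unlock();
          }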
      Cc: stable@kernel.org
      Cc: linux-mm@kvack.org
      Tested-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. 21 Jan, 2010 1 commit
    • vmalloc: remove BUG_ON due to racy counting of VM_LAZY_FREE · 88f50044
      Yongseok Koh authored
      In free_unmap_area_noflush(), va->flags is marked as VM_LAZY_FREE first, and
      then vmap_lazy_nr is increased atomically.
      But in __purge_vmap_area_lazy(), while traversing vmap_area_list, nr is
      counted by checking whether VM_LAZY_FREE is set in va->flags.  After
      counting into nr, the kernel reads vmap_lazy_nr atomically and a BUG_ON
      condition checks that nr is not greater than vmap_lazy_nr, to prevent
      vmap_lazy_nr from going negative.
      The problem is that, if interrupted right after marking VM_LAZY_FREE,
      the increment of vmap_lazy_nr can be delayed.  Consequently, the BUG_ON
      condition can be met because nr is counted higher than vmap_lazy_nr.
      This is highly probable when vmalloc/vfree are called frequently.  The
      scenario has been verified by adding a delay between marking
      VM_LAZY_FREE and increasing vmap_lazy_nr in free_unmap_area_noflush().
      Even though vmap_lazy_nr is there to check a high watermark, it was
      never meant to be a strict limit.  And although the BUG_ON condition is
      meant to keep vmap_lazy_nr from going negative, vmap_lazy_nr is a signed
      variable, so it may legitimately go negative temporarily.
      Consequently, removing the BUG_ON condition is proper.
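      An illustrative timeline of the race:

          /* Illustrative, not kernel code:
           *
           *   CPU 0: free_unmap_area_noflush()   CPU 1: __purge_vmap_area_lazy()
           *   ---------------------------------  --------------------------------
           *   va->flags |= VM_LAZY_FREE;
           *   <preempted before the atomic add>  nr = 0;
           *                                      for each va on vmap_area_list
           *                                              if (va->flags & VM_LAZY_FREE)
           *                                                      nr++;  // sees CPU 0's va
           *                                      BUG_ON(nr > atomic_read(&vmap_lazy_nr));
           *                                              // fires: counter not bumped yet
           *   atomic_add(nr_pages, &vmap_lazy_nr);  // too late
           */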
      A possible BUG_ON message looks like the one below.
         kernel BUG at mm/vmalloc.c:517!
         invalid opcode: 0000 [#1] SMP
         EIP: 0060:[<c04824a4>] EFLAGS: 00010297 CPU: 3
         EIP is at __purge_vmap_area_lazy+0x144/0x150
         EAX: ee8a8818 EBX: c08e77d4 ECX: e7c7ae40 EDX: c08e77ec
         ESI: 000081fe EDI: e7c7ae60 EBP: e7c7ae64 ESP: e7c7ae3c
         DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
         Call Trace:
         [<c0482ad9>] free_unmap_vmap_area_noflush+0x69/0x70
         [<c0482b02>] remove_vm_area+0x22/0x70
         [<c0482c15>] __vunmap+0x45/0xe0
         [<c04831ec>] vmalloc+0x2c/0x30
         Code: 8d 59 e0 eb 04 66 90 89 cb 89 d0 e8 87 fe ff ff 8b 43 20 89 da 8d 48 e0 8d 43 20 3b 04 24 75 e7 fe 05 a8 a5 a3 c0 e9 78 ff ff ff <0f> 0b eb fe 90 8d b4 26 00 00 00 00 56 89 c6 b8 ac a5 a3 c0 31
         EIP: [<c04824a4>] __purge_vmap_area_lazy+0x144/0x150 SS:ESP 0068:e7c7ae3c
      [ See also http://marc.info/?l=linux-kernel&m=126335856228090&w=2 ]
      Signed-off-by: Yongseok Koh <yongseok.koh@samsung.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 15 Dec, 2009 1 commit
  15. 29 Oct, 2009 1 commit
  16. 11 Oct, 2009 1 commit
  17. 08 Oct, 2009 2 commits
  18. 23 Sep, 2009 1 commit
  19. 22 Sep, 2009 4 commits
  20. 14 Aug, 2009 2 commits
    • vmalloc: implement pcpu_get_vm_areas() · ca23e405
      Tejun Heo authored
      To directly use spread NUMA memories for percpu units, the percpu
      allocator will be updated to allow sparsely mapping units in a chunk.
      As the distances between units can be very large, this makes allocating
      a single vmap area for each chunk undesirable.  This patch implements
      pcpu_get_vm_areas() and pcpu_free_vm_areas(), which allocate and free
      sparse congruent vmap areas.
      pcpu_get_vm_areas() takes @offsets and @sizes arrays which define the
      distances and sizes of the vmap areas.  It scans down from the top of
      the vmalloc area looking for the top-most address which can accommodate
      all the areas.  The scan is top-down to avoid interacting with regular
      vmallocs, which could push these congruent areas up little by little,
      ending up wasting address space and page tables.
      To speed up the top-down scan, the highest possible address hint is
      maintained.  Although the scan is linear from the hint, given the usual
      large holes between memory addresses of different NUMA nodes, the scan
      is highly likely to finish after finding the first hole for the last
      unit, which is scanned first.
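      A hedged usage sketch (the offsets and sizes are made-up values for
      illustration):

          #include <linux/vmalloc.h>

          /* Request two congruent areas 16MB apart. */
          static int alloc_two_congruent_areas(void)
          {
                  const unsigned long offsets[] = { 0, 16UL << 20 };
                  const size_t sizes[] = { 4 * PAGE_SIZE, 4 * PAGE_SIZE };
                  struct vm_struct **vms;

                  vms = pcpu_get_vm_areas(offsets, sizes, 2, PAGE_SIZE);
                  if (!vms)
                          return -ENOMEM;
                  /* ... map the percpu pages into vms[0] and vms[1] ... */
                  pcpu_free_vm_areas(vms, 2);
                  return 0;
          }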
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Nick Piggin <npiggin@suse.de>
    • vmalloc: separate out insert_vmalloc_vm() · cf88c790
      Tejun Heo authored
      Separate out insert_vmalloc_vm() from __get_vm_area_node().
      insert_vmalloc_vm() initializes a vm_struct from a vmap_area and inserts
      it into vmlist.  It initializes only the fields which can be determined
      from @vm, @flags and @caller; the rest should be initialized by the
      caller.  For __get_vm_area_node(), all other fields just need to be
      cleared, and this is done by using kzalloc instead of kmalloc.
      This will be used to implement pcpu_get_vm_areas().
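      A sketch of the separated function, reconstructed from the description
      above (locking and the sorted vmlist insert are elided):

          /* Set only what follows from @va, @flags and @caller; everything
           * else is the caller's job. */
          static void insert_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
                                        unsigned long flags, void *caller)
          {
                  vm->flags = flags;
                  vm->addr = (void *)va->va_start;
                  vm->size = va->va_end - va->va_start;
                  vm->caller = caller;
                  va->private = vm;
                  va->flags |= VM_VM_AREA;
                  /* ... link vm into vmlist here ... */
          }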
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Nick Piggin <npiggin@suse.de>
  21. 11 Jun, 2009 2 commits
  22. 06 May, 2009 1 commit
  23. 01 Apr, 2009 1 commit
  24. 27 Feb, 2009 2 commits
    • mm: fix lazy vmap purging (use-after-free error) · cbb76676
      Vegard Nossum authored
      I just got this new warning from kmemcheck:
          WARNING: kmemcheck: Caught 32-bit read from freed memory (c7806a60)
           f f f f f f f f f f f f f f f f f f f f f f f f f f f f f f f f
          Pid: 0, comm: swapper Not tainted (2.6.29-rc4 #230)
          EIP: 0060:[<c1096df7>] EFLAGS: 00000286 CPU: 0
          EIP is at __purge_vmap_area_lazy+0x117/0x140
          EAX: 00070f43 EBX: c7806a40 ECX: c1677080 EDX: 00027b66
          ESI: 00002001 EDI: c170df0c EBP: c170df00 ESP: c178830c
           DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
          CR0: 80050033 CR2: c7806b14 CR3: 01775000 CR4: 00000690
          DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
          DR6: 00004000 DR7: 00000000
           [<c1096f3e>] free_unmap_vmap_area_noflush+0x6e/0x70
           [<c1096f6a>] remove_vm_area+0x2a/0x70
           [<c1097025>] __vunmap+0x45/0xe0
           [<c10970de>] vunmap+0x1e/0x30
           [<c1008ba5>] text_poke+0x95/0x150
           [<c1008ca9>] alternatives_smp_unlock+0x49/0x60
           [<c171ef47>] alternative_instructions+0x11b/0x124
           [<c171f991>] check_bugs+0xbd/0xdc
           [<c17148c5>] start_kernel+0x2ed/0x360
           [<c171409e>] __init_begin+0x9e/0xa9
           [<ffffffff>] 0xffffffff
      It happened here:
          $ addr2line -e vmlinux -i c1096df7
      	list_for_each_entry(va, &valist, purge_list)
      It's this instruction:
          mov    0x20(%ebx),%edx
      Which corresponds to a dereference of va->purge_list.next:
          (gdb) p ((struct vmap_area *) 0)->purge_list.next
          Cannot access memory at address 0x20
      It seems that we should use "safe" list traversal here, as the element
      is freed inside the loop. Please verify that this is the right fix.
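      The essence of the fix: the _safe variant of the list iterator caches
      the next pointer before the loop body frees the current element (a
      minimal sketch of the corrected loop in __purge_vmap_area_lazy()):

          #include <linux/list.h>

          static void purge_list_safely(struct list_head *valist)
          {
                  struct vmap_area *va, *n_va;

                  list_for_each_entry_safe(va, n_va, valist, purge_list)
                          __free_vmap_area(va);  /* frees va; n_va already saved */
          }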
      Acked-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: <stable@kernel.org>		[2.6.28.x]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: vmap fix overflow · 7766970c
      Nick Piggin authored
      The new vmap allocator can wrap the address and get confused in the case
      of large allocations or VMALLOC_END near the end of address space.
      Problem reported by Christoph Hellwig on a 32-bit XFS workload.
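      A userspace model of the wrap-around that the fix guards against:

          #include <stdio.h>

          /* Near the top of the address space, addr + size overflows and a
           * naive "fits below vend" check passes with a wrapped sum. */
          int main(void)
          {
                  unsigned long vend = ~0UL - 4096;  /* VMALLOC_END near top */
                  unsigned long addr = ~0UL - 8192;  /* candidate start */
                  unsigned long size = 3UL * 4096;   /* large request */

                  if (addr + size < addr)            /* the added guard */
                          puts("overflow detected, allocation refused");
                  else if (addr + size <= vend)
                          puts("bogus: wrapped sum looked in-range");
                  return 0;
          }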
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Reported-by: Christoph Hellwig <hch@lst.de>
      Cc: <stable@kernel.org>		[2.6.28.x]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>