1. 29 Oct, 2005 17 commits
  2. 28 Oct, 2005 3 commits
  3. 26 Oct, 2005 1 commit
  4. 20 Oct, 2005 3 commits
    • [PATCH] Fix handling spurious page fault for hugetlb region · ac9b9c66
      Hugh Dickins authored
      This reverts commit 3359b54c and
      replaces it with a cleaner version that is purely based on page table
      operations, so that the synchronization between inode size and hugetlb
      mappings becomes moot.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] swiotlb: make sure initial DMA allocations really are in DMA memory · 281dd25c
      Yasunori Goto authored
      This introduces a limit parameter to the core bootmem allocator.  The new
      parameter indicates that physical memory allocated by the bootmem
      allocator should be within the requested limit.
      We also introduce alloc_bootmem_low_pages_limit, alloc_bootmem_node_limit,
      alloc_bootmem_low_pages_node_limit apis, but alloc_bootmem_low_pages_limit
      is the only api used for swiotlb.
      The existing alloc_bootmem_low_pages() api could instead have been
      changed and made to pass right limit to the core allocator.  But that
      would make the patch more intrusive for 2.6.14, as other arches use
      alloc_bootmem_low_pages().  We may want to do that post 2.6.14 as a
      cleanup.
      With this, swiotlb gets memory within 4G for both x86_64 and ia64.
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Ravikiran G Thirumalai <kiran@scalex86.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
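      The idea behind the new limit parameter can be sketched as a bump allocator that refuses to hand out physical memory above a caller-supplied bound. This is only an illustration: the function name, memory layout, and starting address below are hypothetical, not the kernel's bootmem internals.

```c
#include <stdint.h>

#define DMA32_LIMIT 0x100000000ULL  /* 4 GiB: swiotlb must stay below this */

static uint64_t next_free = 0xFFF00000ULL;  /* hypothetical next free phys addr */

/* Return the physical address of 'size' bytes, or 0 if the allocation
 * would cross 'limit' (limit == 0 means "no limit", matching the
 * behaviour of the unchanged APIs). */
static uint64_t alloc_bootmem_limit(uint64_t size, uint64_t limit)
{
    uint64_t addr = next_free;

    if (limit && addr + size > limit)
        return 0;            /* caller must fall back or fail */
    next_free += size;
    return addr;
}
```

      With a layout like this, swiotlb would pass DMA32_LIMIT, while existing callers pass 0 and see no behaviour change.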
    • [PATCH] mm: hugetlb truncation fixes · 1c59827d
      Hugh Dickins authored
      hugetlbfs allows truncation of its files (should it?), but hugetlb.c often
      forgets that: crashes and misaccounting ensue.
      copy_hugetlb_page_range better grab the src page_table_lock since we don't
      want to guess what happens if concurrently truncated.  unmap_hugepage_range
      rss accounting must not assume the full range was mapped.  follow_hugetlb_page
      must guard with page_table_lock and be prepared to exit early.
      Restyle copy_hugetlb_page_range with a for loop like the others there.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
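      The accounting fix can be sketched as follows: after a concurrent truncation, some PTEs in the unmapped range may already be clear, so rss must count only the huge pages actually present rather than assuming the full range was mapped. The page table here is a toy array (0 means "PTE cleared by truncation"); the function name is illustrative.

```c
/* Count only the huge pages actually present in the range; holes left
 * by a concurrent truncation must not be charged against rss. */
static int unmap_rss_sketch(const int *ptes, int n)
{
    int freed = 0;

    for (int i = 0; i < n; i++)
        if (ptes[i])          /* skip holes left by truncation */
            freed++;
    return freed;
}
```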
  5. 19 Oct, 2005 1 commit
    • [PATCH] Handle spurious page fault for hugetlb region · 3359b54c
      Seth, Rohit authored
      The hugetlb pages are currently pre-faulted.  At the time of mmap of
      hugepages, we populate the new PTEs.  It is possible that HW has already
      cached some of the unused PTEs internally.  These stale entries never
      get a chance to be purged in existing control flow.
      This patch extends the check in page fault code for hugepages.  Check
      whether a faulted address falls within the size of the hugetlb file
      backing it.
      We return VM_FAULT_MINOR for these cases (assuming that the arch
      specific page-faulting code purges the stale entry for the archs that
      need it).
      Signed-off-by: Rohit Seth <rohit.seth@intel.com>
      [ This is apparently arguably an ia64 port bug. But the code won't
        hurt, and for now it fixes a real problem on some ia64 machines ]
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
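      The check this patch adds can be sketched like so: a fault inside the mapping but beyond the backing file's size is a genuine error, while one inside the file's size is treated as spurious, returning VM_FAULT_MINOR so the arch fault path just purges the stale TLB entry. The helper name and the huge page size constant are illustrative, not the kernel's.

```c
#include <stdint.h>

#define VM_FAULT_MINOR  0
#define VM_FAULT_SIGBUS 1

#define HPAGE_SIZE (2ULL * 1024 * 1024)   /* example huge page size */

/* vm_start: start of the mapping; vm_pgoff_bytes: byte offset of the
 * mapping into the file; i_size: current size of the hugetlb file. */
static int hugetlb_fault_check(uint64_t addr, uint64_t vm_start,
                               uint64_t vm_pgoff_bytes, uint64_t i_size)
{
    uint64_t file_off = (addr - vm_start) + vm_pgoff_bytes;

    if (file_off >= i_size)
        return VM_FAULT_SIGBUS;  /* fault beyond end of file */
    return VM_FAULT_MINOR;       /* stale entry: arch code purges it */
}
```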
  6. 16 Oct, 2005 1 commit
    • Fix memory ordering bug in page reclaim · 3d80636a
      Linus Torvalds authored
      As noticed by Nick Piggin, we need to make sure that we check the page
      count before we check for PageDirty, since the dirty check is only valid
      if the count implies that we're the only possible ones holding the page.
      We always did do this, but the code needs a read-memory-barrier to make
      sure that the ordering is also honored by the CPU.
      (The writer side is ordered due to the atomic decrement and test on the
      page count, see the discussion on linux-kernel)
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
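      The ordering requirement can be sketched with C11 atomics: the reclaim side must read the page count before it reads the dirty flag, with a read barrier between the two so the CPU honors that order, while the writer side is already ordered by the atomic decrement-and-test. This is an illustration of the pattern under a toy page structure, not the kernel's code.

```c
#include <stdatomic.h>
#include <stdbool.h>

struct page {
    atomic_int  count;
    atomic_bool dirty;
};

/* Writer side: mark dirty, then drop the reference.  The release
 * decrement-and-test orders the dirty store before the count change. */
static bool put_page_testzero(struct page *p)
{
    return atomic_fetch_sub_explicit(&p->count, 1,
                                     memory_order_release) == 1;
}

/* Reclaim side: the dirty check is only valid if we hold the last
 * reference; the acquire fence plays the role of smp_rmb() in the fix. */
static bool can_reclaim_clean(struct page *p)
{
    if (atomic_load_explicit(&p->count, memory_order_relaxed) != 1)
        return false;
    atomic_thread_fence(memory_order_acquire);   /* smp_rmb() */
    return !atomic_load_explicit(&p->dirty, memory_order_relaxed);
}
```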
  7. 11 Oct, 2005 2 commits
  8. 08 Oct, 2005 1 commit
  9. 30 Sep, 2005 1 commit
  10. 28 Sep, 2005 2 commits
  11. 23 Sep, 2005 1 commit
  12. 22 Sep, 2005 4 commits
    • [PATCH] Fix bd_claim() error code. · f7b3a435
      Rob Landley authored
      Problem: In some circumstances, bd_claim() is returning the wrong error
      code.
      If we try to swapon an unused block device that isn't swap formatted, we
      get -EINVAL.  But if that same block device is already mounted, we instead
      get -EBUSY, even though it still isn't a valid swap device.
      This issue came up on the busybox list trying to get the error message
      from "swapon -a" right.  If a swap device is already enabled, we get -EBUSY,
      and we shouldn't report this as an error.  But we can't distinguish the two
      -EBUSY conditions, which are very different errors.
      In the code, bd_claim() returns either 0 or -EBUSY, but in this case busy
      means "somebody other than sys_swapon has already claimed this", and
      _that_ means this block device can't be a valid swap device.  So return
      -EINVAL there.
      Signed-off-by: Rob Landley <rob@landley.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
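      The error mapping can be sketched as follows: in the swapon path, -EBUSY from claiming the block device means someone other than sys_swapon owns it, so it cannot be a valid swap device and -EINVAL is the right answer. The errno values are real; the claim stub standing in for bd_claim() is hypothetical.

```c
#include <errno.h>

/* Hypothetical stand-in for bd_claim(): 0 on success, -EBUSY when the
 * device is already claimed (e.g. mounted). */
static int bd_claim_stub(int already_claimed)
{
    return already_claimed ? -EBUSY : 0;
}

static int swapon_claim(int already_claimed)
{
    int err = bd_claim_stub(already_claimed);

    if (err < 0)
        return -EINVAL;   /* claimed elsewhere: not a valid swap device */
    return 0;
}
```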
    • [PATCH] __kmalloc: Generate BUG if size requested is too large. · eafb4270
      Christoph Lameter authored
      I had an issue on ia64 where I got a bug in kernel/workqueue because
      kzalloc returned a NULL pointer due to the task structure getting too big
      for the slab allocator.  Usually these cases are caught by the kmalloc
      macro in include/linux/slab.h.
      Compilation will fail if a too big value is passed to kmalloc.
      However, kzalloc uses __kmalloc which has no check for that.  This patch
      makes __kmalloc bug if a too large entity is requested.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
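      The added check can be sketched like this: kmalloc() catches oversized constant requests at compile time, but __kmalloc() (which kzalloc uses) must also reject oversized runtime sizes loudly instead of silently returning NULL. The size limit and the BUG() stand-in below are illustrative, not the kernel's values.

```c
#include <stdlib.h>
#include <stddef.h>

#define KMALLOC_MAX_SIZE (128 * 1024)   /* example largest slab size */

static int bug_hit;                     /* records that BUG() fired */
#define BUG() do { bug_hit = 1; } while (0)

static void *__kmalloc_sketch(size_t size)
{
    if (size > KMALLOC_MAX_SIZE) {
        BUG();            /* loud failure instead of a silent NULL */
        return NULL;
    }
    return malloc(size);
}
```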
    • [PATCH] slab: fix handling of pages from foreign NUMA nodes · ff69416e
      Christoph Lameter authored
      The numa slab allocator may allocate pages from foreign nodes onto the
      lists for a particular node if a node runs out of memory.  Inspecting the
      slab->nodeid field will not reflect that the page is now in use for the
      slabs of another node.
      This patch fixes that issue by adding a node field to free_block so that
      the caller can indicate which node currently uses a slab.
      Also removes the check for the current node from kmalloc_cache_node since
      the process may shift later to another node which may lead to an allocation
      on another node than intended.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
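      The shape of the free_block() change can be sketched as follows: the caller passes the node whose lists the freed objects belong to, instead of the callee trusting the page's original slab->nodeid, so a foreign page is accounted where it is actually in use. The structures below are a toy model, not the slab allocator's.

```c
#define MAX_NODES 2

static int free_objects[MAX_NODES];   /* per-node free-object counters */

/* The old signature took no node and used the slab's home node; the
 * fix adds 'node' so the caller can say which node the objects are
 * accounted to. */
static void free_block_sketch(int node, int nr_objects)
{
    free_objects[node] += nr_objects;
}
```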
    • [PATCH] slab: alpha inlining fix · 7243cc05
      Ivan Kokshaysky authored
      It is essential that index_of() be inlined.  But alpha undoes the gcc
      inlining hackery and index_of() ends up out-of-line.  So fiddle with things
      to make that function inline again.
      Cc: Richard Henderson <rth@twiddle.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  13. 21 Sep, 2005 2 commits
  14. 17 Sep, 2005 1 commit