1. 16 Oct, 2007 40 commits
    • KAMEZAWA Hiroyuki's avatar
      memory unplug: memory hotplug cleanup · 75884fb1
      KAMEZAWA Hiroyuki authored
      
      
      A clean up patch for "scanning memory resource [start, end)" operation.
      
      Currently, the find_next_system_ram() function is used in memory hotplug, but
      this interface is not easy to use and the calling code is complicated.
      
      This patch adds a walk_memory_resource(start, len, arg, func) function.
      The function 'func' is called for each valid memory resource range in
      [start, start+len).
      
      [pbadari@us.ibm.com: Error handling in walk_memory_resource()]
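      
      A minimal sketch of how a caller might use the new interface (editorial
      illustration, not part of the commit; the callback and caller names are
      hypothetical, only walk_memory_resource() itself comes from the patch):
      
      	/* called once per valid System RAM range; non-zero return aborts the walk */
      	static int count_ram_pages(unsigned long start_pfn, unsigned long nr_pages,
      				   void *arg)
      	{
      		*(unsigned long *)arg += nr_pages;
      		return 0;
      	}
      
      	static unsigned long count_ram(unsigned long start_pfn, unsigned long nr_pages)
      	{
      		unsigned long total = 0;
      
      		walk_memory_resource(start_pfn, nr_pages, &total, count_ram_pages);
      		return total;
      	}
      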
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      75884fb1
    • Mel Gorman's avatar
      Breakout page_order() to internal.h to avoid special knowledge of the buddy allocator · 48f13bf3
      Mel Gorman authored
      
      
      A later statistics patch needs to know what order a free page is on the free
      lists.  Rather than having special knowledge of page_private() when
      PageBuddy() is set, this patch moves page_order() out to internal.h and adds
      a VM_BUG_ON to catch using it on non-PageBuddy pages.
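      
      A minimal sketch of the helper as described above (editorial illustration;
      the real definition lives in mm/internal.h):
      
      	/* page_order() is only valid when PageBuddy() is set */
      	static inline unsigned long page_order(struct page *page)
      	{
      		VM_BUG_ON(!PageBuddy(page));
      		return page_private(page);
      	}
      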
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Acked-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      48f13bf3
    • Andrew Morton's avatar
      slub: list_locations() can use GFP_TEMPORARY · ea3061d2
      Andrew Morton authored
      
      
      It's a short-lived allocation.
      
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ea3061d2
    • Christoph Lameter's avatar
      SLUB: Optimize cacheline use for zeroing · 42a9fdbb
      Christoph Lameter authored
      
      
      We touch a cacheline in the kmem_cache structure for zeroing to get the
      size. However, the hot paths in slab_alloc and slab_free do not reference
      any other fields in kmem_cache, so we may end up bringing in the
      cacheline just for this one access.
      
      Add a new field to kmem_cache_cpu that contains the object size. That
      cacheline must already be used in the hotpaths. So we save one cacheline
      on every slab_alloc if we zero.
      
      We need to update the kmem_cache_cpu object size if an aliasing operation
      changes the objsize of a non-debug slab.
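      
      For illustration, the per-cpu structure after this change looks roughly like
      the sketch below (editorial; field order and any extra members in mm/slub.c
      may differ):
      
      	struct kmem_cache_cpu {
      		void **freelist;	/* pointer to the next available object */
      		struct page *page;	/* the slab we are allocating from */
      		int node;		/* node of the cpu slab */
      		unsigned int offset;	/* free pointer offset (in words) */
      		unsigned int objsize;	/* object size, so zeroing does not have
      					 * to touch struct kmem_cache */
      	};
      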
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      42a9fdbb
    • Christoph Lameter's avatar
      SLUB: Place kmem_cache_cpu structures in a NUMA aware way · 4c93c355
      Christoph Lameter authored
      
      
      The kmem_cache_cpu structures introduced are currently an array placed in the
      kmem_cache struct, which means the kmem_cache_cpu structures are overwhelmingly
      on the wrong node for systems with a larger number of nodes. These are
      performance critical structures since the per node information has
      to be touched for every alloc and free in a slab.
      
      In order to place the kmem_cache_cpu structure optimally we put an array
      of pointers to kmem_cache_cpu structs in kmem_cache (similar to SLAB).
      
      However, the kmem_cache_cpu structures can now be allocated in a more
      intelligent way.
      
      We would like to put per cpu structures for the same cpu but different
      slab caches in cachelines together to save space and decrease the cache
      footprint. However, the slab allocator itself controls only allocations
      per node. We set up a simple per cpu array for every processor with
      100 per cpu structures which is usually enough to get them all set up right.
      If we run out then we fall back to kmalloc_node. This also solves the
      bootstrap problem since we do not have to use slab allocator functions
      early in boot to get memory for the small per cpu structures.
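      
      An editorial sketch of the allocation scheme described above (the constant,
      the helper name pop_cpu_pool() and the exact layout are illustrative, not
      lifted from the patch):
      
      	#define NR_KMEM_CACHE_CPU 100
      
      	/* small per-cpu pool, usable before the slab allocator is fully up */
      	static DEFINE_PER_CPU(struct kmem_cache_cpu,
      			      kmem_cache_cpu)[NR_KMEM_CACHE_CPU];
      
      	static struct kmem_cache_cpu *alloc_kmem_cache_cpu(int cpu, gfp_t flags)
      	{
      		struct kmem_cache_cpu *c = pop_cpu_pool(cpu);	/* hypothetical helper */
      
      		if (!c)
      			/* pool exhausted: fall back to node-local kmalloc */
      			c = kmalloc_node(sizeof(*c), flags, cpu_to_node(cpu));
      		return c;
      	}
      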
      
      Pro:
      	- NUMA aware placement improves memory performance
      	- All global structures in struct kmem_cache become readonly
      	- Dense packing of per cpu structures reduces cacheline
      	  footprint in SMP and NUMA.
      	- Potential avoidance of exclusive cacheline fetches
      	  on the free and alloc hotpath since multiple kmem_cache_cpu
      	  structures are in one cacheline. This is particularly important
      	  for the kmalloc array.
      
      Cons:
      	- Additional reference to one read only cacheline (per cpu
      	  array of pointers to kmem_cache_cpu) in both slab_alloc()
      	  and slab_free().
      
      [akinobu.mita@gmail.com: fix cpu hotplug offline/online path]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: "Pekka Enberg" <penberg@cs.helsinki.fi>
      Cc: Akinobu Mita <akinobu.mita@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4c93c355
    • Christoph Lameter's avatar
      SLUB: Avoid touching page struct when freeing to per cpu slab · ee3c72a1
      Christoph Lameter authored
      
      
      Set c->node to -1 if we allocate from a debug slab, instead of checking
      SlabDebug, which requires accessing the page struct cacheline.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Tested-by: Alexey Dobriyan <adobriyan@sw.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ee3c72a1
    • Christoph Lameter's avatar
      SLUB: Move page->offset to kmem_cache_cpu->offset · b3fba8da
      Christoph Lameter authored
      
      
      We need the offset from the page struct during slab_alloc and slab_free. In
      both cases we also reference the cacheline of the kmem_cache_cpu structure.
      We can therefore move the offset field into the kmem_cache_cpu structure
      freeing up 16 bits in the page struct.
      
      Moving the offset allows an allocation from slab_alloc() without touching the
      page struct in the hot path.
      
      The only thing left in slab_free() that touches the page struct cacheline for
      per cpu freeing is the checking of SlabDebug(page). The next patch deals with
      that.
      
      Use the available 16 bits to broaden page->inuse. More than 64k objects per
      slab become possible and we can get rid of the checks for that limitation.
      
      No need anymore to shrink the order of slabs if we boot with 2M sized slabs
      (slub_min_order=9).
      
      No need anymore to switch off the offset calculation for very large slabs
      since the field in the kmem_cache_cpu structure is 32 bits and so the offset
      field can now handle slab sizes of up to 8GB.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b3fba8da
    • Christoph Lameter's avatar
      SLUB: Do not use page->mapping · 8e65d24c
      Christoph Lameter authored
      
      
      After moving the lockless_freelist to kmem_cache_cpu we no longer need
      page->lockless_freelist. Restructure the use of the struct page fields in
      such a way that we never touch the mapping field.
      
      This in turn allows us to remove the special casing of SLUB when determining
      the mapping of a page (needed for corner cases on machines with virtually
      tagged caches that need to flush the caches of processors mapping a page).
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8e65d24c
    • Christoph Lameter's avatar
      SLUB: Avoid page struct cacheline bouncing due to remote frees to cpu slab · dfb4f096
      Christoph Lameter authored
      
      
      A remote free may access the same page struct that also contains the lockless
      freelist for the cpu slab. If objects have a short lifetime and are freed by
      a different processor then remote frees back to the slab from which we are
      currently allocating are frequent. The cacheline with the page struct needs
      to be repeatedly acquired in exclusive mode by both the allocating thread and
      the freeing thread. If this is frequent enough then performance will suffer
      because of cacheline bouncing.
      
      This patchset puts the lockless_freelist pointer in its own cacheline. In
      order to make that happen we introduce a per cpu structure called
      kmem_cache_cpu.
      
      Instead of keeping an array of pointers to page structs we now keep an array
      of pointers to a per cpu structure that--among other things--contains the pointer to the
      lockless freelist. The freeing thread can then keep possession of exclusive
      access to the page struct cacheline while the allocating thread keeps its
      exclusive access to the cacheline containing the per cpu structure.
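      
      An editorial sketch of the resulting lockless fast path (simplified; names
      follow SLUB conventions but this is not the code from the patch):
      
      	static void *slab_alloc_fastpath(struct kmem_cache *s,
      					 struct kmem_cache_cpu *c)
      	{
      		void **object = c->freelist;
      
      		if (unlikely(!object))
      			return NULL;	/* empty: take the slow path instead */
      
      		/* the link to the next free object is stored inside the object */
      		c->freelist = object[c->offset];
      		return object;
      	}
      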
      
      This works as long as the allocating cpu is able to service its request
      from the lockless freelist. If the lockless freelist runs empty then the
      allocating thread needs to acquire exclusive access to the cacheline with
      the page struct and lock the slab.
      
      The allocating thread will then check if new objects were freed to the per
      cpu slab. If so it will keep the slab as the cpu slab and continue with the
      recently remote freed objects. So the allocating thread can take a series
      of just freed remote pages and dish them out again. Ideally allocations
      could be just recycling objects in the same slab this way which will lead
      to an ideal allocation / remote free pattern.
      
      The number of objects that can be handled in this way is limited by the
      capacity of one slab. Increasing slab size via slub_min_objects/
      slub_max_order may increase the number of objects and therefore performance.
      
      If the allocating thread runs out of objects and finds that no objects were
      put back by the remote processor then it will retrieve a new slab (from the
      partial lists or from the page allocator) and start with a whole
      new set of objects while the remote thread may still be freeing objects to
      the old cpu slab. This may then repeat until the new slab is also exhausted.
      If remote freeing has freed objects in the earlier slab then that earlier
      slab will now be on the partial freelist and the allocating thread will
      pick that slab next for allocation. So the loop is extended. However,
      both threads need to take the list_lock to make the swizzling via
      the partial list happen.
      
      It is likely that this kind of scheme will keep the objects being passed
      around to a small set that can be kept in the cpu caches leading to increased
      performance.
      
      More code cleanups become possible:
      
      - Instead of passing a cpu we can now pass a kmem_cache_cpu structure around.
        Allows reducing the number of parameters to various functions.
      - Can define a new node_match() function for NUMA to encapsulate locality
        checks.
      
      Effect on allocations:
      
      Cachelines touched before this patch:
      
      	Write:	page cache struct and first cacheline of object
      
      Cachelines touched after this patch:
      
      	Write:	kmem_cache_cpu cacheline and first cacheline of object
      	Read: page cache struct (but see later patch that avoids touching
      		that cacheline)
      
      The handling when the lockless alloc list runs empty gets to be a bit more
      complicated since another cacheline has now to be written to. But that is
      halfway out of the hot path.
      
      Effect on freeing:
      
      Cachelines touched before this patch:
      
      	Write: page_struct and first cacheline of object
      
      Cachelines touched after this patch depending on how we free:
      
        Write(to cpu_slab):	kmem_cache_cpu struct and first cacheline of object
        Write(to other):	page struct and first cacheline of object
      
        Read(to cpu_slab):	page struct to id slab etc. (but see later patch that
        			avoids touching the page struct on free)
        Read(to other):	cpu local kmem_cache_cpu struct to verify its not
        			the cpu slab.
      
      Summary:
      
      Pro:
      	- Distinct cachelines so that concurrent remote frees and local
      	  allocs on a cpuslab can occur without cacheline bouncing.
      	- Avoids potential bouncing cachelines because of neighboring
      	  per cpu pointer updates in kmem_cache's cpu_slab structure since
      	  it now grows to a cacheline (Therefore remove the comment
      	  that talks about that concern).
      
      Cons:
      	- Freeing objects now requires the reading of one additional
      	  cacheline. That can be mitigated for some cases by the following
      	  patches but it's not possible to completely eliminate these
      	  references.
      
      	- Memory usage grows slightly.
      
      	The size of each per cpu object is blown up from one word
      	(pointing to the page_struct) to one cacheline with various data.
      	So this is NR_CPUS*NR_SLABS*L1_BYTES more memory use. Let's say
      	NR_SLABS is 100 and the cache line size is 128 bytes; then we have just
      	increased SLAB metadata requirements by 12.8k per cpu.
      	(Another later patch reduces these requirements)
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dfb4f096
    • Adrian Bunk's avatar
      mm/page_alloc.c: make code static · 484f51f8
      Adrian Bunk authored
      
      
      This patch makes needlessly global code static.
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      484f51f8
    • Mel Gorman's avatar
      Print out statistics in relation to fragmentation avoidance to /proc/pagetypeinfo · 467c996c
      Mel Gorman authored
      
      
      This patch provides fragmentation avoidance statistics via /proc/pagetypeinfo.
       The information is collected only on request so there is no runtime overhead.
       The statistics are in three parts:
      
      The first part prints information on the size of blocks that pages are
      being grouped on and looks like
      
      Page block order: 10
      Pages per block:  1024
      
      The second part is a more detailed version of /proc/buddyinfo and looks like
      
      Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
      Node    0, zone      DMA, type    Unmovable      0      0      0      0      0      0      0      0      0      0      0
      Node    0, zone      DMA, type  Reclaimable      1      0      0      0      0      0      0      0      0      0      0
      Node    0, zone      DMA, type      Movable      0      0      0      0      0      0      0      0      0      0      0
      Node    0, zone      DMA, type      Reserve      0      4      4      0      0      0      0      1      0      1      0
      Node    0, zone   Normal, type    Unmovable    111      8      4      4      2      3      1      0      0      0      0
      Node    0, zone   Normal, type  Reclaimable    293     89      8      0      0      0      0      0      0      0      0
      Node    0, zone   Normal, type      Movable      1      6     13      9      7      6      3      0      0      0      0
      Node    0, zone   Normal, type      Reserve      0      0      0      0      0      0      0      0      0      0      4
      
      The third part looks like
      
      Number of blocks type     Unmovable  Reclaimable      Movable      Reserve
      Node 0, zone      DMA            0            1            2            1
      Node 0, zone   Normal            3           17           94            4
      
      To walk the zones within a node with interrupts disabled, walk_zones_in_node()
      is introduced and shared between /proc/buddyinfo, /proc/zoneinfo and
      /proc/pagetypeinfo to reduce code duplication.  It seems specific to what
      vmstat.c requires but could be broken out as a general utility function in
      mmzone.c if there were other potential users.
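      
      An editorial sketch of the shared helper described above (approximate
      signature; the real function is in mm/vmstat.c):
      
      	static void walk_zones_in_node(struct seq_file *m, pg_data_t *pgdat,
      			void (*print)(struct seq_file *, pg_data_t *, struct zone *))
      	{
      		struct zone *zone;
      		struct zone *node_zones = pgdat->node_zones;
      		unsigned long flags;
      
      		for (zone = node_zones; zone - node_zones < MAX_NR_ZONES; ++zone) {
      			if (!populated_zone(zone))
      				continue;
      			/* walk each populated zone with interrupts disabled */
      			spin_lock_irqsave(&zone->lock, flags);
      			print(m, pgdat, zone);
      			spin_unlock_irqrestore(&zone->lock, flags);
      		}
      	}
      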
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Andy Whitcroft <apw@shadowen.org>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      467c996c
    • Mel Gorman's avatar
      Do not depend on MAX_ORDER when grouping pages by mobility · d9c23400
      Mel Gorman authored
      
      
      Currently mobility grouping works at the MAX_ORDER_NR_PAGES level.  This makes
      sense for the majority of users where this is also the huge page size.
      However, on platforms like ia64 where the huge page size is runtime
      configurable it is desirable to group at a lower order.  On x86_64 and
      occasionally on x86, the hugepage size may not always be MAX_ORDER_NR_PAGES.
      
      This patch groups pages together based on the value of HUGETLB_PAGE_ORDER.  It
      uses a compile-time constant if possible and a variable where the huge page
      size is runtime configurable.
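      
      The selection of the grouping order can be pictured roughly as below
      (editorial sketch; the real definitions, including the runtime-variable
      ia64 case, are in include/linux/pageblock-flags.h):
      
      	#ifdef CONFIG_HUGETLB_PAGE
      	/* group at the huge page order (a variable on ia64, a constant elsewhere) */
      	#define pageblock_order	HUGETLB_PAGE_ORDER
      	#else
      	/* no huge pages: fall back to the largest buddy order */
      	#define pageblock_order	(MAX_ORDER - 1)
      	#endif
      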
      
      It is assumed that grouping should be done at the lowest sensible order and
      that the user would not want to override this.  If this is not true,
      page_block order could be forced to a variable initialised via a boot-time
      kernel parameter.
      
      One potential issue with this patch is that IA64 now parses hugepagesz with
      early_param() instead of __setup().  __setup() is called after the memory
      allocator has been initialised and the pageblock bitmaps already setup.  In
      tests on one IA64 there did not seem to be any problem with using
      early_param() and in fact may be more correct as it guarantees the parameter
      is handled before the parsing of hugepages=.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Andy Whitcroft <apw@shadowen.org>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d9c23400
    • Mel Gorman's avatar
      Fix calculation in move_freepages_block for counting pages · d100313f
      Mel Gorman authored
      
      
      move_freepages_block() returns the number of blocks moved.  This value is used
      to determine if a block of pages should be stolen for the exclusive use of a
      migrate type or not.  However, the value returned is not being used correctly.
      This patch fixes the calculation to return the number of base pages that have
      been moved.
      
      This should be considered a fix to the patch
      move-free-pages-between-lists-on-steal.patch
      
      Credit to Andy Whitcroft for spotting the problem.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Andy Whitcroft <apw@shadowen.org>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d100313f
    • Mel Gorman's avatar
      don't group high order atomic allocations · 64c5e135
      Mel Gorman authored
      
      
      Grouping high-order atomic allocations together was intended to allow
      bursty users of atomic allocations to work such as e1000 in situations
      where their preallocated buffers were depleted.  This did not work in at
      least one case with a wireless network adapter needing order-1 allocations
      frequently.  To resolve that, the free pages used for min_free_kbytes were
      moved to separate contiguous blocks with the patch
      bias-the-location-of-pages-freed-for-min_free_kbytes-in-the-same-max_order_nr_pages-blocks.
      
      It is felt that keeping the free pages in the same contiguous blocks should
      be sufficient for bursty short-lived high-order atomic allocations to
      succeed, maybe even with the e1000.  Even if there is a failure, increasing
      the value of min_free_kbytes will free pages as contiguous blocks in
      contrast to the standard buddy allocator which makes no attempt to keep the
      minimum number of free pages contiguous.
      
      This patch backs out grouping high order atomic allocations together to
      determine if it is really needed or not.  If a new report comes in about
      high-order atomic allocations failing, the feature can be reintroduced to
      determine if it fixes the problem or not.  As a side-effect, this patch
      reduces by 1 the number of bits required to track the mobility type of
      pages within a MAX_ORDER_NR_PAGES block.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      64c5e135
    • Mel Gorman's avatar
      remove PAGE_GROUP_BY_MOBILITY · ac0e5b7a
      Mel Gorman authored
      
      
      Grouping pages by mobility can be disabled at compile-time. This was
      considered undesirable by a number of people. However, in the current stack of
      patches, it is not a simple case of just dropping the configurable patch as it
      would cause merge conflicts.  This patch backs out the configuration option.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ac0e5b7a
    • Mel Gorman's avatar
      Bias the location of pages freed for min_free_kbytes in the same MAX_ORDER_NR_PAGES blocks · 56fd56b8
      Mel Gorman authored
      
      
      The standard buddy allocator always favours the smallest block of pages.
      The effect of this is that the pages kept free to satisfy min_free_kbytes tend
      to be preserved since boot time at the same location of memory for a very
      long time and as a contiguous block.  When an administrator sets the
      reserve at 16384 at boot time, it tends to be the same MAX_ORDER blocks
      that remain free.  This allows the occasional high atomic allocation to
      succeed up until the point the blocks are split.  In practice, it is
      difficult to split these blocks but when they do split, the benefit of
      having min_free_kbytes for contiguous blocks disappears.  Additionally,
      increasing min_free_kbytes once the system has been running for some time
      has no guarantee of creating contiguous blocks.
      
      On the other hand, CONFIG_PAGE_GROUP_BY_MOBILITY favours splitting large
      blocks when there are no free pages of the appropriate type available.  A
      side-effect of this is that all blocks in memory tend to be used up and
      the contiguous free blocks from boot time are not preserved like in the
      vanilla allocator.  This can cause a problem if a new caller is unwilling
      to reclaim or does not reclaim for long enough.
      
      A failure scenario was found for a wireless network device allocating
      order-1 atomic allocations but the allocations were not intense or frequent
      enough for a whole block of pages to be preserved for MIGRATE_HIGHALLOC.
      This was reproduced on a desktop by booting with mem=256mb, forcing the
      driver to allocate at order-1, running a bittorrent client (downloading a
      debian ISO) and building a kernel with -j2.
      
      This patch addresses the problem on the desktop machine booted with
      mem=256mb.  It works by setting aside a reserve of MAX_ORDER_NR_PAGES
      blocks, the number of which depends on the value of min_free_kbytes.  These
      blocks are only fallen back to when there are no other free pages.  Then the
      smallest possible page is used, just like in the normal buddy allocator, instead
      of the largest possible page, to preserve contiguous pages.  The pages in the free
      lists of the reserve blocks are never taken for another migrate type.  The
      result is that even if min_free_kbytes is set to a low value, contiguous
      blocks will be preserved in the MIGRATE_RESERVE blocks.
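      
      The fallback ordering implied above can be sketched as follows (editorial
      illustration; the real table in mm/page_alloc.c may differ in detail and in
      which migrate types exist at this point in the patch series):
      
      	static int fallbacks[MIGRATE_TYPES][MIGRATE_TYPES - 1] = {
      		[MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,   MIGRATE_RESERVE },
      		[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE,   MIGRATE_RESERVE },
      		[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
      		/* the reserve is never used as a source for other types */
      		[MIGRATE_RESERVE]     = { MIGRATE_RESERVE,     MIGRATE_RESERVE,   MIGRATE_RESERVE },
      	};
      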
      
      This works better than the vanilla allocator because if min_free_kbytes is
      increased, a new reserve block will be chosen based on the location of
      reclaimable pages and the block will free up as contiguous pages.  In the
      vanilla allocator, no effort is made to target a block of pages to free as
      contiguous pages and min_free_kbytes pages are scattered randomly.
      
      This effect has been observed on the test machine.  min_free_kbytes was set
      initially low but it was kept as a contiguous free block within
      MIGRATE_RESERVE.  min_free_kbytes was then set to a higher value and over a
      period of time, the free blocks were within the reserve and coalescing.
      How long it takes to free up depends on how quickly LRU is rotating.
      Amusingly, this means that more activity will free the blocks faster.
      
      This mechanism potentially replaces MIGRATE_HIGHALLOC as it may be more
      effective than grouping contiguous free pages together.  It all depends on
      whether the number of active atomic high allocations exceeds
      min_free_kbytes or not.  If the number of active allocations exceeds
      min_free_kbytes, it's worth it but maybe in that situation, min_free_kbytes
      should be set higher.  Once there are no more reports of allocation
      failures, a patch will be submitted that backs out MIGRATE_HIGHALLOC and
      see if the reports stay missing.
      
      Credit to Mariusz Kozlowski for discovering the problem, describing the
      failure scenario and testing patches and scenarios.
      
      [akpm@linux-foundation.org: cleanups]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      56fd56b8
    • Mel Gorman's avatar
      Fix corruption of memmap on IA64 SPARSEMEM when mem_section is not a power of 2 · 5c0e3066
      Mel Gorman authored
      
      
      There are problems in the use of SPARSEMEM and pageblock flags that causes
      problems on ia64.
      
      The first part of the problem is that the units are incorrect in the
      SECTION_BLOCKFLAGS_BITS computation.  This results in a mem_section's
      section_mem_map being treated as part of a bitmap, which isn't good.  This
      was evident with an invalid virtual address when mem_init attempted to free
      bootmem pages while relinquishing control from the bootmem allocator.
      
      The second part of the problem occurs because the pageblock flags bitmap is
      located within the mem_section.  The SECTIONS_PER_ROOT computation using
      sizeof(mem_section) may then not be a power of 2, depending on the size of the
      bitmap.  This renders masks and other such things not power-of-2 based.
      This issue was seen with SPARSEMEM_EXTREME on ia64.  This patch moves the
      bitmap outside of mem_section and uses a pointer instead in the
      mem_section.  The bitmaps are allocated when the section is being
      initialised.
      
      Note that sparse_early_usemap_alloc() does not use alloc_remap() like
      sparse_early_mem_map_alloc().  The allocation required for the bitmap on
      x86, the only architecture that uses alloc_remap, is typically smaller than
      a cache line.  alloc_remap() pads out allocations to the cache size, which
      would be a needless waste.
      
      Credit to Bob Picco for identifying the original problem and effecting a
      fix for the SECTION_BLOCKFLAGS_BITS calculation.  Credit to Andy Whitcroft
      for devising the best way of allocating the bitmaps only when required for
      the section.
      
      [wli@holomorphy.com: warning fix]
      Signed-off-by: Bob Picco <bob.picco@hp.com>
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: William Irwin <bill.irwin@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5c0e3066
    • Mel Gorman's avatar
      Be more agressive about stealing when MIGRATE_RECLAIMABLE allocations fallback · 46dafbca
      Mel Gorman authored
      
      
      MIGRATE_RECLAIMABLE allocations tend to be very bursty in nature like when
      updatedb starts.  It is likely this will occur in situations where MAX_ORDER
      blocks of pages are not free.  This means that updatedb can scatter
      MIGRATE_RECLAIMABLE pages throughout the address space.  This patch is more
      aggressive about stealing blocks of pages for MIGRATE_RECLAIMABLE.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      46dafbca
    • Mel Gorman's avatar
      Bias the placement of kernel pages at lower PFNs · 5adc5be7
      Mel Gorman authored
      
      
      This patch chooses blocks with lower PFNs when placing kernel allocations.
      This is particularly important during fallback in low memory situations to
      stop unmovable pages being placed throughout the entire address space.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5adc5be7
    • Mel Gorman's avatar
      Do not group pages by mobility type on low memory systems · 9ef9acb0
      Mel Gorman authored
      
      
      Grouping pages by mobility can only successfully operate when there are more
      MAX_ORDER_NR_PAGES areas than mobility types.  When there are insufficient
      areas, fallbacks cannot be avoided.  This has noticeable performance impacts
      on machines with small amounts of memory in comparison to MAX_ORDER_NR_PAGES.
      For example, on IA64 with a configuration including huge pages, MAX_ORDER_NR_PAGES
      spans 1GiB, so at least 4GiB of RAM would be needed before grouping pages by
      mobility would be useful.  In comparison, an x86 would need 16MB.
      
      This patch checks the size of vm_total_pages in build_all_zonelists(). If
      there are not enough areas, mobility is effectively disabled by considering
      all allocations as the same type (UNMOVABLE).  This is achieved via a
      __read_mostly flag.
      
      With this patch, performance is comparable to disabling grouping pages
      by mobility at compile-time on a test machine with insufficient memory.
      With this patch, it is reasonable to get rid of grouping pages by mobility
      as a compile-time option.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9ef9acb0
    • Mel Gorman's avatar
      Group high-order atomic allocations · e010487d
      Mel Gorman authored
      
      
      In rare cases, the kernel needs to allocate a high-order block of pages
      without sleeping.  For example, this is the case with e1000 cards configured
      to use jumbo frames.  Migrating or reclaiming pages in this situation is not
      an option.
      
      This patch groups these allocations together as much as possible by adding a
      new MIGRATE_TYPE.  The MIGRATE_HIGHATOMIC type is exactly what it sounds
      like.  Care is taken that pages of other migrate types do not use the same
      blocks as high-order atomic allocations.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e010487d
    • Mel Gorman's avatar
      Group short-lived and reclaimable kernel allocations · e12ba74d
      Mel Gorman authored
      
      
      This patch marks a number of allocations that are either short-lived such as
      network buffers or are reclaimable such as inode allocations.  When something
      like updatedb is called, long-lived and unmovable kernel allocations tend to
      be spread throughout the address space which increases fragmentation.
      
      This patch groups these allocations together as much as possible by adding a
      new MIGRATE_TYPE.  The MIGRATE_RECLAIMABLE type is for allocations that can be
      reclaimed on demand, but not moved.  i.e.  they can be migrated by deleting
      them and re-reading the information from elsewhere.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e12ba74d
    • Mel Gorman's avatar
      Move free pages between lists on steal · c361be55
      Mel Gorman authored
      
      
      When a fallback occurs, there will be free pages for one allocation type
      stored on the list for another.  When a large steal occurs, this patch will
      move all the free pages within one list to the other.
      
      [y-goto@jp.fujitsu.com: fix BUG_ON check at move_freepages()]
      [apw@shadowen.org: Move to using pfn_valid_within()]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Christoph Lameter <clameter@engr.sgi.com>
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Signed-off-by: Andy Whitcroft <andyw@uk.ibm.com>
      Cc: Bob Picco <bob.picco@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c361be55
    • Mel Gorman's avatar
      Drain per-cpu lists when high-order allocations fail · e2c55dc8
      Mel Gorman authored
      
      
      Per-cpu pages can accidentally cause fragmentation because they are free, but
      pinned pages in an otherwise contiguous block.  When this patch is applied,
      the per-cpu caches are drained after the direct-reclaim is entered if the
      requested order is greater than 0.  It simply reuses the code used by suspend
      and hotplug.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e2c55dc8
    • Mel Gorman's avatar
      Add a configure option to group pages by mobility · b92a6edd
      Mel Gorman authored
      
      
      The grouping mechanism has some memory overhead and a more complex allocation
      path.  This patch allows the strategy to be disabled for small memory systems
      or if it is known the workload is suffering because of the strategy.  It also
      acts to show where the page groupings strategy interacts with the standard
      buddy allocator.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b92a6edd
    • Mel Gorman's avatar
      Choose pages from the per-cpu list based on migration type · 535131e6
      Mel Gorman authored
      
      
      The freelists for each migrate type can slowly become polluted due to the
      per-cpu list.  Consider what happens in the following scenario:
      
      1. A 2^(MAX_ORDER-1) list is reserved for __GFP_MOVABLE pages
      2. An order-0 page is allocated from the newly reserved block
      3. The page is freed and placed on the per-cpu list
      4. alloc_page() is called with GFP_KERNEL as the gfp_mask
      5. The per-cpu list is used to satisfy the allocation
      
      This results in a kernel page being in the middle of a migratable region. This
      patch prevents this leak from occurring by storing the MIGRATE_ type of the page in
      page->private. On allocation, only a page of the desired type will be returned,
      else more pages will be allocated. This may temporarily allow a per-cpu list
      to go over the pcp->high limit but it will be corrected on the next free. Care
      is taken to preserve the hotness of recently freed pages.
      
      The additional code is not measurably slower for the workloads we've tested.
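      
      The idea can be sketched as two small fragments (editorial illustration,
      not the code from the patch):
      
      	/* on free: remember the pageblock's migrate type in page->private */
      	set_page_private(page, get_pageblock_migratetype(page));
      	list_add(&page->lru, &pcp->list);
      
      	/* on alloc: scan the per-cpu list for a page of the wanted type */
      	list_for_each_entry(page, &pcp->list, lru)
      		if (page_private(page) == migratetype)
      			break;
      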
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      535131e6
    • Mel Gorman's avatar
      Split the free lists for movable and unmovable allocations · b2a0ac88
      Mel Gorman authored
      
      
      This patch adds the core of the fragmentation reduction strategy.  It works by
      grouping pages together based on their ability to migrate or be reclaimed.
      Basically, it works by breaking each list in zone->free_area into
      MIGRATE_TYPES separate lists.
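      
      A minimal sketch of the resulting structure (editorial; the actual
      definition is in include/linux/mmzone.h):
      
      	struct free_area {
      		/* one free list per migrate type, per order */
      		struct list_head	free_list[MIGRATE_TYPES];
      		unsigned long		nr_free;
      	};
      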
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b2a0ac88
    • Mel Gorman's avatar
      Add a bitmap that is used to track flags affecting a block of pages · 835c134e
      Mel Gorman authored
      
      
      Here is the latest revision of the anti-fragmentation patches.  Of particular
      note in this version is special treatment of high-order atomic allocations.
      Care is taken to group them together and avoid grouping pages of other types
      near them.  Artificial tests imply that it works.  I'm trying to get the
      hardware together that would allow setting up of a "real" test.  If anyone
      already has a setup and test that can trigger the atomic-allocation problem,
      I'd appreciate a test of these patches and a report.  The second major change
      is that these patches will apply cleanly with patches that implement
      anti-fragmentation through zones.
      
      kernbench shows effectively no performance difference varying between -0.2%
      and +2% on a variety of test machines.  Success rates for huge page allocation
      are dramatically increased.  For example, on a ppc64 machine, the vanilla
      kernel was only able to allocate 1% of memory as a hugepage and this was due
      to a single hugepage reserved as min_free_kbytes.  With these patches applied,
      17% was allocatable as superpages.  With reclaim-related fixes from Andy
      Whitcroft, it was 40% and further reclaim-related improvements should increase
      this further.
      
      Changelog Since V28
      o Group high-order atomic allocations together
      o It is no longer required to set min_free_kbytes to 10% of memory. A value
        of 16384 in most cases will be sufficient
      o Now applied with zone-based anti-fragmentation
      o Fix incorrect VM_BUG_ON within buffered_rmqueue()
      o Reorder the stack so later patches do not back out work from earlier patches
      o Fix bug where journal pages were being treated as movable
      o Bias placement of non-movable pages to lower PFNs
      o More aggressive clustering of reclaimable pages in reaction to workloads
        like updatedb that flood the size of inode caches
      
      Changelog Since V27
      
      o Renamed anti-fragmentation to Page Clustering. Anti-fragmentation was giving
        the mistaken impression that it was the 100% solution for high order
        allocations. Instead, it greatly increases the chances high-order
        allocations will succeed and lays the foundation for defragmentation and
        memory hot-remove to work properly
      o Redefine page groupings based on ability to migrate or reclaim instead of
        basing on reclaimability alone
      o Get rid of spurious inits
      o Per-cpu lists are no longer split up per-type. Instead the per-cpu list is
        searched for a page of the appropriate type
      o Added more explanation commentary
      o Fix up bug in pageblock code where bitmap was used before being initialised
      
      Changelog Since V26
      o Fix double init of lists in setup_pageset
      
      Changelog Since V25
      o Fix loop order of for_each_rclmtype_order so that order of loop matches args
      o gfpflags_to_rclmtype uses gfp_t instead of unsigned long
      o Rename get_pageblock_type() to get_page_rclmtype()
      o Fix alignment problem in move_freepages()
      o Add mechanism for assigning flags to blocks of pages instead of page->flags
      o On fallback, do not examine the preferred list of free pages a second time
      
      The purpose of these patches is to reduce external fragmentation by grouping
      pages of related types together.  When pages are migrated (or reclaimed under
      memory pressure), large contiguous pages will be freed.
      
      This patch works by categorising allocations by their ability to migrate;
      
      Movable - The pages may be moved with the page migration mechanism. These are
      	generally userspace pages.
      
      Reclaimable - These are allocations for some kernel caches that are
      	reclaimable or allocations that are known to be very short-lived.
      
      Unmovable - These are pages that are allocated by the kernel that
      	are not trivially reclaimed. For example, the memory allocated for a
      	loaded module would be in this category. By default, allocations are
      	considered to be of this type
      
      HighAtomic - These are high-order allocations belonging to callers that
      	cannot sleep or perform any IO. In practice, this is restricted to
      	jumbo frame allocation for network receive. It is assumed that the
      	allocations are short-lived
      
      Instead of having one MAX_ORDER-sized array of free lists in struct free_area,
      there is one for each type of reclaimability.  Once a 2^MAX_ORDER block of
      pages is split for a type of allocation, it is added to the free-lists for
      that type, in effect reserving it.  Hence, over time, pages of the different
      types can be clustered together.
      
      When the preferred freelists are expired, the largest possible block is taken
      from an alternative list.  Buddies that are split from that large block are
      placed on the preferred allocation-type freelists to mitigate fragmentation.
      
      This implementation gives best-effort for low fragmentation in all zones.
      Ideally, min_free_kbytes needs to be set to a value equal to 4 * (1 <<
      (MAX_ORDER-1)) pages in most cases.  This would be 16384 on x86 and x86_64 for
      example.
      
      Our tests show that about 60-70% of physical memory can be allocated on a
      desktop after a few days uptime.  In benchmarks and stress tests, we are
      finding that 80% of memory is available as contiguous blocks at the end of the
      test.  To compare, a standard kernel was getting < 1% of memory as large pages
      on a desktop and about 8-12% of memory as large pages at the end of stress
      tests.
      
      Following this email are 12 patches that implement this page grouping feature.
       The first patch introduces a mechanism for storing flags related to a whole
      block of pages.  Then allocations are split between movable and all other
      allocations.  Following that are patches to deal with per-cpu pages and make
      the mechanism configurable.  The next patch moves free pages between lists
      when partially allocated blocks are used for pages of another migrate type.
      The second last patch groups reclaimable kernel allocations such as inode
      caches together.  The final patch related to groupings keeps high-order atomic
      allocations grouped together.
      
      The last two patches are more concerned with control of fragmentation.  The
      second last patch biases placement of non-movable allocations towards the
      start of memory.  This is with a view of supporting memory hot-remove of DIMMs
      with higher PFNs in the future.  The biasing could be enforced a lot heavier
      but it would cost.  The last patch aggressively clusters reclaimable pages like
      inode caches together.
      
      The fragmentation reduction strategy needs to track if pages within a block
      can be moved or reclaimed so that pages are freed to the appropriate list.
      This patch adds a bitmap for flags affecting a whole MAX_ORDER block of
      pages.
      
      In non-SPARSEMEM configurations, the bitmap is stored in the struct zone and
      allocated during initialisation.  SPARSEMEM statically allocates the bitmap in
      a struct mem_section so that bitmaps do not have to be resized during memory
      hotadd.  This wastes a small amount of memory per unused section (usually
      sizeof(unsigned long)) but the complexity of dynamically allocating the memory
      is quite high.
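      
      Roughly, the bitmap placement described above looks like this (editorial
      sketch; field names are approximate):
      
      	struct zone {
      		/* ... */
      	#ifndef CONFIG_SPARSEMEM
      		/* flags for each MAX_ORDER block, allocated at zone init */
      		unsigned long		*pageblock_flags;
      	#endif
      		/* ... */
      	};
      
      	struct mem_section {
      		unsigned long section_mem_map;
      		/* statically sized, so no resize is needed on memory hotadd */
      		DECLARE_BITMAP(pageblock_flags, SECTION_BLOCKFLAGS_BITS);
      	};
      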
      
      Additional credit to Andy Whitcroft who reviewed an earlier implementation
      of the mechanism and suggested how to make it a *lot* cleaner.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      835c134e
    • KAMEZAWA Hiroyuki's avatar
      flush icache before set_pte() on ia64: flush icache at set_pte · 954ffcb3
      KAMEZAWA Hiroyuki authored
      The current ia64 kernel flushes the icache by lazy_mmu_prot_update() *after*
      set_pte().  This is too late.  This patch removes lazy_mmu_prot_update and
      adds a modified set_pte() that flushes if necessary.
      
      This patch flushes the icache of a page when
      	new pte has exec bit.
      	&& new pte has present bit
      	&& new pte is user's page.
      	&& (old *ptep is not present
      	    || new pte's pfn is not the same as old *ptep's pfn)
      	&& new pte's page has no Pg_arch_1 bit.
      	   Pg_arch_1 is set when a page is cache consistent.
      
      I think these condition checks are much easier to understand than considering
      "Where sync_icache_dcache() should be inserted ?".
      
      pte_user() for ia64 was removed by http://lkml.org/lkml/2007/6/12/67 as a
      clean-up, so I added it again.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      954ffcb3
    • KAMEZAWA Hiroyuki's avatar
      flush cache before installing new page at migraton · 97ee0524
      KAMEZAWA Hiroyuki authored
      
      
      In migration, a new page should be cache flushed before set_pte() on some
      architectures which have virtually-tagged caches.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      97ee0524
    • Andrea Arcangeli's avatar
      make swappiness safer to use · 4106f83a
      Andrea Arcangeli authored
      
      
      Swappiness isn't a safe sysctl.  Setting it to 0 for example can hang a
      system.  That's a corner case but even setting it to 10 or lower can waste
      enormous amounts of cpu without making much progress.  We have customers who
      want to use swappiness but they can't because of the current
      implementation (if you change it so the system stops swapping, it really
      stops swapping, and nothing works sanely anymore if you really had to swap
      something to make progress).
      
      This patch from Kurt Garloff makes swappiness safer to use (no more huge
      cpu usage or hangs with low swappiness values).
      
      I think the prev_priority can also be nuked since it wastes 4 bytes per
      zone (that would be an incremental patch, but I'll wait for nr_scan_[in]active
      to be nuked first for similar reasons).  Clearly somebody at some point
      noticed how broken that thing was and they had to add min(priority,
      prev_priority) to give it some reliability, but they didn't go the last
      mile to nuke prev_priority too.  Calculating distress only as a function of
      the non-racy priority is correct and surely more than enough without having to
      add randomness into the equation.
      
      Patch is tested on older kernels but it compiles and it's quite simple
      so...
      
      Overall I'm not very satisfied by the swappiness tweak, since it doesn't
      really do anything with the dirty pagecache that may be inactive.  We need
      another kind of tweak that controls the inactive scan and tunes the
      can_writepage feature (not yet in mainline despite having submitted it a
      few times), not only the active one.  That new tweak will tell the kernel
      how hard to scan the inactive list for pure clean pagecache (something the
      mainline kernel isn't capable of yet).  We already have that feature
      working in all our enterprise kernels with the default reasonable tune, or
      they can't even run a readonly backup with tar without triggering huge
      write I/O.  I think it should be available also in mainline later.
      
      Cc: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Kurt Garloff <garloff@suse.de>
      Signed-off-by: Andrea Arcangeli <andrea@suse.de>
      Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4106f83a
    • Christoph Lameter's avatar
      Categorize GFP flags · 6cb06229
      Christoph Lameter authored
      
      
      The function of GFP_LEVEL_MASK seems to be unclear.  In order to clear up
      the mystery we get rid of it and replace GFP_LEVEL_MASK with 3 sets of GFP
      flags:
      
      GFP_RECLAIM_MASK	Flags used to control page allocator reclaim behavior.
      
      GFP_CONSTRAINT_MASK	Flags used to limit where allocations can occur.
      
      GFP_SLAB_BUG_MASK	Flags that the slab allocator BUG()s on.
      
      These replace the uses of GFP_LEVEL mask in the slab allocators and in
      vmalloc.c.
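      
      As an editorial sketch of the intended use (the actual mask definitions
      live in include/linux/gfp.h; the helper name here is hypothetical):
      
      	/* a slab allocator sanitizing flags before calling the page allocator */
      	static inline gfp_t slab_page_flags(gfp_t flags)
      	{
      		/* keep only reclaim-control and placement-constraint bits;
      		 * anything else (e.g. __GFP_MOVABLE, __GFP_COMP) is dropped */
      		return flags & (GFP_RECLAIM_MASK | GFP_CONSTRAINT_MASK);
      	}
      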
      
      The use of the flags not included in these sets may occur as a result of a
      slab allocation standing in for a page allocation when constructing scatter
      gather lists.  Extraneous flags are cleared and not passed through to the
      page allocator.  __GFP_MOVABLE/RECLAIMABLE, __GFP_COLD and __GFP_COMP will
      now be ignored if passed to a slab allocator.
      
      Change the allocation of allocator meta data in SLAB and vmalloc to not
      pass through flags listed in GFP_CONSTRAINT_MASK.  SLAB already removes the
      __GFP_THISNODE flag for such allocations.  Generalize that to also cover
      vmalloc.  The use of GFP_CONSTRAINT_MASK also includes __GFP_HARDWALL.
      
      The impact of allocator metadata placement on access latency to the
      cachelines of the object itself is minimal since metadata is only
      referenced on alloc and free.  The attempt is still made to place the meta
      data optimally but we consistently allow fallback both in SLAB and vmalloc
      (SLUB does not need to allocate metadata like that).
      
      Allocator metadata may serve multiple in-kernel users and thus should not
      be subject to the limitations arising from a single allocation context.
      
      [akpm@linux-foundation.org: fix fallback_alloc()]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6cb06229
    • Yasunori Goto's avatar
      Fix panic of cpu online with memory less node · 58c0a4a7
      Yasunori Goto authored
      
      
      When a cpu is onlined on a memory-less-node box, the kernel panics due to
      touching a NULL pointer, pgdat->kswapd.  Currently kswapd runs only on nodes
      which have memory, so calling set_cpus_allowed() is not necessary for a
      memory-less node.
      
      This is a fix for it.
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      58c0a4a7
    • Lee Schermerhorn's avatar
      memoryless nodes: fixup uses of node_online_map in generic code · 37b07e41
      Lee Schermerhorn authored
      
      
      Here's a cut at fixing up uses of the online node map in generic code.
      
      mm/shmem.c:shmem_parse_mpol()
      
      	Ensure nodelist is subset of nodes with memory.
      	Use node_states[N_HIGH_MEMORY] as default for missing
      	nodelist for interleave policy.
      
      mm/shmem.c:shmem_fill_super()
      
      	initialize policy_nodes to node_states[N_HIGH_MEMORY]
      
      mm/page-writeback.c:highmem_dirtyable_memory()
      
      	sum over nodes with memory
      
      mm/page_alloc.c:zlc_setup()
      
      	allowednodes - use nodes with memory.
      
      mm/page_alloc.c:default_zonelist_order()
      
      	average over nodes with memory.
      
      mm/page_alloc.c:find_next_best_node()
      
      	skip nodes w/o memory.
      	N_HIGH_MEMORY state mask may not be initialized at this time,
      	unless we want to depend on early_calculate_totalpages() [see
      	below].  Will ZONE_MOVABLE ever be configurable?
      
      mm/page_alloc.c:find_zone_movable_pfns_for_nodes()
      
      	spread kernelcore over nodes with memory.
      
      	This required calling early_calculate_totalpages()
      	unconditionally, and populating N_HIGH_MEMORY node
      	state therein from nodes in the early_node_map[].
      	If we can depend on this, we can eliminate the
      	population of N_HIGH_MEMORY mask from __build_all_zonelists()
      	and use the N_HIGH_MEMORY mask in find_next_best_node().
      
      mm/mempolicy.c:mpol_check_policy()
      
      	Ensure nodes specified for policy are subset of
      	nodes with memory.
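
      Most of the hunks above share one pattern: replace an iteration over
      all online nodes with one over the nodes that have memory.  A hedged
      sketch of that pattern (the function and the counter are illustrative,
      not taken from the patch):

              static unsigned long free_pages_on_memory_nodes_sketch(void)
              {
                      unsigned long x = 0;
                      int nid;

                      /* was: for_each_online_node(nid), which also visited
                       * memoryless nodes */
                      for_each_node_state(nid, N_HIGH_MEMORY)
                              x += node_page_state(nid, NR_FREE_PAGES);
                      return x;
              }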
      
      [akpm@linux-foundation.org: fix warnings]
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Cc: Shaohua Li <shaohua.li@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      37b07e41
    • Christoph Lameter
      Memoryless nodes: Use N_HIGH_MEMORY for cpusets · 0e1e7c7a
      Christoph Lameter authored
      
      
      cpusets try to ensure that any node added to a cpuset's mems_allowed is
      online and contains memory.  The assumption was that online nodes
      contained memory, but with memoryless nodes that no longer holds: it is
      possible to add memoryless nodes to a cpuset and then add tasks to this
      cpuset.  This results in a continuous series of oom-kills and an
      apparent system hang.
      
      Change cpusets to use node_states[N_HIGH_MEMORY] [a.k.a.
      node_memory_map] in place of node_online_map when vetting memories.
      Return an error if the admin attempts to write a non-empty mems_allowed
      node mask containing only memoryless nodes.
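
      A sketch of the vetting step described above (the function name and its
      placement in the cpuset write path are assumptions):

              static int validate_mems_sketch(const nodemask_t *mems)
              {
                      /* reject a non-empty mask made up entirely of
                       * memoryless nodes */
                      if (!nodes_empty(*mems) &&
                          !nodes_intersects(*mems, node_states[N_HIGH_MEMORY]))
                              return -EINVAL;
                      return 0;
              }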
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Signed-off-by: Bob Picco <bob.picco@hp.com>
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@skynet.ie>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0e1e7c7a
    • Christoph Lameter
      Memoryless nodes: Fix GFP_THISNODE behavior · 523b9458
      Christoph Lameter authored
      
      
      GFP_THISNODE checks that the zone selected is within the pgdat (node)
      of the first zone of a nodelist.  That only works if the node has
      memory.  A memoryless node's first zone, however, lives on another
      pgdat (node).
      
      GFP_THISNODE will therefore currently just return memory from the first
      pgdat of the nodelist, i.e. from another node.  GFP_THISNODE should
      instead fail if a node has no local memory.
      
      Add a new set of zonelists for each node that contain only the zones
      belonging to the node itself, so that no fallback is possible.
      
      Then modify gfp_type to pick up the right zone based on the presence of
      __GFP_THISNODE.
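
      A sketch of that selection (node_zonelists_thisnode is a placeholder
      field name, not the name used by the patch):

              static inline struct zonelist *zonelist_sketch(int nid, gfp_t flags)
              {
                      pg_data_t *pgdat = NODE_DATA(nid);

                      if (flags & __GFP_THISNODE)
                              /* no-fallback zonelist: only this node's zones */
                              return &pgdat->node_zonelists_thisnode[gfp_zone(flags)];
                      return &pgdat->node_zonelists[gfp_zone(flags)];
              }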
      
      Drop the existing GFP_THISNODE checks from the page allocator's hot
      path.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Acked-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Tested-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Acked-by: Bob Picco <bob.picco@hp.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@skynet.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      523b9458
    • Christoph Lameter
      Memoryless nodes: drop one memoryless node boot warning · 633c0666
      Christoph Lameter authored
      
      
      get_pfn_range_for_nid() is called multiple times for each node at boot time.
      Each time, it will warn about nodes with no memory, resulting in boot messages
      like:
      
              Node 0 active with no memory
              Node 0 active with no memory
              Node 0 active with no memory
              Node 0 active with no memory
              Node 0 active with no memory
              Node 0 active with no memory
              On node 0 totalpages: 0
              Node 0 active with no memory
              Node 0 active with no memory
                DMA zone: 0 pages used for memmap
              Node 0 active with no memory
              Node 0 active with no memory
                Normal zone: 0 pages used for memmap
              Node 0 active with no memory
              Node 0 active with no memory
                Movable zone: 0 pages used for memmap
      
      and so on for each memoryless node.
      
      We already have the "On node N totalpages: ..." and other related messages, so
      drop the "Node N active with no memory" warnings.
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Bob Picco <bob.picco@hp.com>
      Cc: Nishanth Aravamudan <nacc@us.ibm.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@skynet.ie>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      633c0666
    • Christoph Lameter
      Memoryless nodes: Add N_CPU node state · 37c0708d
      Christoph Lameter authored
      
      
      Zone reclaim needs a way to check whether a node has a CPU: remote zone
      reclaim is not allowed for a zone whose node has a CPU, since that node
      can run reclaim for itself.  Add an N_CPU node state that is set for
      every node with at least one CPU.
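
      The check could take roughly this shape (a sketch based on the
      description above; the wrapper function is an assumption):

              static int skip_remote_zone_reclaim_sketch(struct zone *zone)
              {
                      int node_id = zone_to_nid(zone);

                      /* a node with a CPU can run zone reclaim for itself,
                       * so do not reclaim its zones from other nodes */
                      return node_state(node_id, N_CPU) &&
                             node_id != numa_node_id();
              }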
      
      [Lee.Schermerhorn@hp.com: Move setup of N_CPU node state mask]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Tested-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Acked-by: Bob Picco <bob.picco@hp.com>
      Cc: Nishanth Aravamudan <nacc@us.ibm.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@skynet.ie>
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      37c0708d
    • Christoph Lameter
      Memoryless nodes: Update memory policy and page migration · 56bbd65d
      Christoph Lameter authored
      
      
      Online nodes may now have no memory.  The checks and initialization
      must therefore be changed to no longer use the node-online functions.
      
      This will correctly initialize interleaving on bootup to target only
      nodes with memory, and will make sys_move_pages return an error when a
      page is to be moved to a memoryless node.  Similarly, we will get an
      error if MPOL_BIND or MPOL_INTERLEAVE is used on a memoryless node.
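
      A sketch of the nodemask check behind these errors (the wrapper
      function is an assumption):

              static int mpol_nodes_valid_sketch(const nodemask_t *nodes)
              {
                      /* was: nodes_subset(*nodes, node_online_map) */
                      if (!nodes_subset(*nodes, node_states[N_HIGH_MEMORY]))
                              return -EINVAL;
                      return 0;
              }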
      
      These are somewhat new semantics.  So far one could specify memoryless
      nodes and we would maybe do the right thing and just ignore the node
      (or we'd do something strange, as with MPOL_INTERLEAVE).  If we wanted
      to allow the specification of memoryless nodes via memory policies,
      then we would need to keep checking for online nodes.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Acked-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Tested-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Acked-by: Bob Picco <bob.picco@hp.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@skynet.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      56bbd65d
    • Christoph Lameter
      Memoryless nodes: Allow profiling data to fall back to other nodes · 4199cfa0
      Christoph Lameter authored
      
      
      Processors on memoryless nodes must be able to fall back to remote nodes in
      order to get a profiling buffer.  This may lead to excessive NUMA traffic but
      I think we should allow this rather than failing.
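
      A sketch of the allocation after this change (the GFP_THISNODE in the
      comment is an assumption about the prior code, not a quote from it):

              static struct page *profile_buffer_page_sketch(int node)
              {
                      /* binding the allocation to 'node' (e.g. with
                       * GFP_THISNODE) would fail outright on a memoryless
                       * node; without it the page allocator may fall back
                       * to remote nodes */
                      return alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, 0);
              }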
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Acked-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Acked-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Acked-by: Bob Picco <bob.picco@hp.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@skynet.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4199cfa0