xcap / xcap-capability-linux — include/linux/mmzone.h
mm: get rid of unnecessary pageblock scanning in setup_zone_migrate_reserve · 943dca1a
Yasuaki Ishimatsu authored Jan 21, 2014
Yasuaki Ishimatsu reported that memory hot-add took more than 5 _hours_ on
a 9TB memory machine because onlining memory sections is too slow, and we
found that setup_zone_migrate_reserve() accounted for >90% of that time.
    
The problem is that setup_zone_migrate_reserve() scans all pageblocks
unconditionally, but the scan is only necessary when the number of reserved
pageblocks has been reduced (i.e. on memory hot-remove).
    
Moreover, the maximum number of MIGRATE_RESERVE pageblocks per zone is
currently 2, so the number of reserved pageblocks almost never changes.
    
This patch adds zone->nr_migrate_reserve_block to track the number of
MIGRATE_RESERVE pageblocks, which reduces the overhead of
setup_zone_migrate_reserve() dramatically (a sketch of the idea follows the
benchmark figures below).  The following table shows the time to online one
memory section.
    
  Amount of memory          | 128GB | 192GB | 256GB |
  ---------------------------------------------------
  linux-3.12                |  23.9 |  31.4 |  44.5 |
  This patch                |   8.3 |   8.3 |   8.6 |
  Mel's proposal patch (*1) |  10.9 |  19.2 |  31.3 |
  ---------------------------------------------------
                                       (milliseconds)
    
      128GB : 4 nodes and each node has 32GB of memory
      192GB : 6 nodes and each node has 32GB of memory
      256GB : 8 nodes and each node has 32GB of memory
    
  (*1) Mel proposed his idea in the following thread:
       https://lkml.org/lkml/2013/10/30/272
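
For illustration only (this sketch is not part of the original commit and is
not the real mm/page_alloc.c code): a self-contained user-space model of the
early-return idea.  MAX_MIGRATE_RESERVE, the struct layout, mark_pageblock_reserve()
and the loop body are simplified placeholders; only the nr_migrate_reserve_block
field and the compare-then-return-early step correspond to what the log describes.

#include <stdio.h>

#define MAX_MIGRATE_RESERVE 2   /* per-zone cap mentioned in the log */

/* Hypothetical, simplified zone: only what the sketch needs. */
struct zone {
	unsigned long nr_pageblocks;            /* pageblocks in this zone      */
	unsigned long nr_migrate_reserve_block; /* the field added by the patch */
};

/* Placeholder for the per-pageblock work the real function performs. */
static void mark_pageblock_reserve(struct zone *z, unsigned long idx)
{
	(void)z;
	(void)idx;
}

static void setup_zone_migrate_reserve(struct zone *zone, unsigned long wanted)
{
	unsigned long reserve = wanted;
	unsigned long old_reserve = zone->nr_migrate_reserve_block;
	unsigned long i;

	if (reserve > MAX_MIGRATE_RESERVE)
		reserve = MAX_MIGRATE_RESERVE;

	/*
	 * Core idea of the patch: if the reserve count is unchanged
	 * (the common case on memory hot-add), skip the pageblock scan.
	 */
	if (reserve == old_reserve)
		return;
	zone->nr_migrate_reserve_block = reserve;

	/* Otherwise walk the zone's pageblocks as before (heavily simplified). */
	for (i = 0; i < zone->nr_pageblocks && reserve; i++) {
		mark_pageblock_reserve(zone, i);
		reserve--;
	}
}

int main(void)
{
	struct zone z = { .nr_pageblocks = 1UL << 20, .nr_migrate_reserve_block = 0 };

	setup_zone_migrate_reserve(&z, 2);  /* first call scans            */
	setup_zone_migrate_reserve(&z, 2);  /* repeated call returns early */
	printf("tracked MIGRATE_RESERVE pageblocks: %lu\n", z.nr_migrate_reserve_block);
	return 0;
}

In this toy model the second call returns immediately because the tracked count
already equals the target, which is exactly the memory hot-add case the log
describes as "almost always unchanged".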
    
    
    
    [akpm@linux-foundation.org: tweak comment]
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Reported-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Tested-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>