    mm/rmap: recompute pgoff for huge page · b854f711
    Joonsoo Kim authored
    Rmap traversal is used in five different cases: try_to_unmap(),
    try_to_munlock(), page_referenced(), page_mkclean() and
    remove_migration_ptes().  Each one implements its own traversal
    functions for the anon, file and ksm cases, respectively.  This causes
    a lot of duplication and maintenance overhead, and it also makes the
    code hard to understand and error-prone.  One example is hugepage
    handling: there is code to compute the hugepage offset correctly in
    try_to_unmap_file(), but there is no such code in rmap_walk_file().
    These are used pairwise in the migration context, but we missed
    modifying them pairwise.
    
    To overcome these drawbacks, we should unify the traversals into one
    function.  I chose rmap_walk() as the main function since it carries no
    unnecessary baggage.  To control the behavior of rmap_walk(), I
    introduce struct rmap_walk_control, which holds some function pointers;
    this lets each caller make rmap_walk() work for its specific needs.
    
    This patchset removes a lot of duplicated code, as you can see in the
    short diffstat below, and the kernel text size also decreases slightly.
    
       text    data     bss     dec     hex filename
      10640       1      16   10657    29a1 mm/rmap.o (before)
      10047       1      16   10064    2750 mm/rmap.o (after)

      13823     705    8288   22816    5920 mm/ksm.o (before)
      13199     705    8288   22192    56b0 mm/ksm.o (after)
    
    This patch (of 9):
    
    We have to recompute pgoff if the given page is huge, since a result
    based on HPAGE_SIZE is not appropriate for scanning the vma interval
    tree, as shown by commit 36e4f20a ("hugetlb: do not use
    vma_hugecache_offset() for vma_prio_tree_foreach") and commit 369a713e
    ("rmap: recompute pgoff for unmapping huge page").
    
    To handle both cases, a normal page cache page and a hugetlb page, in
    the same way, we can use compound_order().  It returns 0 for a
    non-compound page and the proper order for a compound page.
    
    Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
    Cc: Mel Gorman <mgorman@suse.de>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: Hillf Danton <dhillf@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>