Commit 36e4f20a authored by Michal Hocko's avatar Michal Hocko Committed by Linus Torvalds

hugetlb: do not use vma_hugecache_offset() for vma_prio_tree_foreach

Commit 0c176d52 ("mm: hugetlb: fix pgoff computation when unmapping
page from vma") fixed the pgoff calculation but replaced it with
vma_hugecache_offset(), which is not appropriate for offsets used by
vma_prio_tree_foreach(), because that one expects an index in regular
page units rather than in huge_page_shift units.

Johannes said:

: The resulting index may not be too big, but it can be too small: assume
: hpage size of 2M and the address to unmap to be 0x200000.  This is regular
: page index 512 and hpage index 1.  If you have a VMA that maps the file
: only starting at the second huge page, that VMA's vm_pgoff will be 512 but
: you ask for offset 1 and miss it even though it does map the page of
: interest.  hugetlb_cow() will try to unmap, miss the vma, and retry the
: cow until the allocation succeeds or the skipped vma(s) go away.
Signed-off-by: Michal Hocko <>
Acked-by: Hillf Danton <>
Cc: Mel Gorman <>
Cc: KAMEZAWA Hiroyuki <>
Cc: Andrea Arcangeli <>
Cc: David Rientjes <>
Acked-by: Johannes Weiner <>
Cc: <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent 027ef6c8
@@ -2480,7 +2480,8 @@ static int unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
	 * from page cache lookup which is in HPAGE_SIZE units.
	 */
	address = address & huge_page_mask(h);
-	pgoff = vma_hugecache_offset(h, vma, address);
+	pgoff = ((address - vma->vm_start) >> PAGE_SHIFT) +
+			vma->vm_pgoff;
	mapping = vma->vm_file->f_dentry->d_inode->i_mapping;