    [PATCH] hugetlb: move stale pte check into huge_pte_alloc() · 7bf07f3d
    Adam Litke authored
    Initial Post (Wed, 17 Aug 2005)
    
    This patch moves the
    	if (! pte_none(*pte))
    		hugetlb_clean_stale_pgtable(pte);
    logic into huge_pte_alloc() so all of its callers can be immune to the bug
    described by Kenneth Chen at http://lkml.org/lkml/2004/6/16/246
    
    
    
    > It turns out there is a bug in hugetlb_prefault(): with 3 level page table,
    > huge_pte_alloc() might return a pmd that points to a PTE page. It happens
    > if the virtual address for hugetlb mmap is recycled from previously used
    > normal page mmap. free_pgtables() might not scrub the pmd entry on
    > munmap and hugetlb_prefault skips on any pmd presence regardless what type
    > it is.
    
    Unless I am missing something, it seems more correct to place the check inside
    huge_pte_alloc() to prevent the same bug wherever a huge pte is allocated.
    It also allows checking for this condition when lazily faulting huge pages
    later in the series.
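
    For illustration only, here is a minimal sketch of the idea on i386 with
    three-level paging, assuming the usual pgd_offset()/pud_alloc()/pmd_alloc()
    walk; this is not the literal hunk from the patch, just the shape of the
    check once it lives inside huge_pte_alloc():

	pte_t *huge_pte_alloc(struct mm_struct *mm, unsigned long addr)
	{
		pgd_t *pgd = pgd_offset(mm, addr);
		pud_t *pud = pud_alloc(mm, pgd, addr);
		pte_t *pte = NULL;

		if (pud) {
			/* on i386 a huge page is mapped at the pmd level */
			pte = (pte_t *) pmd_alloc(mm, pud, addr);
			/*
			 * A leftover !pte_none() entry here is a stale pmd
			 * still pointing at a normal PTE page (the recycled
			 * mmap address case described above); scrub it here
			 * so no caller has to open-code the check.
			 */
			if (pte && !pte_none(*pte))
				hugetlb_clean_stale_pgtable(pte);
		}
		return pte;
	}

    With the check folded in here, hugetlb_prefault() (and the later lazy fault
    path) can simply call huge_pte_alloc() and trust the entry it gets back.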
    
    Signed-off-by: Adam Litke <agl@us.ibm.com>
    Cc: <linux-mm@kvack.org>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>