Commit 21333b2b authored by Hugh Dickins, committed by Linus Torvalds

ksm: no debug in page_dup_rmap()

page_dup_rmap(), used on each mapped page when forking, was originally
just an inline atomic_inc of mapcount.  2.6.22 added CONFIG_DEBUG_VM
out-of-line checks to it, which would need to be ever-so-slightly
complicated to allow for the PageKsm() we're about to define.

But I think these checks never caught anything.  And if it's coding errors
we're worried about, such checks should be in page_remove_rmap() too, not
just when forking; whereas if it's pagetable corruption we're worried
about, then they shouldn't be limited to CONFIG_DEBUG_VM.

Oh, just revert page_dup_rmap() to an inline atomic_inc of mapcount.
Signed-off-by: Hugh Dickins <>
Signed-off-by: Chris Wright <>
Signed-off-by: Izik Eidus <>
Cc: Nick Piggin <>
Cc: Andrea Arcangeli <>
Cc: Rik van Riel <>
Cc: Wu Fengguang <>
Cc: Balbir Singh <>
Cc: Hugh Dickins <>
Cc: KAMEZAWA Hiroyuki <>
Cc: Lee Schermerhorn <>
Cc: Avi Kivity <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent f8af4da3
@@ -71,14 +71,10 @@ void page_add_new_anon_rmap(struct page *, struct vm_area_struct *, unsigned lon
 void page_add_file_rmap(struct page *);
 void page_remove_rmap(struct page *);
 
-#ifdef CONFIG_DEBUG_VM
-void page_dup_rmap(struct page *page, struct vm_area_struct *vma, unsigned long address);
-#else
-static inline void page_dup_rmap(struct page *page, struct vm_area_struct *vma, unsigned long address)
+static inline void page_dup_rmap(struct page *page)
 {
 	atomic_inc(&page->_mapcount);
 }
-#endif
 
 /*
  * Called from mm/vmscan.c to handle paging out
@@ -597,7 +597,7 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	page = vm_normal_page(vma, addr, pte);
 	if (page) {
 		get_page(page);
-		page_dup_rmap(page, vma, addr);
+		page_dup_rmap(page);
 		rss[PageAnon(page)]++;
 	}
@@ -710,27 +710,6 @@ void page_add_file_rmap(struct page *page)
 }
 
-#ifdef CONFIG_DEBUG_VM
-/**
- * page_dup_rmap - duplicate pte mapping to a page
- * @page:	the page to add the mapping to
- * @vma:	the vm area being duplicated
- * @address:	the user virtual address mapped
- *
- * For copy_page_range only: minimal extract from page_add_file_rmap /
- * page_add_anon_rmap, avoiding unnecessary tests (already checked) so it's
- * quicker.
- *
- * The caller needs to hold the pte lock.
- */
-void page_dup_rmap(struct page *page, struct vm_area_struct *vma, unsigned long address)
-{
-	BUG_ON(page_mapcount(page) == 0);
-	if (PageAnon(page))
-		__page_check_anon_rmap(page, vma, address);
-	atomic_inc(&page->_mapcount);
-}
-#endif
-
 /**
  * page_remove_rmap - take down pte mapping from a page
  * @page: page to remove mapping from