Commit db71daab authored by Haavard Skinnemoen, committed by Linus Torvalds

[PATCH] Generic ioremap_page_range: flush_cache_vmap

The existing implementation of ioremap_page_range(), which was taken
from i386, does this:

	flush_cache_all();
	/* modify page tables */
	flush_tlb_all();

I think this is overly defensive, so this patch changes the generic
implementation to do:

	/* modify page tables */
	flush_cache_vmap(start, end);

instead, which is similar to what vmalloc() does. This should still
be correct because we never modify existing PTEs. According to
James Bottomley:

The problem the flush_tlb_all() is trying to solve is to avoid stale tlb
entries in the ioremap area.  We're just being conservative by flushing
on both map and unmap.  Technically what vmalloc/vfree does (only flush
the tlb on unmap) is just fine because it means that the only tlb
entries in the remap area must belong to in-use mappings.
Signed-off-by: Haavard Skinnemoen <>
Cc: Richard Henderson <>
Cc: Ivan Kokshaysky <>
Cc: Russell King <>
Cc: Mikael Starvik <>
Cc: Andi Kleen <>
Cc: <>
Cc: Ralf Baechle <>
Cc: Kyle McMartin <>
Cc: Martin Schwidefsky <>
Cc: Paul Mundt <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent 74588d8b
@@ -76,8 +76,6 @@ int ioremap_page_range(unsigned long addr,
 	BUG_ON(addr >= end);
 
-	flush_cache_all();
-
 	start = addr;
 	phys_addr -= addr;
 	pgd = pgd_offset_k(addr);
@@ -88,7 +86,7 @@ int ioremap_page_range(unsigned long addr,
 	} while (pgd++, addr = next, addr != end);
 
-	flush_tlb_all();
+	flush_cache_vmap(start, end);
 
 	return err;