Commit c78e9363 authored by Mel Gorman, committed by Linus Torvalds

mm: do not walk all of system memory during show_mem

It has been reported on very large machines that show_mem is taking almost
5 minutes to display information.  This is a serious problem if there is
an OOM storm.  The bulk of the cost is in show_mem doing a very expensive
PFN walk to give us the following information

  Total RAM:       Also available as totalram_pages
  Highmem pages:   Also available as totalhigh_pages
  Reserved pages:  Can be inferred from the zone structure
  Shared pages:    PFN walk required
  Unshared pages:  PFN walk required
  Quick pages:     Per-cpu walk required

Only the shared/unshared counts require a full PFN walk, but that
information is useless.  It is also inaccurate as page pins of unshared
pages would be accounted for as shared.  Even if the information was
accurate, I'm struggling to think how the shared/unshared information
could be useful for debugging OOM conditions.  Maybe it was useful before
rmap existed when reclaiming shared pages was costly but it is less
relevant today.

The PFN walk could be optimised a bit but why bother as the information is
useless.  This patch deletes the PFN walker and infers the total RAM,
highmem and reserved pages count from struct zone.  It omits the
shared/unshared page usage on the grounds that it is useless.  It also
corrects the reporting of HighMem as HighMem/MovableOnly as ZONE_MOVABLE
has similar problems to HighMem with respect to lowmem/highmem exhaustion.

Signed-off-by: Mel Gorman <>
Cc: David Rientjes <>
Acked-by: KOSAKI Motohiro <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent 4a099fb4
--- a/lib/show_mem.c
+++ b/lib/show_mem.c
@@ -12,8 +12,7 @@
 void show_mem(unsigned int filter)
 {
 	pg_data_t *pgdat;
-	unsigned long total = 0, reserved = 0, shared = 0,
-		nonshared = 0, highmem = 0;
+	unsigned long total = 0, reserved = 0, highmem = 0;
 
 	printk("Mem-Info:\n");
 	show_free_areas(filter);
@@ -22,43 +21,27 @@ void show_mem(unsigned int filter)
 	for_each_online_pgdat(pgdat) {
-		unsigned long i, flags;
+		unsigned long flags;
+		int zoneid;
 
 		pgdat_resize_lock(pgdat, &flags);
-		for (i = 0; i < pgdat->node_spanned_pages; i++) {
-			struct page *page;
-			unsigned long pfn = pgdat->node_start_pfn + i;
-
-			if (unlikely(!(i % MAX_ORDER_NR_PAGES)))
-				touch_nmi_watchdog();
-
-			if (!pfn_valid(pfn))
+		for (zoneid = 0; zoneid < MAX_NR_ZONES; zoneid++) {
+			struct zone *zone = &pgdat->node_zones[zoneid];
+			if (!populated_zone(zone))
 				continue;
 
-			page = pfn_to_page(pfn);
-
-			if (PageHighMem(page))
-				highmem++;
+			total += zone->present_pages;
+			reserved = zone->present_pages - zone->managed_pages;
 
-			if (PageReserved(page))
-				reserved++;
-			else if (page_count(page) == 1)
-				nonshared++;
-			else if (page_count(page) > 1)
-				shared += page_count(page) - 1;
+			if (is_highmem_idx(zoneid))
+				highmem += zone->present_pages;
 		}
 		pgdat_resize_unlock(pgdat, &flags);
 	}
 
 	printk("%lu pages RAM\n", total);
-	printk("%lu pages HighMem\n", highmem);
+	printk("%lu pages HighMem/MovableOnly\n", highmem);
 	printk("%lu pages reserved\n", reserved);
-	printk("%lu pages shared\n", shared);
-	printk("%lu pages non-shared\n", nonshared);
 
 #ifdef CONFIG_QUICKLIST
 	printk("%lu pages in pagetable cache\n",