Commit 72853e29 authored by Mel Gorman, committed by Linus Torvalds

mm: page allocator: update free page counters after pages are placed on the free list

When allocating a page, the system uses NR_FREE_PAGES counters to
determine if watermarks would remain intact after the allocation was made.
This check is made without interrupts disabled or the zone lock held and
so is race-prone by nature.  Unfortunately, when pages are being freed in
batch, the counters are updated before the pages are added to the list.
During this window, the counters are misleading as the pages do not exist
yet.  When under significant pressure on systems with large numbers of
CPUs, it's possible for processes to make progress even though they should
have been stalled.  This is particularly problematic if a number of the
processes are using GFP_ATOMIC as the min watermark can be accidentally
breached and in extreme cases, the system can livelock.
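
As a rough illustration of the window, here is a userspace model of the old ordering (not the kernel code itself: nr_free stands in for the zone's NR_FREE_PAGES counter and list_pages for pages actually linked on the buddy free lists; the function name is invented for the sketch):

	/* Ordering before the patch, modelled in plain C. */
	static long nr_free;     /* models NR_FREE_PAGES */
	static long list_pages;  /* models pages on the free lists */

	static void free_batch_before(long count)
	{
		nr_free += count;     /* counter raised first... */
		/*
		 * Race window: a concurrent watermark check reading
		 * nr_free here sees count free pages that are not yet
		 * on any free list, so an allocation that should have
		 * stalled can be allowed through.
		 */
		list_pages += count;  /* ...pages placed on the list later */
	}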

This patch updates the counters after the pages have been added to the
list.  This makes the allocator more cautious with respect to preserving
the watermarks and mitigates livelock possibilities.
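
Sketched in the same model, the fix simply swaps the two steps (the actual change is in the diff below):

	static void free_batch_after(long count)
	{
		list_pages += count;  /* pages reach the list first */
		nr_free += count;     /* counter follows */
	}

Undercounting is the safe direction: at worst a racing allocation stalls briefly when it could have succeeded, rather than the min watermark being breached.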

[avoid modifying incoming args]
Signed-off-by: Mel Gorman <>
Reviewed-by: Rik van Riel <>
Reviewed-by: Minchan Kim <>
Reviewed-by: KAMEZAWA Hiroyuki <>
Reviewed-by: Christoph Lameter <>
Reviewed-by: KOSAKI Motohiro <>
Acked-by: Johannes Weiner <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent 5ee28a44
@@ -588,13 +588,13 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	int migratetype = 0;
 	int batch_free = 0;
+	int to_free = count;
 	zone->all_unreclaimable = 0;
 	zone->pages_scanned = 0;
-	__mod_zone_page_state(zone, NR_FREE_PAGES, count);
-	while (count) {
+	while (to_free) {
 		struct page *page;
 		struct list_head *list;
@@ -619,8 +619,9 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 			__free_one_page(page, zone, 0, page_private(page));
 			trace_mm_page_pcpu_drain(page, 0, page_private(page));
-		} while (--count && --batch_free && !list_empty(list));
+		} while (--to_free && --batch_free && !list_empty(list));
+	__mod_zone_page_state(zone, NR_FREE_PAGES, count);
@@ -631,8 +632,8 @@ static void free_one_page(struct zone *zone, struct page *page, int order,
 	zone->all_unreclaimable = 0;
 	zone->pages_scanned = 0;
-	__mod_zone_page_state(zone, NR_FREE_PAGES, 1 << order);
 	__free_one_page(page, zone, order, migratetype);
+	__mod_zone_page_state(zone, NR_FREE_PAGES, 1 << order);