Commit 3a1086fb authored by Vlastimil Babka, committed by Linus Torvalds

mm: always steal split buddies in fallback allocations

When an allocation falls back to another migratetype, it will steal a page
of the highest available order and, depending on this order and the desired
migratetype, it might also steal the rest of the free pages from the same
pageblock.

Given the preference for the highest available order, that order is likely to
be higher than the desired order, so the stolen buddy page gets split.  The
remaining pages after the split are currently stolen only when the rest of the
free pages are stolen as well.  This can however lead to situations where for
MOVABLE allocations we split e.g. an order-4 fallback UNMOVABLE page, but
steal only the order-0 page.  Then on the next MOVABLE allocation (which may
be batched to fill the pcplists) we split another order-3 or higher page, and
so on.  By stealing all pages that we have split, we can avoid this further
stealing.
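
To make the scenario concrete, the following stand-alone sketch (illustrative
user-space code only: the names free_count, alloc_order0 and the
two-migratetype setup are invented, and whole-pageblock stealing and the
pcplists are not modelled) serves order-0 MOVABLE requests from a pool of free
order-4 UNMOVABLE pages, once with the split remainder returned to the
fallback migratetype and once with the split buddies stolen:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define SIM_MAX_ORDER 5
enum mt { UNMOVABLE, MOVABLE, NR_MT };

/* free page counts per (migratetype, order); a stand-in for the free lists */
static int free_count[NR_MT][SIM_MAX_ORDER];
static int fallback_events;

/*
 * Satisfy an order-0 request of migratetype 'want'.  On fallback we take the
 * highest-order page of the other migratetype (as described above) and split
 * it; 'remainder_mt' decides which free lists receive the split-off buddies,
 * which is the knob this patch changes for MOVABLE allocations.
 */
static bool alloc_order0(enum mt want, enum mt remainder_mt)
{
	int order, o;

	/* fast path: serve from our own free lists, smallest order first */
	for (order = 0; order < SIM_MAX_ORDER; order++) {
		if (free_count[want][order]) {
			free_count[want][order]--;
			for (o = 0; o < order; o++)
				free_count[want][o]++;	/* remainder kept */
			return true;
		}
	}

	/* fallback: steal the highest-order page of the other migratetype */
	for (order = SIM_MAX_ORDER - 1; order >= 0; order--) {
		if (free_count[!want][order]) {
			free_count[!want][order]--;
			fallback_events++;
			for (o = 0; o < order; o++)
				free_count[remainder_mt][o]++;
			return true;
		}
	}
	return false;
}

static void run(enum mt remainder_mt, const char *label)
{
	int i;

	memset(free_count, 0, sizeof(free_count));
	free_count[UNMOVABLE][4] = 4;	/* four free order-4 UNMOVABLE pages */
	fallback_events = 0;
	for (i = 0; i < 16; i++)
		alloc_order0(MOVABLE, remainder_mt);
	printf("%s: %d fallback events\n", label, fallback_events);
}

int main(void)
{
	run(UNMOVABLE, "remainder kept by fallback type (old)");
	run(MOVABLE, "split buddies stolen (new)");
	return 0;
}

In this toy model the first variant falls back on every request and splits a
fresh order-4 page for each of the first few, while the second falls back once
and serves the remaining requests from the stolen buddies.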

This patch therefore adjusts the page stealing so that buddy pages created by
the split are always stolen.  This has an effect only on MOVABLE allocations,
as RECLAIMABLE and UNMOVABLE allocations already always do that, in addition
to stealing the rest of the free pages from the pageblock.  The change also
allows us to simplify try_to_steal_freepages() and factor out the CMA
handling.
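
The factored-out CMA case boils down to the choice of which migratetype the
buddy pages left by the split are queued under.  A minimal stand-alone sketch
of that rule (illustrative only; the enum and helper below are stand-ins, not
the kernel definitions):

#include <stdbool.h>
#include <stdio.h>

enum migratetype { MT_UNMOVABLE, MT_RECLAIMABLE, MT_MOVABLE, MT_CMA };

static bool is_cma(enum migratetype mt)
{
	return mt == MT_CMA;
}

/*
 * After this patch: unless we fell back to CMA, the buddies created by
 * splitting the stolen page always go to the requested migratetype's free
 * lists; for CMA fallbacks the excess buddies must stay on the CMA lists.
 */
static enum migratetype buddy_type(enum migratetype start_type,
				   enum migratetype fallback_type)
{
	return is_cma(fallback_type) ? fallback_type : start_type;
}

int main(void)
{
	/* MOVABLE request falling back to UNMOVABLE: remainder is stolen */
	printf("%d\n", buddy_type(MT_MOVABLE, MT_UNMOVABLE) == MT_MOVABLE);
	/* any request falling back to CMA: remainder stays on CMA lists */
	printf("%d\n", buddy_type(MT_UNMOVABLE, MT_CMA) == MT_CMA);
	return 0;
}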

According to Mel, it has been intended since the beginning that buddy pages
left over after a split would always be stolen, but it doesn't seem like that
was ever the case until commit 47118af0 ("mm: mmzone: MIGRATE_CMA
migration type added").  That commit unintentionally introduced the behavior,
but it was then reverted by commit 0cbef29a ("mm:
__rmqueue_fallback() should respect pageblock type").  Neither commit included
an explanation.

My evaluation with stress-highalloc from mmtests shows about 2.5x
reduction of page stealing events for MOVABLE allocations, without
affecting the page stealing events for other allocation migratetypes.
Signed-off-by: Vlastimil Babka <>
Acked-by: Mel Gorman <>
Cc: Zhang Yanfei <>
Acked-by: Minchan Kim <>
Cc: David Rientjes <>
Cc: Rik van Riel <>
Cc: "Aneesh Kumar K.V" <>
Cc: "Kirill A. Shutemov" <>
Cc: Johannes Weiner <>
Cc: Joonsoo Kim <>
Cc: Michal Hocko <>
Cc: KOSAKI Motohiro <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent 99592d59
mm/page_alloc.c
@@ -1125,33 +1125,18 @@ static void change_pageblock_range(struct page *pageblock_page,
 /*
  * If breaking a large block of pages, move all free pages to the preferred
  * allocation list. If falling back for a reclaimable kernel allocation, be
- * more aggressive about taking ownership of free pages.
- *
- * On the other hand, never change migration type of MIGRATE_CMA pageblocks
- * nor move CMA pages to different free lists. We don't want unmovable pages
- * to be allocated from MIGRATE_CMA areas.
- *
- * Returns the allocation migratetype if free pages were stolen, or the
- * fallback migratetype if it was decided not to steal.
+ * more aggressive about taking ownership of free pages. If we claim more than
+ * half of the pageblock, change pageblock's migratetype as well.
  */
-static int try_to_steal_freepages(struct zone *zone, struct page *page,
+static void try_to_steal_freepages(struct zone *zone, struct page *page,
 				  int start_type, int fallback_type)
 {
 	int current_order = page_order(page);
 
-	/*
-	 * When borrowing from MIGRATE_CMA, we need to release the excess
-	 * buddy pages to CMA itself. We also ensure the freepage_migratetype
-	 * is set to CMA so it is returned to the correct freelist in case
-	 * the page ends up being not actually allocated from the pcp lists.
-	 */
-	if (is_migrate_cma(fallback_type))
-		return fallback_type;
-
 	/* Take ownership for orders >= pageblock_order */
 	if (current_order >= pageblock_order) {
 		change_pageblock_range(page, current_order, start_type);
-		return start_type;
+		return;
 	}
 
 	if (current_order >= pageblock_order / 2 ||
@@ -1165,11 +1150,7 @@ static int try_to_steal_freepages(struct zone *zone, struct page *page,
 		if (pages >= (1 << (pageblock_order-1)) ||
 				page_group_by_mobility_disabled)
 			set_pageblock_migratetype(page, start_type);
-
-		return start_type;
 	}
-
-	return fallback_type;
 }
 
 /* Remove an element from the buddy allocator from the fallback list */
@@ -1179,14 +1160,15 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 	struct free_area *area;
 	unsigned int current_order;
 	struct page *page;
-	int migratetype, new_type, i;
 
 	/* Find the largest possible block of pages in the other list */
 	for (current_order = MAX_ORDER-1;
 				current_order >= order && current_order <= MAX_ORDER-1;
 				--current_order) {
+		int i;
 		for (i = 0;; i++) {
-			migratetype = fallbacks[start_migratetype][i];
+			int migratetype = fallbacks[start_migratetype][i];
+			int buddy_type = start_migratetype;
 
 			/* MIGRATE_RESERVE handled later if necessary */
 			if (migratetype == MIGRATE_RESERVE)
@@ -1200,22 +1182,36 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 					struct page, lru);
 			area->nr_free--;
 
-			new_type = try_to_steal_freepages(zone, page,
-							  start_migratetype,
-							  migratetype);
+			if (!is_migrate_cma(migratetype)) {
+				try_to_steal_freepages(zone, page,
+							start_migratetype,
+							migratetype);
+			} else {
+				/*
+				 * When borrowing from MIGRATE_CMA, we need to
+				 * release the excess buddy pages to CMA
+				 * itself, and we do not try to steal extra
+				 * free pages.
+				 */
+				buddy_type = migratetype;
+			}
 
 			/* Remove the page from the freelists */
 			list_del(&page->lru);
 			rmv_page_order(page);
 
 			expand(zone, page, order, current_order, area,
-					new_type);
-			/* The freepage_migratetype may differ from pageblock's
+					buddy_type);
+
+			/*
+			 * The freepage_migratetype may differ from pageblock's
 			 * migratetype depending on the decisions in
-			 * try_to_steal_freepages. This is OK as long as it does
-			 * not differ for MIGRATE_CMA type.
+			 * try_to_steal_freepages(). This is OK as long as it
+			 * does not differ for MIGRATE_CMA pageblocks. For CMA
+			 * we need to make sure unallocated pages flushed from
+			 * pcp lists are returned to the correct freelist.
 			 */
-			set_freepage_migratetype(page, new_type);
+			set_freepage_migratetype(page, buddy_type);
 
 			trace_mm_page_alloc_extfrag(page, order, current_order,
 				start_migratetype, migratetype);