    mm, oom, compaction: prevent from should_compact_retry looping for ever for costly orders · 86a294a8
    Michal Hocko authored
    "mm: consider compaction feedback also for costly allocation" has
    removed the upper bound for the reclaim/compaction retries based on the
    number of reclaimed pages for costly orders.  While this is desirable
    the patch did miss a mis interaction between reclaim, compaction and the
    retry logic.  The direct reclaim tries to get zones over min watermark
    while compaction backs off and returns COMPACT_SKIPPED when all zones
    are below low watermark + 1<<order gap.  If we are getting really close
    to OOM then __compaction_suitable can keep returning COMPACT_SKIPPED a
    high order request (e.g.  hugetlb order-9) while the reclaim is not able
    to release enough pages to get us over low watermark.  The reclaim is
    still able to make some progress (usually trashing over few remaining
    pages) so we are not able to break out from the loop.
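    As a rough sketch of the two checks pulling in different directions
    (zone_watermark_ok, min_wmark_pages and low_wmark_pages are the real
    kernel helpers, but the two wrapper functions and the exact gap and
    flags handling below are made up purely for illustration):

        /* Illustration only -- not the actual mm/ code. */
        static bool reclaim_target_met(struct zone *zone, int order,
                                       int classzone_idx, int alloc_flags)
        {
                /* direct reclaim is satisfied once the min watermark is met */
                return zone_watermark_ok(zone, order, min_wmark_pages(zone),
                                         classzone_idx, alloc_flags);
        }

        static bool compaction_target_met(struct zone *zone, int order,
                                          int classzone_idx, int alloc_flags)
        {
                /* compaction wants roughly low watermark + 1<<order pages free */
                unsigned long watermark = low_wmark_pages(zone) + (1UL << order);

                /* below this gap __compaction_suitable returns COMPACT_SKIPPED */
                return zone_watermark_ok(zone, 0, watermark, classzone_idx,
                                         alloc_flags);
        }

    Close to OOM the reclaim can keep the first check barely satisfied
    while the second one never passes, which is exactly the loop described
    above.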
    
    I have seen this happening with the same test described in "mm: consider
    compaction feedback also for costly allocation" on a swapless system.
    The original problem got resolved by "vmscan: consider classzone_idx in
    compaction_ready" but it shows how things might go wrong when we
    approach the oom event horizon.
    
    The reason why compaction requires being over the low rather than the
    min watermark is not clear to me.  This check has been there
    essentially since 56de7263 ("mm: compaction: direct compact when a
    high-order allocation fails").  It is clearly an implementation detail
    though and we shouldn't pull it into the generic retry logic, while we
    should be able to cope with such an eventuality.  The only place in
    should_compact_retry where we retry without any upper bound is the
    compaction_withdrawn() case.
    
    Introduce a compaction_zonelist_suitable function which checks the
    given zonelist and returns true only if there is at least one zone
    which would unblock __compaction_suitable if more memory got reclaimed.
    In this implementation it checks __compaction_suitable with
    NR_FREE_PAGES plus part of the reclaimable memory as the target for the
    watermark check.  The reclaimable memory is reduced linearly by the
    allocation order.  The idea is that we do not want to reclaim all the
    remaining memory for a single allocation request just to unblock
    __compaction_suitable, which doesn't guarantee we will make any further
    progress.
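    A sketch of what such a helper can look like, based purely on the
    description above (the exact signature, the classzone_idx handling and
    the snapshotting of NR_FREE_PAGES differ in the final patch, see also
    the fixup notes below):

        bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
                                          int alloc_flags)
        {
                struct zone *zone;
                struct zoneref *z;

                /*
                 * At least one zone has to be able to pass
                 * __compaction_suitable if some more memory gets reclaimed.
                 */
                for_each_zone_zonelist_nodemask(zone, z, ac->zonelist,
                                ac->high_zoneidx, ac->nodemask) {
                        unsigned long available;

                        /*
                         * Use only a part of the reclaimable memory, scaled
                         * down by the order, as the watermark target so that
                         * a single request does not pin down all of it.
                         */
                        available = zone_reclaimable_pages(zone) / order;
                        available += zone_page_state(zone, NR_FREE_PAGES);

                        if (__compaction_suitable(zone, order, alloc_flags,
                                        ac->classzone_idx, available) !=
                                                COMPACT_SKIPPED)
                                return true;
                }

                return false;
        }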
    
    The new helper is then used if compaction_withdrawn() feedback was
    provided, so we do not retry if there is no outlook for further
    progress.  !costly requests shouldn't be affected much - e.g.  order-2
    pages would require at least 64kB on the reclaimable LRUs while order-9
    would need at least 32M, which should be enough not to lock up.
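    In should_compact_retry this roughly amounts to the following (again
    only a sketch, the real check sits among the other compaction feedback
    handling):

        /*
         * Compaction backed off or was deferred.  Retrying makes sense
         * only if reclaiming some more memory could actually unblock
         * compaction in at least one zone.
         */
        if (compaction_withdrawn(compact_result))
                return compaction_zonelist_suitable(ac, order, alloc_flags);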
    
    [vbabka@suse.cz: fix classzone_idx vs. high_zoneidx usage in compaction_zonelist_suitable]
    [akpm@linux-foundation.org: fix it for Mel's mm-page_alloc-remove-field-from-alloc_context.patch]
    Signed-off-by: Michal Hocko <mhocko@suse.com>
    Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
    Acked-by: Vlastimil Babka <vbabka@suse.cz>
    Cc: David Rientjes <rientjes@google.com>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Joonsoo Kim <js1304@gmail.com>
    Cc: Mel Gorman <mgorman@techsingularity.net>
    Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
    Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>