From: Naoya Horiguchi <naoya.horiguchi@nec.com>
Date: 17 Jun 2021
Subject: [PATCH mmotm v1] mm/hwpoison: disable pcp for page_handle_poison()

The recent patch "mm/page_alloc: allow high-order pages to be stored
on the per-cpu lists" makes the kernel decide whether to use the pcp
lists via pcp_allowed_order(), which breaks soft-offline for hugetlb
pages.
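
For reference, the order check added by that patch has roughly the
following shape (a sketch, not a verbatim quote of the upstream code):

static inline bool pcp_allowed_order(unsigned int order)
{
	/* Orders up to PAGE_ALLOC_COSTLY_ORDER may use the pcp lists. */
	if (order <= PAGE_ALLOC_COSTLY_ORDER)
		return true;
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	/* THP-sized (pageblock_order) pages may now use them as well. */
	if (order == pageblock_order)
		return true;
#endif
	return false;
}

On x86_64 with 4kB base pages a 2MB hugetlb page has order 9, which
equals pageblock_order, so such a page is now considered pcp-eligible
when it is freed.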

Soft-offline dissolves a migration source page and then removes it
from the buddy free list, so it assumes that any subpage of the
soft-offlined hugepage is recognized as a buddy page just after
returning from dissolve_free_huge_page(). However, pcp_allowed_order()
now returns true for hugetlb-sized orders, so this assumption no
longer holds.
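
Concretely, the freeing fast path after that patch routes the page
roughly as follows (again a sketch, not a verbatim quote):

static inline void free_the_page(struct page *page, unsigned int order)
{
	if (pcp_allowed_order(order))		/* Via pcp? */
		free_unref_page(page, order);
	else
		__free_pages_ok(page, order, FPI_NONE);
}

So the page freed by dissolve_free_huge_page() can land on a per-cpu
list instead of the buddy free list, and the subsequent
take_page_off_buddy(), which only walks the buddy free lists, fails.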

So disable the pcp lists around dissolve_free_huge_page() and
take_page_off_buddy() to prevent soft-offlined hugepages from being
linked into pcp lists. Soft-offline is not a common event, so the
performance impact should be minimal. And since the optimization in
Mel's patch could still benefit hugetlb, zone_pcp_disable() is called
only in the hwpoison context.
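
For context, zone_pcp_disable() serializes on pcp_batch_high_lock,
forces the zone's pcp high/batch limits down so the lists cannot hold
pages, and drains them; zone_pcp_enable() restores the limits and
drops the lock. Roughly (a sketch of the existing helper, not a
verbatim quote):

void zone_pcp_disable(struct zone *zone)
{
	mutex_lock(&pcp_batch_high_lock);
	__zone_set_pageset_high_and_batch(zone, 0, 1);
	__drain_all_pages(zone, true);
}

While the pcp lists are disabled, the page freed by
dissolve_free_huge_page() goes straight to the buddy allocator, where
take_page_off_buddy() can find it.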

Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
---
mm/memory-failure.c | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)

diff --git v5.13-rc6-mmotm-2021-06-15-20-24/mm/memory-failure.c v5.13-rc6-mmotm-2021-06-15-20-24_patched/mm/memory-failure.c
index 1842822a10da..593079766655 100644
--- v5.13-rc6-mmotm-2021-06-15-20-24/mm/memory-failure.c
+++ v5.13-rc6-mmotm-2021-06-15-20-24_patched/mm/memory-failure.c
@@ -66,6 +66,19 @@ int sysctl_memory_failure_recovery __read_mostly = 1;

atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0);

+static bool __page_handle_poison(struct page *page)
+{
+	bool ret;
+
+	zone_pcp_disable(page_zone(page));
+	ret = dissolve_free_huge_page(page);
+	if (!ret)
+		ret = take_page_off_buddy(page);
+	zone_pcp_enable(page_zone(page));
+
+	return ret;
+}
+
 static bool page_handle_poison(struct page *page, bool hugepage_or_freepage, bool release)
 {
 	if (hugepage_or_freepage) {
@@ -73,7 +86,7 @@ static bool page_handle_poison(struct page *page, bool hugepage_or_freepage, boo
 		 * Doing this check for free pages is also fine since dissolve_free_huge_page
 		 * returns 0 for non-hugetlb pages as well.
 		 */
-		if (dissolve_free_huge_page(page) || !take_page_off_buddy(page))
+		if (!__page_handle_poison(page))
 			/*
 			 * We could fail to take off the target page from buddy
 			 * for example due to racy page allocation, but that's
@@ -986,7 +999,7 @@ static int me_huge_page(struct page *p, unsigned long pfn)
 		 */
 		if (PageAnon(hpage))
 			put_page(hpage);
-		if (!dissolve_free_huge_page(p) && take_page_off_buddy(p)) {
+		if (__page_handle_poison(p)) {
 			page_ref_inc(p);
 			res = MF_RECOVERED;
 		}
@@ -1441,7 +1454,7 @@ static int memory_failure_hugetlb(unsigned long pfn, int flags)
 		res = get_hwpoison_page(p, flags);
 		if (!res) {
 			res = MF_FAILED;
-			if (!dissolve_free_huge_page(p) && take_page_off_buddy(p)) {
+			if (__page_handle_poison(p)) {
 				page_ref_inc(p);
 				res = MF_RECOVERED;
 			}
--
2.25.1