Subject: Re: [External] Re: [PATCH v5 15/21] mm/hugetlb: Set the PageHWPoison to the raw error page
On Fri, Nov 20, 2020 at 4:19 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Fri 20-11-20 14:43:19, Muchun Song wrote:
> > Because we reuse the first tail page, setting PageHWPoison on a
> > tail page may effectively set PageHWPoison on a series of pages.
> > So we can use head[4].private to record the real error page index
> > and set PageHWPoison on the raw error page later.
>
> This really begs more explanation. Maybe I misremember, but if there
> is a HWPoison hole in a hugepage then the whole page is demolished, no?
> If that is the case then why do we care about tail pages?

It seems that I should make the commit log clearer. If there is
a HWPoison hole in a HugeTLB page, we should dissolve the HugeTLB
page. That means we set HWPoison on the raw error page (not the head
page) and free the HugeTLB page to the buddy allocator. Then we will
remove only one HWPoison page from the buddy free list. See
take_page_off_buddy() for more details. Thanks.
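
To make that concrete, here is a condensed sketch of the dissolve
path (illustrative only, not the exact kernel code: it is boiled down
from dissolve_free_huge_page() in mm/hugetlb.c around v5.10 plus this
patch, with the locking, refcounting, and error handling omitted):

int dissolve_free_huge_page(struct page *page)
{
	struct page *head = compound_head(page);
	struct hstate *h = page_hstate(head);
	int nid = page_to_nid(head);

	/*
	 * Remember which subpage is the raw error page. After the
	 * vmemmap has been re-allocated, __free_hugepage() moves the
	 * HWPoison flag from the head page to exactly that subpage.
	 */
	set_subpage_hwpoison(head, page);
	list_del(&head->lru);
	h->free_huge_pages--;
	h->free_huge_pages_node[nid]--;
	update_and_free_page(h, head);	/* returns the range to buddy */
	return 0;
}

Once the pages are back on the buddy free list, the memory failure
code uses take_page_off_buddy() to pull only the single poisoned base
page off the list, so all the other subpages of the former HugeTLB
page stay usable.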

>
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  mm/hugetlb.c         | 11 +++--------
> >  mm/hugetlb_vmemmap.h | 39 +++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 42 insertions(+), 8 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 055604d07046..b853aacd5c16 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1383,6 +1383,7 @@ static void __free_hugepage(struct hstate *h, struct page *page)
> >  	int i;
> >
> >  	alloc_huge_page_vmemmap(h, page);
> > +	subpage_hwpoison_deliver(page);
> >
> >  	for (i = 0; i < pages_per_huge_page(h); i++) {
> >  		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
> > @@ -1944,14 +1945,8 @@ int dissolve_free_huge_page(struct page *page)
> >  		int nid = page_to_nid(head);
> >  		if (h->free_huge_pages - h->resv_huge_pages == 0)
> >  			goto out;
> > -		/*
> > -		 * Move PageHWPoison flag from head page to the raw error page,
> > -		 * which makes any subpages rather than the error page reusable.
> > -		 */
> > -		if (PageHWPoison(head) && page != head) {
> > -			SetPageHWPoison(page);
> > -			ClearPageHWPoison(head);
> > -		}
> > +
> > +		set_subpage_hwpoison(head, page);
> >  		list_del(&head->lru);
> >  		h->free_huge_pages--;
> >  		h->free_huge_pages_node[nid]--;
> > diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> > index 779d3cb9333f..65e94436ffff 100644
> > --- a/mm/hugetlb_vmemmap.h
> > +++ b/mm/hugetlb_vmemmap.h
> > @@ -20,6 +20,29 @@ void __init gather_vmemmap_pgtable_init(struct huge_bootmem_page *m,
> >  void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
> >  void free_huge_page_vmemmap(struct hstate *h, struct page *head);
> >
> > +static inline void subpage_hwpoison_deliver(struct page *head)
> > +{
> > +	struct page *page = head;
> > +
> > +	if (PageHWPoison(head))
> > +		page = head + page_private(head + 4);
> > +
> > +	/*
> > +	 * Move PageHWPoison flag from head page to the raw error page,
> > +	 * which makes any subpages rather than the error page reusable.
> > +	 */
> > +	if (page != head) {
> > +		SetPageHWPoison(page);
> > +		ClearPageHWPoison(head);
> > +	}
> > +}
> > +
> > +static inline void set_subpage_hwpoison(struct page *head, struct page *page)
> > +{
> > +	if (PageHWPoison(head))
> > +		set_page_private(head + 4, page - head);
> > +}
> > +
> >  static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
> >  {
> >  	return h->nr_free_vmemmap_pages;
> > @@ -56,6 +79,22 @@ static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> >  {
> >  }
> >
> > +static inline void subpage_hwpoison_deliver(struct page *head)
> > +{
> > +}
> > +
> > +static inline void set_subpage_hwpoison(struct page *head, struct page *page)
> > +{
> > +	/*
> > +	 * Move PageHWPoison flag from head page to the raw error page,
> > +	 * which makes any subpages rather than the error page reusable.
> > +	 */
> > +	if (PageHWPoison(head) && page != head) {
> > +		SetPageHWPoison(page);
> > +		ClearPageHWPoison(head);
> > +	}
> > +}
> > +
> >  static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
> >  {
> >  	return 0;
> > --
> > 2.11.0
> >
>
> --
> Michal Hocko
> SUSE Labs



--
Yours,
Muchun
