 
    Subject: Re: [PATCH v2 3/6] mm: hugetlb: fix a race between freeing and dissolving the page
    On 1/7/21 12:40 AM, Michal Hocko wrote:
    > On Wed 06-01-21 12:58:29, Mike Kravetz wrote:
    >> On 1/6/21 8:56 AM, Michal Hocko wrote:
    >>> On Wed 06-01-21 16:47:36, Muchun Song wrote:
    >>>> There is a race condition between __free_huge_page()
    >>>> and dissolve_free_huge_page().
    >>>>
    >>>> CPU0:                              CPU1:
    >>>>
    >>>> // page_count(page) == 1
    >>>> put_page(page)
    >>>>   __free_huge_page(page)
    >>>>                                    dissolve_free_huge_page(page)
    >>>>                                      spin_lock(&hugetlb_lock)
    >>>>                                      // PageHuge(page) && !page_count(page)
    >>>>                                      update_and_free_page(page)
    >>>>                                      // page is freed to the buddy
    >>>>                                      spin_unlock(&hugetlb_lock)
    >>>>   spin_lock(&hugetlb_lock)
    >>>>   clear_page_huge_active(page)
    >>>>   enqueue_huge_page(page)
    >>>>   // It is wrong, the page is already freed
    >>>>   spin_unlock(&hugetlb_lock)
    >>>>
    >>>> The race window is between put_page() and the spin_lock() in
    >>>> __free_huge_page().
    >>>
    >>> The race window really is between put_page and dissolve_free_huge_page.
    >>> And the result is that the put_page path would clobber an unrelated page
    >>> (either a free or an already reused page), which is quite serious.
    >>> Fortunately, pages are dissolved very rarely. I believe a user would
    >>> need to be privileged to hit this intentionally.
    >>>
    >>>> We should make sure that the page is already on the free list
    >>>> when it is dissolved.
    >>>
    >>> Another option would be to check for PageHuge in __free_huge_page. Have
    >>> you considered that rather than adding yet another state? The scope of
    >>> the spinlock would have to be extended. If that sounds too tricky, can
    >>> we check the page->lru in the dissolve path? If the page is still
    >>> PageHuge with a reference count of 0, there shouldn't be many places
    >>> where it could be queued, right?
    >>
    >> The tricky part with expanding lock scope will be the potential call to
    >> hugepage_subpool_put_pages as it may also try to acquire the hugetlb_lock.
    >
    > Can we rearrange the code and move hugepage_subpool_put_pages after all
    > this is done? Or is there any strong reason for the particular ordering?

    The reservation code is so fragile, I always get nervous when making
    any changes. However, the straightforward patch below passes some
    simple testing. The only difference I can see is that global counts
    are adjusted before sub-pool counts. This should not be an issue, as
    global and sub-pool counts are adjusted independently (not under the
    same lock). Allocation code checks sub-pool counts before global
    counts. So, there is a SMALL potential that a racing allocation which
    previously succeeded would now fail. I do not think this is an issue
    in practice.
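
    For reference, the allocation-side ordering mentioned above looks
    roughly like the sketch below. This is a heavily condensed
    illustration of the v5.10-era alloc_huge_page() path, not the actual
    source; the alloc_huge_page_sketch() name is made up, and the
    reservation map checks, cgroup charging, and error unwinding are all
    elided.

    /*
     * Condensed sketch of the allocation-side ordering referenced above.
     * NOT the actual mm/hugetlb.c source. The point is only that the
     * sub-pool count is charged (under spool->lock) before hugetlb_lock
     * is taken for the global counts, so the two checks are not atomic
     * with respect to each other.
     */
    struct page *alloc_huge_page_sketch(struct vm_area_struct *vma,
                                        unsigned long addr)
    {
            struct hugepage_subpool *spool = subpool_vma(vma);
            struct hstate *h = hstate_vma(vma);
            struct page *page;

            /* 1) Charge the sub-pool first. */
            if (hugepage_subpool_get_pages(spool, 1) < 0)
                    return ERR_PTR(-ENOSPC);

            /* 2) Only then take hugetlb_lock and dequeue from the global pool. */
            spin_lock(&hugetlb_lock);
            page = dequeue_huge_page_vma(h, vma, addr, 0, 0);
            spin_unlock(&hugetlb_lock);

            return page;    /* may be NULL if the global pool was empty */
    }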

    diff --git a/mm/hugetlb.c b/mm/hugetlb.c
    index 3b38ea958e95..658593840212 100644
    --- a/mm/hugetlb.c
    +++ b/mm/hugetlb.c
    @@ -1395,6 +1395,13 @@ static void __free_huge_page(struct page *page)
     		(struct hugepage_subpool *)page_private(page);
     	bool restore_reserve;
     
    +	spin_lock(&hugetlb_lock);
    +	/* check for race with dissolve_free_huge_page/update_and_free_page */
    +	if (!PageHuge(page)) {
    +		spin_unlock(&hugetlb_lock);
    +		return;
    +	}
    +
     	VM_BUG_ON_PAGE(page_count(page), page);
     	VM_BUG_ON_PAGE(page_mapcount(page), page);
     
    @@ -1403,26 +1410,6 @@ static void __free_huge_page(struct page *page)
     	restore_reserve = PagePrivate(page);
     	ClearPagePrivate(page);
     
    -	/*
    -	 * If PagePrivate() was set on page, page allocation consumed a
    -	 * reservation.  If the page was associated with a subpool, there
    -	 * would have been a page reserved in the subpool before allocation
    -	 * via hugepage_subpool_get_pages().  Since we are 'restoring' the
    -	 * reservation, do not call hugepage_subpool_put_pages() as this will
    -	 * remove the reserved page from the subpool.
    -	 */
    -	if (!restore_reserve) {
    -		/*
    -		 * A return code of zero implies that the subpool will be
    -		 * under its minimum size if the reservation is not restored
    -		 * after page is free.  Therefore, force restore_reserve
    -		 * operation.
    -		 */
    -		if (hugepage_subpool_put_pages(spool, 1) == 0)
    -			restore_reserve = true;
    -	}
    -
    -	spin_lock(&hugetlb_lock);
     	clear_page_huge_active(page);
     	hugetlb_cgroup_uncharge_page(hstate_index(h),
     				     pages_per_huge_page(h), page);
    @@ -1446,6 +1433,28 @@ static void __free_huge_page(struct page *page)
     		enqueue_huge_page(h, page);
     	}
     	spin_unlock(&hugetlb_lock);
    +
    +	/*
    +	 * If PagePrivate() was set on page, page allocation consumed a
    +	 * reservation.  If the page was associated with a subpool, there
    +	 * would have been a page reserved in the subpool before allocation
    +	 * via hugepage_subpool_get_pages().  Since we are 'restoring' the
    +	 * reservation, do not call hugepage_subpool_put_pages() as this will
    +	 * remove the reserved page from the subpool.
    +	 */
    +	if (!restore_reserve) {
    +		/*
    +		 * A return code of zero implies that the subpool will be
    +		 * under its minimum size if the reservation is not restored
    +		 * after page is free.  Therefore, we need to add 1 to the
    +		 * global reserve count.
    +		 */
    +		if (hugepage_subpool_put_pages(spool, 1) == 0) {
    +			spin_lock(&hugetlb_lock);
    +			h->resv_huge_pages++;
    +			spin_unlock(&hugetlb_lock);
    +		}
    +	}
     }
     
     /*
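
    For completeness, the dissolve side that the new PageHuge() check
    guards against does all of its work under hugetlb_lock. Below is a
    condensed sketch of the v5.10-era dissolve_free_huge_page(), not the
    exact source (HWPoison handling and some error paths are elided, and
    the _sketch name is made up). Since update_and_free_page() runs with
    the lock held, the re-check at the top of __free_huge_page() above is
    fully serialized against it.

    /*
     * Condensed sketch of the v5.10-era dissolve_free_huge_page() for
     * reference; NOT the exact source. Everything of interest happens
     * under hugetlb_lock, which is what the PageHuge() re-check in
     * __free_huge_page() synchronizes against.
     */
    int dissolve_free_huge_page_sketch(struct page *page)
    {
            int rc = -EBUSY;

            spin_lock(&hugetlb_lock);
            if (!PageHuge(page)) {
                    rc = 0;                 /* already freed to buddy */
                    goto out;
            }

            if (!page_count(page)) {
                    struct page *head = compound_head(page);
                    struct hstate *h = page_hstate(head);
                    int nid = page_to_nid(head);

                    if (h->free_huge_pages - h->resv_huge_pages == 0)
                            goto out;       /* would dip into the reserve pool */

                    /* Remove from the free list and hand back to buddy. */
                    list_del(&head->lru);
                    h->free_huge_pages--;
                    h->free_huge_pages_node[nid]--;
                    h->max_huge_pages--;
                    update_and_free_page(h, head);
                    rc = 0;
            }
    out:
            spin_unlock(&hugetlb_lock);
            return rc;
    }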