    Subject: Re: [PATCH v3 07/12] mm/gup: track FOLL_PIN pages
    From: John Hubbard
    Date: 2020-02-03
    On 2/3/20 5:40 AM, Kirill A. Shutemov wrote:
    > On Fri, Jan 31, 2020 at 07:40:24PM -0800, John Hubbard wrote:
    >> @@ -4405,7 +4392,13 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
    >> same_page:
    >> 		if (pages) {
    >> 			pages[i] = mem_map_offset(page, pfn_offset);
    >> -			get_page(pages[i]);
    >> +			if (!try_grab_page(pages[i], flags)) {
    >> +				spin_unlock(ptl);
    >> +				remainder = 0;
    >> +				err = -ENOMEM;
    >> +				WARN_ON_ONCE(1);
    >
    > The WARN_ON_ONCE deserve a comment. And I guess you can put it into 'if'
    > condition.

    OK, I've changed it to the following, which I *think* is an accurate comment, but
    I'm still a bit new to huge pages:

    if (pages) {
    	pages[i] = mem_map_offset(page, pfn_offset);
    	/*
    	 * try_grab_page() should always succeed here, because:
    	 * a) we hold the ptl lock, and b) we've just checked
    	 * that the huge page is present in the page tables. If
    	 * the huge page is present, then the tail pages must
    	 * also be present. The ptl prevents the head page and
    	 * tail pages from being rearranged in any way. So this
    	 * page must be available at this point, unless the page
    	 * refcount overflowed:
    	 */
    	if (WARN_ON_ONCE(!try_grab_page(pages[i], flags))) {
    		spin_unlock(ptl);
    		remainder = 0;
    		err = -ENOMEM;
    		break;
    	}
    }
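
    Just to spell out the assumption behind that comment: the contract I'm
    relying on from try_grab_page() is roughly the sketch below. This is only
    an illustration of my understanding for discussion (the helper name
    try_grab_page_sketch is made up; the real function is the one added by
    this patch), not the patch code itself:

    #include <linux/mm.h>	/* struct page, FOLL_*, page refcount helpers */

    /*
     * Simplified sketch of the caller-visible contract:
     *  - callers pass exactly one of FOLL_GET or FOLL_PIN;
     *  - on success, a reference is taken on the (head) page and true is
     *    returned;
     *  - false is returned only when a reference cannot safely be taken,
     *    i.e. the refcount is zero or has overflowed. That's why the
     *    hugetlb callers above treat failure as a WARN + -ENOMEM case.
     */
    static inline bool try_grab_page_sketch(struct page *page, unsigned int flags)
    {
    	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
    	if (WARN_ON_ONCE((flags & (FOLL_GET | FOLL_PIN)) ==
    			 (FOLL_GET | FOLL_PIN)))
    		return false;

    	if (flags & FOLL_GET)
    		/* Plain reference; fails if the refcount is 0 or overflowed. */
    		return try_get_page(page);

    	if (flags & FOLL_PIN) {
    		page = compound_head(page);
    		if (WARN_ON_ONCE(page_ref_count(page) <= 0))
    			return false;
    		/* FOLL_PIN is recorded via a large refcount bias. */
    		page_ref_add(page, GUP_PIN_COUNTING_BIAS);
    	}
    	return true;
    }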


    >
    >> +				break;
    >> +			}
    >> 		}
    >>
    >> 		if (vmas)
    >> @@ -4965,6 +4958,12 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
    >> 	struct page *page = NULL;
    >> 	spinlock_t *ptl;
    >> 	pte_t pte;
    >> +
    >> +	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
    >> +	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
    >> +			 (FOLL_PIN | FOLL_GET)))
    >> +		return NULL;
    >> +
    >> retry:
    >> 	ptl = pmd_lockptr(mm, pmd);
    >> 	spin_lock(ptl);
    >> @@ -4977,8 +4976,11 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
    >> 	pte = huge_ptep_get((pte_t *)pmd);
    >> 	if (pte_present(pte)) {
    >> 		page = pmd_page(*pmd) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
    >> -		if (flags & FOLL_GET)
    >> -			get_page(page);
    >> +		if (unlikely(!try_grab_page(page, flags))) {
    >> +			WARN_ON_ONCE(1);
    >
    > Ditto.


    OK, I've added a comment similar to the one above. Now it looks like this:

    if (pte_present(pte)) {
    	page = pmd_page(*pmd) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
    	/*
    	 * try_grab_page() should always succeed here, because: a) we
    	 * hold the pmd (ptl) lock, and b) we've just checked that the
    	 * huge pmd (head) page is present in the page tables. The ptl
    	 * prevents the head page and tail pages from being rearranged
    	 * in any way. So this page must be available at this point,
    	 * unless the page refcount overflowed:
    	 */
    	if (WARN_ON_ONCE(!try_grab_page(page, flags))) {
    		page = NULL;
    		goto out;
    	}
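
    And in case the FOLL_PIN vs FOLL_GET mutual-exclusion check in the hunk
    above looks odd at first glance: the two flags correspond to two different
    acquire/release pairs at the gup API level, so a single call can only do
    one of them. A purely hypothetical caller (gup_vs_pup_example is made up
    here; it just uses the pin_user_pages_fast()/unpin_user_pages() calls from
    earlier in this series) would look roughly like:

    #include <linux/mm.h>	/* get_user_pages_fast(), pin_user_pages_fast() */
    #include <linux/slab.h>	/* kcalloc()/kfree() */

    /*
     * Illustration only: the FOLL_GET flavor is released with put_page(),
     * while the FOLL_PIN flavor is released with unpin_user_pages(). Mixing
     * the two flags in one call would make the release side ambiguous.
     */
    static int gup_vs_pup_example(unsigned long uaddr, int nr)
    {
    	struct page **pages;
    	int i, ret;

    	pages = kcalloc(nr, sizeof(*pages), GFP_KERNEL);
    	if (!pages)
    		return -ENOMEM;

    	/* FOLL_GET flavor: plain page references. */
    	ret = get_user_pages_fast(uaddr, nr, 0, pages);
    	for (i = 0; i < ret; i++)
    		put_page(pages[i]);

    	/* FOLL_PIN flavor: pinned pages, e.g. for DMA. */
    	ret = pin_user_pages_fast(uaddr, nr, 0, pages);
    	if (ret > 0)
    		unpin_user_pages(pages, ret);

    	kfree(pages);
    	return ret < 0 ? ret : 0;
    }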



    thanks,
    --
    John Hubbard
    NVIDIA

    >
    >> +			page = NULL;
    >> +			goto out;
    >> +		}
    >> 	} else {
    >> 		if (is_hugetlb_entry_migration(pte)) {
    >> 			spin_unlock(ptl);
    >
