    Subject: [PATCH v2 02/24] mm: Clear vmf->pte after pte_unmap_same() returns
    pte_unmap_same() will always unmap the pte pointer.  After the unmap, vmf->pte
    will not be valid any more. We should clear it.

    This was safe only because nothing accesses vmf->pte after pte_unmap_same()
    returns: the only caller of pte_unmap_same() (so far) is do_swap_page(),
    where vmf->pte will in most cases be overwritten again very soon.

    Follow-up patches will use pte_unmap_same() in other places, where vmf->pte
    will not always be overwritten afterwards.  Clearing the pointer lets us
    call functions like finish_fault(), which conditionally unmaps the pte by
    checking vmf->pte first, and it lets alloc_set_pte() know that it still
    needs to allocate a new pte even after pte_unmap_same() has been called.
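
    For illustration, here is a hedged sketch of the caller pattern this
    enables.  The helper below is hypothetical and not part of this patch (the
    real users only appear in later patches); it just shows how a NULL check
    on vmf->pte can replace extra "is the pte still mapped?" bookkeeping:

        static void example_fault_cleanup(struct vm_fault *vmf)
        {
                /*
                 * pte_unmap_same() clears vmf->pte after unmapping it, so
                 * later code can simply test the pointer: only unmap and
                 * unlock when the pte is still mapped.
                 */
                if (vmf->pte)
                        pte_unmap_unlock(vmf->pte, vmf->ptl);
        }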

    Since we now need to modify vmf->pte, pass vmf directly into
    pte_unmap_same(); this also avoids the long parameter list.
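
    With that, the only existing call site in do_swap_page() collapses to the
    single-argument form (this simply restates the second hunk below):

        if (!pte_unmap_same(vmf))
                goto out;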

    Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
    Signed-off-by: Peter Xu <peterx@redhat.com>
    ---
    mm/memory.c | 13 +++++++------
    1 file changed, 7 insertions(+), 6 deletions(-)

    diff --git a/mm/memory.c b/mm/memory.c
    index ffda19542bc6d..955a0bb6b855c 100644
    --- a/mm/memory.c
    +++ b/mm/memory.c
    @@ -2618,19 +2618,20 @@ EXPORT_SYMBOL_GPL(apply_to_existing_page_range);
      * proceeding (but do_wp_page is only called after already making such a check;
      * and do_anonymous_page can safely check later on).
      */
    -static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
    -				 pte_t *page_table, pte_t orig_pte)
    +static inline int pte_unmap_same(struct vm_fault *vmf)
     {
     	int same = 1;
     #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPTION)
     	if (sizeof(pte_t) > sizeof(unsigned long)) {
    -		spinlock_t *ptl = pte_lockptr(mm, pmd);
    +		spinlock_t *ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
     		spin_lock(ptl);
    -		same = pte_same(*page_table, orig_pte);
    +		same = pte_same(*vmf->pte, vmf->orig_pte);
     		spin_unlock(ptl);
     	}
     #endif
    -	pte_unmap(page_table);
    +	pte_unmap(vmf->pte);
    +	/* After unmap of pte, the pointer is invalid now - clear it. */
    +	vmf->pte = NULL;
     	return same;
     }

    @@ -3319,7 +3320,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
     	vm_fault_t ret = 0;
     	void *shadow = NULL;
     
    -	if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
    +	if (!pte_unmap_same(vmf))
     		goto out;
     
     	entry = pte_to_swp_entry(vmf->orig_pte);
    --
    2.26.2