Subject: [PATCH v3 02/10] mm/memory: handle !page case in zap_present_pte() separately
We don't need up-to-date accessed/dirty bits, so in theory we could
replace ptep_get_and_clear_full() with an optimized ptep_clear_full()
function. For now, let's simply rely on the pte value already provided
to this function.
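
For illustration only: such a ptep_clear_full() does not exist in the
tree. Sketched from the paragraph above, it could skip the atomic
read-back that ptep_get_and_clear_full() performs, since the caller
already holds the PTE value in ptent:

	/* Hypothetical helper sketched from this commit message; not part of this series. */
	static inline void ptep_clear_full(struct mm_struct *mm,
			unsigned long addr, pte_t *ptep, int full)
	{
		/* Clear the entry; we don't need accessed/dirty state back. */
		pte_clear(mm, addr, ptep);
	}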

Further, there is no scenario where we would have to insert uffd-wp
markers when zapping something that is not a normal page (i.e., the
shared zeropage). Add a sanity check to make sure this remains true.
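
Concretely, the sanity check added in the !page path of
zap_present_pte() below is just:

	VM_WARN_ON_ONCE(userfaultfd_wp(vma));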

should_zap_folio() no longer has to handle NULL pointers. This change
replaces two of the three "!page"/"!folio" checks with a single "!page"
one.
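
For reference, should_zap_folio() after this patch, reconstructed from
the first hunk below, reduces to:

	static inline bool should_zap_folio(struct zap_details *details,
					    struct folio *folio)
	{
		if (should_zap_cows(details))
			return true;

		/* Otherwise we should only zap non-anon folios */
		return !folio_test_anon(folio);
	}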

Note that arch_check_zapped_pte() on x86-64 checks the HW dirty bit to
detect shadow stack entries. But for shadow stack entries, the HW dirty
bit (in combination with non-writable PTEs) is set by software. So for
the arch_check_zapped_pte() check, we don't have to sync against HW
setting the HW dirty bit concurrently; it is always set.
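
To illustrate the encoding the check relies on (a sketch only; the
helper name is made up): a shadow stack PTE is Write=0 + Dirty=1, and
that dirty bit is set by software, so a test like the following never
races with hardware:

	/* Hypothetical predicate, for illustration only. */
	static inline bool pte_is_shadow_stack(pte_t pte)
	{
		/* x86-64 shadow stack encoding: Dirty=1 on a non-writable PTE. */
		return pte_dirty(pte) && !pte_write(pte);
	}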

    Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
    Signed-off-by: David Hildenbrand <david@redhat.com>
    ---
    mm/memory.c | 22 +++++++++++-----------
    1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 5b0dc33133a6..4da6923709b2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1497,10 +1497,6 @@ static inline bool should_zap_folio(struct zap_details *details,
 	if (should_zap_cows(details))
 		return true;
 
-	/* E.g. the caller passes NULL for the case of a zero folio */
-	if (!folio)
-		return true;
-
 	/* Otherwise we should only zap non-anon folios */
 	return !folio_test_anon(folio);
 }
@@ -1538,24 +1534,28 @@ static inline void zap_present_pte(struct mmu_gather *tlb,
 		int *rss, bool *force_flush, bool *force_break)
 {
 	struct mm_struct *mm = tlb->mm;
-	struct folio *folio = NULL;
 	bool delay_rmap = false;
+	struct folio *folio;
 	struct page *page;
 
 	page = vm_normal_page(vma, addr, ptent);
-	if (page)
-		folio = page_folio(page);
+	if (!page) {
+		/* We don't need up-to-date accessed/dirty bits. */
+		ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
+		arch_check_zapped_pte(vma, ptent);
+		tlb_remove_tlb_entry(tlb, pte, addr);
+		VM_WARN_ON_ONCE(userfaultfd_wp(vma));
+		ksm_might_unmap_zero_page(mm, ptent);
+		return;
+	}
 
+	folio = page_folio(page);
 	if (unlikely(!should_zap_folio(details, folio)))
 		return;
 	ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
 	arch_check_zapped_pte(vma, ptent);
 	tlb_remove_tlb_entry(tlb, pte, addr);
 	zap_install_uffd_wp_if_needed(vma, addr, pte, details, ptent);
-	if (unlikely(!page)) {
-		ksm_might_unmap_zero_page(mm, ptent);
-		return;
-	}
 
 	if (!folio_test_anon(folio)) {
 		if (pte_dirty(ptent)) {
    --
    2.43.0
