Date: 2019-04-18
Subject: Re: [PATCH v3 17/28] userfaultfd: wp: support swap and page migration
On Wed, Mar 20, 2019 at 10:06:31AM +0800, Peter Xu wrote:
> For both swap and page migration, we use bit 2 of the entry to
> identify whether this entry is uffd write-protected. It plays a role
> similar to the existing soft dirty bit in swap entries, but only for
> keeping the uffd-wp tracking for a specific PTE/PMD.
>
> Something special here is that when we want to recover the uffd-wp bit
> from a swap/migration entry to the PTE bit, we also need to take care
> of the _PAGE_RW bit and make sure it's cleared; otherwise, even with
> the _PAGE_UFFD_WP bit set we can't trap the write at all.
>
> Note that this patch removed two lines from "userfaultfd: wp: hook
> userfault handler to write protection fault" where we try to remove the
> VM_FAULT_WRITE from vmf->flags when uffd-wp is set for the VMA. This
> patch will still keep the write flag there.
>
> Reviewed-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>

There are some missing things, see below.

[...]

> diff --git a/mm/memory.c b/mm/memory.c
> index 6405d56debee..c3d57fa890f2 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -736,6 +736,8 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> pte = swp_entry_to_pte(entry);
> if (pte_swp_soft_dirty(*src_pte))
> pte = pte_swp_mksoft_dirty(pte);
> + if (pte_swp_uffd_wp(*src_pte))
> + pte = pte_swp_mkuffd_wp(pte);
> set_pte_at(src_mm, addr, src_pte, pte);
> }
> } else if (is_device_private_entry(entry)) {

You need to handle the is_device_private_entry() case like the migration
case too.
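
For instance, something along these lines in the is_device_private_entry()
branch of copy_one_pte() (an untested sketch; it mirrors the migration case
above, where the entry is only rewritten when a writable device-private
entry is downgraded for a COW mapping, so that is where the bit would be
lost):

	} else if (is_device_private_entry(entry)) {
		...
		if (is_write_device_private_entry(entry) &&
		    is_cow_mapping(vm_flags)) {
			/*
			 * Downgrading rebuilds the swap PTE, so the
			 * uffd-wp bit has to be carried over by hand
			 * or it is lost across fork().
			 */
			make_device_private_entry_read(&entry);
			pte = swp_entry_to_pte(entry);
			if (pte_swp_uffd_wp(*src_pte))
				pte = pte_swp_mkuffd_wp(pte);
			set_pte_at(src_mm, addr, src_pte, pte);
		}
	}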



> @@ -2825,6 +2827,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> flush_icache_page(vma, page);
> if (pte_swp_soft_dirty(vmf->orig_pte))
> pte = pte_mksoft_dirty(pte);
> + if (pte_swp_uffd_wp(vmf->orig_pte)) {
> + pte = pte_mkuffd_wp(pte);
> + pte = pte_wrprotect(pte);
> + }
> set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
> arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
> vmf->orig_pte = pte;
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 181f5d2718a9..72cde187d4a1 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -241,6 +241,8 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
> entry = pte_to_swp_entry(*pvmw.pte);
> if (is_write_migration_entry(entry))
> pte = maybe_mkwrite(pte, vma);
> + else if (pte_swp_uffd_wp(*pvmw.pte))
> + pte = pte_mkuffd_wp(pte);
>
> if (unlikely(is_zone_device_page(new))) {
> if (is_device_private_page(new)) {

You need to handle the is_device_private_page() case too, i.e. mark its
device-private swap entry as uffd-wp.
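
Roughly like this in the is_zone_device_page() branch (untested sketch; the
device-private entry replaces the pte built just above it, so the uffd-wp
bit from the migration entry has to be re-applied explicitly):

	if (unlikely(is_zone_device_page(new))) {
		if (is_device_private_page(new)) {
			entry = make_device_private_entry(new, pte_write(pte));
			pte = swp_entry_to_pte(entry);
			/* carry the uffd-wp bit into the new swap entry */
			if (pte_swp_uffd_wp(*pvmw.pte))
				pte = pte_swp_mkuffd_wp(pte);
		}
	}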

> @@ -2301,6 +2303,8 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
> swp_pte = swp_entry_to_pte(entry);
> if (pte_soft_dirty(pte))
> swp_pte = pte_swp_mksoft_dirty(swp_pte);
> + if (pte_uffd_wp(pte))
> + swp_pte = pte_swp_mkuffd_wp(swp_pte);
> set_pte_at(mm, addr, ptep, swp_pte);
>
> /*
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 855dddb07ff2..96c0f521099d 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -196,6 +196,8 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
> newpte = swp_entry_to_pte(entry);
> if (pte_swp_soft_dirty(oldpte))
> newpte = pte_swp_mksoft_dirty(newpte);
> + if (pte_swp_uffd_wp(oldpte))
> + newpte = pte_swp_mkuffd_wp(newpte);
> set_pte_at(mm, addr, pte, newpte);
>
> pages++;

You also need to handle the is_write_device_private_entry() case just below
that chunk.
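
That is the branch that downgrades a writable device-private entry; it
rebuilds the swap PTE just like the migration case above it, so it needs the
same treatment (untested sketch):

	if (is_write_device_private_entry(entry)) {
		pte_t newpte;

		make_device_private_entry_read(&entry);
		newpte = swp_entry_to_pte(entry);
		/* propagate uffd-wp the same way as for migration entries */
		if (pte_swp_uffd_wp(oldpte))
			newpte = pte_swp_mkuffd_wp(newpte);
		set_pte_at(mm, addr, pte, newpte);

		pages++;
	}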

Cheers,
Jérôme
