Date:    Tue, 27 Sep 2022
From:    Chih-En Lin <shiyn.lin@gmail.com>
Subject: Re: [RFC PATCH v2 7/9] mm: Add the break COW PTE handler

On Tue, Sep 27, 2022 at 06:15:34PM +0000, Nadav Amit wrote:
> On Sep 27, 2022, at 9:29 AM, Chih-En Lin <shiyn.lin@gmail.com> wrote:
>
> > To handle the COW PTE with write fault, introduce the helper function
> > handle_cow_pte(). The function provides two behaviors. One is breaking
> > COW by decreasing the refcount, pgtables_bytes, and RSS. Another is
> > copying all the information in the shared PTE table by using
> > copy_pte_page() with a wrapper.
> >
> > Also, add wrapper functions to help us identify COWed or
> > COW-available PTE tables.
> >
>
> [ snip ]
>
> > +static inline int copy_cow_pte_range(struct vm_area_struct *vma,
> > +				     pmd_t *dst_pmd, pmd_t *src_pmd,
> > +				     unsigned long start, unsigned long end)
> > +{
> > +	struct mm_struct *mm = vma->vm_mm;
> > +	struct mmu_notifier_range range;
> > +	int ret;
> > +	bool is_cow;
> > +
> > +	is_cow = is_cow_mapping(vma->vm_flags);
> > +	if (is_cow) {
> > +		mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
> > +					0, vma, mm, start, end);
> > +		mmu_notifier_invalidate_range_start(&range);
> > +		mmap_assert_write_locked(mm);
> > +		raw_write_seqcount_begin(&mm->write_protect_seq);
> > +	}
> > +
> > +	ret = copy_pte_range(vma, vma, dst_pmd, src_pmd, start, end);
> > +
> > +	if (is_cow) {
> > +		raw_write_seqcount_end(&mm->write_protect_seq);
> > +		mmu_notifier_invalidate_range_end(&range);
>
> Usually, I would expect mmu-notifiers and TLB flushes to be initiated at the
> same point in the code. Presumably you changed protection, so you do need a
> TLB flush, right? Is it done elsewhere?

You're right.
I will add TLB flushes here.
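Something like this (an untested sketch; assuming a flush_tlb_range()
over the copied range, paired with the notifier end, is sufficient):

	if (is_cow) {
		raw_write_seqcount_end(&mm->write_protect_seq);
		/* Flush stale entries for the range whose protection changed. */
		flush_tlb_range(vma, start, end);
		mmu_notifier_invalidate_range_end(&range);
	}
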
Thanks.

> > +	}
> > +
> > +	return ret;
> > +}
> > +
> > +/*
> > + * Break COW PTE, two states here:
> > + * - After fork : [parent, rss=1, ref=2, write=NO , owner=parent]
> > + *                to [parent, rss=1, ref=1, write=YES, owner=NULL ]
> > + *                COW PTE becomes [ref=1, write=NO , owner=NULL ]
> > + *                [child , rss=0, ref=2, write=NO , owner=parent]
> > + *                to [child , rss=1, ref=1, write=YES, owner=NULL ]
> > + *                COW PTE becomes [ref=1, write=NO , owner=parent]
> > + * NOTE
> > + * - Copy the COW PTE to a new PTE table.
> > + * - Clear the owner of the COW PTE and set the PMD entry writable when
> > + *   we are the owner.
> > + * - Increase RSS if we are not the owner.
> > + */
> > +static int break_cow_pte(struct vm_area_struct *vma, pmd_t *pmd,
> > +			 unsigned long addr)
> > +{
> > +	struct mm_struct *mm = vma->vm_mm;
> > +	unsigned long pte_start, pte_end;
> > +	unsigned long start, end;
> > +	struct vm_area_struct *prev = vma->vm_prev;
> > +	struct vm_area_struct *next = vma->vm_next;
> > +	pmd_t cowed_entry = *pmd;
> > +
> > +	if (cow_pte_count(&cowed_entry) == 1) {
> > +		cow_pte_fallback(vma, pmd, addr);
> > +		return 1;
> > +	}
> > +
> > +	pte_start = start = addr & PMD_MASK;
> > +	pte_end = end = (addr + PMD_SIZE) & PMD_MASK;
> > +
> > +	pmd_clear(pmd);
> > +	/*
> > +	 * If the vma does not cover the entire address range of the PTE
> > +	 * table, check the previous and next vmas.
> > +	 */
> > +	if (start < vma->vm_start && prev) {
> > +		/* Part of the address range is covered by the previous vma. */
> > +		if (start < prev->vm_end)
> > +			copy_cow_pte_range(prev, pmd, &cowed_entry,
> > +					   start, prev->vm_end);
> > +		start = vma->vm_start;
> > +	}
> > +	if (end > vma->vm_end && next) {
> > +		/* Part of the address range is covered by the next vma. */
> > +		if (end > next->vm_start)
> > +			copy_cow_pte_range(next, pmd, &cowed_entry,
> > +					   next->vm_start, end);
> > +		end = vma->vm_end;
> > +	}
> > +	if (copy_cow_pte_range(vma, pmd, &cowed_entry, start, end))
> > +		return -ENOMEM;
> > +
> > +	/*
> > +	 * If we are the owner, clear the ownership. To keep the RSS state
> > +	 * and page table bytes correct, they need to be decreased. Also,
> > +	 * handle the address range issue here.
> > +	 */
> > +	if (cow_pte_owner_is_same(&cowed_entry, pmd)) {
> > +		set_cow_pte_owner(&cowed_entry, NULL);
>
> Presumably there is some assumption on atomicity here. Otherwise, two
> threads can run the following code, which is wrong, no? Yet, I do not see
> anything that provides such atomicity.

There may be multiple processes accessing this. But for threads, I
assume they need to hold the mmap_lock. Maybe I need to add the assert
here too.
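
Something like this (a sketch; it only documents the locking assumption
rather than adding new synchronization):

	/* Ownership transfer is serialized by mmap_lock. */
	mmap_assert_write_locked(mm);
	if (cow_pte_owner_is_same(&cowed_entry, pmd)) {
		set_cow_pte_owner(&cowed_entry, NULL);
		...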

>
> > +		if (pte_start < vma->vm_start && prev &&
> > +		    pte_start < prev->vm_end)
> > +			cow_pte_rss(mm, vma->vm_prev, pmd,
> > +				    pte_start, prev->vm_end, false /* dec */);
> > +		if (pte_end > vma->vm_end && next &&
> > +		    pte_end > next->vm_start)
> > +			cow_pte_rss(mm, vma->vm_next, pmd,
> > +				    next->vm_start, pte_end, false /* dec */);
> > +		cow_pte_rss(mm, vma, pmd, start, end, false /* dec */);
> > +		mm_dec_nr_ptes(mm);
> > +	}
> > +
> > +	/* Already handled it; don't reuse the cowed table. */
> > +	pmd_put_pte(vma, &cowed_entry, addr, false);
> > +
> > +	VM_BUG_ON(cow_pte_count(pmd) != 1);
>
> Don’t use VM_BUG_ON().

Sure. I will change it to VM_WARN_ON().
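That is:

-	VM_BUG_ON(cow_pte_count(pmd) != 1);
+	VM_WARN_ON(cow_pte_count(pmd) != 1);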

Thanks,
Chih-En Lin
