From: huang ying <huang.ying.caritas@gmail.com>
Date: Tue, 16 Aug 2022
Subject: Re: [PATCH 1/2] mm/migrate_device.c: Copy pte dirty bit to page
On Tue, Aug 16, 2022 at 10:33 AM Alistair Popple <apopple@nvidia.com> wrote:
>
>
> huang ying <huang.ying.caritas@gmail.com> writes:
>
> > Hi, Alistair,
> >
> > On Fri, Aug 12, 2022 at 1:23 PM Alistair Popple <apopple@nvidia.com> wrote:

[snip]

>
> > And I don't know why we use ptep_get_and_clear() to clear the PTE
> > if (!anon_exclusive). Why don't we need to flush the TLB?
>
> We do the TLB flush at the end if anything was modified:
>
> 	/* Only flush the TLB if we actually modified any entries */
> 	if (unmapped)
> 		flush_tlb_range(walk->vma, start, end);
>
> Obviously I don't think that will work correctly now given we have to
> read the dirty bits and clear the PTE atomically. I assume it was
> originally written this way for some sort of performance reason.

Thanks for pointing this out. If there are parallel page table
operations such as mprotect() or munmap(), the delayed TLB flushing
mechanism here may be problematic. Please take a look at the comments
of flush_tlb_batched_pending() and the TLB flush batching
implementation in try_to_unmap_one(). We may need to flush the TLB
with the page table lock held, or use a mechanism similar to the one
in try_to_unmap_one(); rough sketches of both options follow.
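
For the first option, something like the following untested sketch
(it assumes the walk currently drops the PTL via pte_unmap_unlock()
before calling flush_tlb_range()):

	/*
	 * Untested sketch: flush before dropping the PTE lock, so a
	 * parallel mprotect()/munmap() that serializes on the same
	 * lock cannot run while other CPUs still hold stale (possibly
	 * writable) TLB entries for the cleared PTEs.
	 */
	/* Only flush the TLB if we actually modified any entries */
	if (unmapped)
		flush_tlb_range(walk->vma, start, end);

	pte_unmap_unlock(ptep - 1, ptl);

For the second option, try_to_unmap_one() does roughly the following
(simplified from mm/rmap.c; the variable names here are illustrative).
The unmap side records a pending flush instead of flushing
immediately, and parallel page table operations are expected to call
flush_tlb_batched_pending() under the page table lock before they
depend on the TLB state:

	if (should_defer_flush(mm, flags)) {
		/* Clear the PTE and record that a TLB flush is owed. */
		pteval = ptep_get_and_clear(mm, addr, ptep);
		set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
	} else {
		/* Clear the PTE and flush the TLB immediately. */
		pteval = ptep_clear_flush(vma, addr, ptep);
	}

Either way, the dirty bit read and the PTE clear stay atomic
(ptep_get_and_clear() and ptep_clear_flush() both return the old
PTE), which is what the dirty-bit copy in this patch needs.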

Best Regards,
Huang, Ying
