Date: 2024-01-31
Subject: Re: [PATCH v1 9/9] mm/memory: optimize unmap/zap with PTE-mapped THP

>>>> -        folio_remove_rmap_pte(folio, page, vma);
>>>> +        folio_remove_rmap_ptes(folio, page, nr, vma);
>>>> +
>>>> +        /* Only sanity-check the first page in a batch. */
>>>>           if (unlikely(page_mapcount(page) < 0))
>>>>               print_bad_pte(vma, addr, ptent, page);
>>>
>>> Is there a case for either removing this altogether or moving it into
>>> folio_remove_rmap_ptes()? It seems odd to only check some pages.
>>>
>>
>> I really wanted to avoid another nasty loop here.
>>
>> In my thinking, for 4k folios, or when zapping individual subpages of large
>> folios, we still perform the exact same checks; only when batching do we
>> skip them. So if there is some problem, there are still ways to trigger it.
>> And such problems are rarely ever seen in practice.
>>
>> folio_remove_rmap_ptes() feels like the better place -- especially because the
>> delayed-rmap handling is effectively unchecked. But in there, we cannot
>> "print_bad_pte()".
>>
>> [background: if we had a total mapcount -- IOW, a cheap folio_mapcount() --
>> I'd check here that the total mapcount does not underflow, instead of
>> checking per subpage]
>
> All good points... perhaps extend the comment to describe how this could be
> solved in the future with a cheap total_mapcount()? Or in the commit log, if
> you prefer?
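
Something like the following, once folio_mapcount() is cheap (hypothetical
sketch, not part of this series):

	folio_remove_rmap_ptes(folio, page, nr, vma);
	/*
	 * Hypothetical: after removing nr mappings, the folio's total
	 * mapcount must not have underflowed; one folio-level check would
	 * replace the per-subpage page_mapcount() checks.
	 */
	if (unlikely(folio_mapcount(folio) < 0))
		print_bad_pte(vma, addr, ptent, page);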

I'll add more meat to the cover letter, thanks!

--
Cheers,

David / dhildenb

