Subject: Re: [PATCH 1/1] mm/madvise: enhance lazyfreeing with mTHP in madvise_free
From: David Hildenbrand
On 26.02.24 13:57, Ryan Roberts wrote:
> On 26/02/2024 08:35, Lance Yang wrote:
>> Hey Fengwei,
>>
>> Thanks for taking the time to review!
>>
>>> On Mon, Feb 26, 2024 at 10:38 AM Yin Fengwei <fengwei.yin@intel.com> wrote:
>>>> On Sun, Feb 25, 2024 at 8:32 PM Lance Yang <ioworker0@gmail.com> wrote:
>> [...]
>>>> --- a/mm/madvise.c
>>>> +++ b/mm/madvise.c
>>>> @@ -676,11 +676,43 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>>>  		 */
>>>>  		if (folio_test_large(folio)) {
>>>>  			int err;
>>>> +			unsigned long next_addr, align;
>>>>
>>>> -			if (folio_estimated_sharers(folio) != 1)
>>>> -				break;
>>>> -			if (!folio_trylock(folio))
>>>> -				break;
>>>> +			if (folio_estimated_sharers(folio) != 1 ||
>>>> +			    !folio_trylock(folio))
>>>> +				goto skip_large_folio;
>>>> +
>>>> +			align = folio_nr_pages(folio) * PAGE_SIZE;
>>>> +			next_addr = ALIGN_DOWN(addr + align, align);
>>> There is a possible corner case:
>>> if there is a COW folio associated with this folio, and the COW folio
>>> is smaller than this folio for whatever reason, this change can't
>>> handle it correctly.
>>
>> Thanks for pointing that out; that's very helpful!
>> I've made some changes. Could you please check whether this corner case is now resolved?
>>
>> Here's a diff against this patch.
>>
>> diff --git a/mm/madvise.c b/mm/madvise.c
>> index bcbf56595a2e..c7aacc9f9536 100644
>> --- a/mm/madvise.c
>> +++ b/mm/madvise.c
>> @@ -686,10 +686,12 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>  			next_addr = ALIGN_DOWN(addr + align, align);
>>
>>  			/*
>> -			 * If we mark only the subpages as lazyfree,
>> -			 * split the large folio.
>> +			 * If we mark only the subpages as lazyfree, or
>> +			 * if there is a cow folio associated with this folio,
>> +			 * then split the large folio.
>>  			 */
>> -			if (next_addr > end || next_addr - addr != align)
>> +			if (next_addr > end || next_addr - addr != align ||
>> +			    folio_total_mapcount(folio) != folio_nr_pages(folio))
>
> I still don't think this is correct. I think you were previously assuming
> that if you see a page from a large folio then the whole large folio must be
> contiguously mapped? This new check doesn't validate that assumption
> reliably; for this to be safe, you need to iterate through every PTE to
> generate a batch, as David does in folio_pte_batch().
>
> An example of when this check is insufficient; let's say you have a 4 page anon
> folio mapped contiguously in a process (total_mapcount=4). The process is forked
> (total_mapcount=8). Then each process munmaps the second 2 pages
> (total_mapcount=4). In place of the munmapped 2 pages, 2 new pages are mapped.
> Then call madvise. It's probably even easier to trigger for file-backed memory
> (I think this code path is used for both file and anon?)
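
To make the counting in that example explicit (a hypothetical 4-page anon folio):

  parent maps the 4-page folio          -> total_mapcount = 4
  fork()                                -> total_mapcount = 8
  each process munmaps 2 of the pages   -> total_mapcount = 4

At that point folio_total_mapcount(folio) == folio_nr_pages(folio) == 4,
yet neither process maps all 4 subpages, so the proposed check would
wrongly conclude that the folio is fully mapped.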

What would work here is using folio_pte_batch() to get how many PTEs are
mapped *here*, then comparing the batch size to folio_nr_pages(). If
both match, we are mapping all subpages.
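
A rough sketch of how that could slot into madvise_free_pte_range(), for
illustration only (this assumes the folio_pte_batch() signature from the
fork-batching series in mm/internal.h, with pte/ptent being the current PTE
pointer/value in the scan loop; the surrounding lock/skip handling is elided):

	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
	int max_nr = (end - addr) / PAGE_SIZE;
	int nr;

	/*
	 * Count how many PTEs, starting at the current one, map
	 * consecutive pages of this large folio.
	 */
	nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
			     fpb_flags, NULL);

	if (nr == folio_nr_pages(folio)) {
		/* All subpages are mapped here: lazyfree without splitting. */
	} else {
		/* Only some subpages are mapped here: split (or skip). */
	}

If the batch size matches folio_nr_pages(), every subpage is mapped at this
address range and the split can be avoided; any smaller batch falls back to
the split path.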

--
Cheers,

David / dhildenb

