SubjectRe: [RFC PATCH v4 4/8] hugetlbfs: catch and handle truncate racing with page faults
On 2022/7/28 3:00, Mike Kravetz wrote:
> On 07/27/22 17:20, Miaohe Lin wrote:
>> On 2022/7/7 4:23, Mike Kravetz wrote:
>>> Most hugetlb fault handling code checks for faults beyond i_size.
>>> While there are early checks in the code paths, the most difficult
>>> to handle are those discovered after taking the page table lock.
>>> At this point, we have possibly allocated a page and consumed
>>> associated reservations and possibly added the page to the page cache.
>>>
>>> When discovering a fault beyond i_size, be sure to:
>>> - Remove the page from page cache, else it will sit there until the
>>> file is removed.
>>> - Do not restore any reservation for the page consumed. Otherwise
>>> there will be an outstanding reservation for an offset beyond the
>>> end of file.
>>>
>>> The 'truncation' code in remove_inode_hugepages must deal with fault
>>> code potentially removing a page/folio from the cache after the page was
>>> returned by filemap_get_folios and before locking the page. This can be
>>> discovered by a change in folio_mapping() after taking folio lock. In
>>> addition, this code must deal with fault code potentially consuming
>>> and returning reservations. To synchronize this, remove_inode_hugepages
>>> will now take the fault mutex for ALL indices in the hole or truncated
>>> range. In this way, it KNOWS fault code has finished with the page/index
>>> OR fault code will see the updated file size.
>>>
>>> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
>>> ---
>>
>> <snip>
>>
>>> @@ -5606,8 +5610,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>>>
>>> ptl = huge_pte_lock(h, mm, ptep);
>>> size = i_size_read(mapping->host) >> huge_page_shift(h);
>>> - if (idx >= size)
>>> + if (idx >= size) {
>>> + beyond_i_size = true;
>>
>> Thanks for your patch. There is one question:
>>
>> Since races between hugetlb page faults and truncate are guarded by hugetlb_fault_mutex,
>> do we really need to check it again after taking the page table lock?
>>
>
> Well, the fault mutex can only guard a single hugetlb page. The fault mutex
> is actually an array/table of mutexes hashed by mapping address and file index.
> So, during truncation we take the mutex for each page as they are
> unmapped and removed. So, the fault mutex only synchronizes operations
> on one specific page. The idea with this patch is to coordinate the fault
> code and truncate code when operating on the same page.
>
> In addition, changing the file size happens early in the truncate process
> before taking any locks/mutexes.
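
Just to make sure I understand: the per-page serialization you describe is the hashed fault mutex
table, roughly like the below (untested sketch of the existing pattern, where 'mapping' and 'idx'
stand for the file's address_space and the hugetlb page index):

	u32 hash;

	/*
	 * Hash on the mapping and file index so that only operations on the
	 * same page contend; truncation takes the same mutex for each index
	 * it removes.
	 */
	hash = hugetlb_fault_mutex_hash(mapping, idx);
	mutex_lock(&hugetlb_fault_mutex_table[hash]);

	/* ... fault handling or per-page truncate work for 'idx' ... */

	mutex_unlock(&hugetlb_fault_mutex_table[hash]);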

I wonder whether we could live with that race to keep the code simpler. If the file size changes
after hugetlb_fault has checked i_size but before it takes the page table lock, wouldn't the
truncate code remove the hugetlb page from the page cache for us once hugetlb_fault finishes, even
if we do not roll back after rechecking i_size under the page table lock?

In short: if hugetlb_fault sees a truncated inode, back out early; if not, let the truncate code do
its work. That way we would not need to complicate the already complicated error path. Or am I
missing something?

Thanks.

>
