Subject: Re: [RFC PATCH 04/26] hugetlb: make huge_pte_lockptr take an explicit shift argument.
From: Muchun Song <songmuchun@bytedance.com>
Date: Fri, 1 Jul 2022


> On Jul 1, 2022, at 00:23, James Houghton <jthoughton@google.com> wrote:
>
> On Thu, Jun 30, 2022 at 2:35 AM Muchun Song <songmuchun@bytedance.com> wrote:
>>
>> On Wed, Jun 29, 2022 at 03:24:45PM -0700, Mike Kravetz wrote:
>>> On 06/29/22 14:39, James Houghton wrote:
>>>> On Wed, Jun 29, 2022 at 2:04 PM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>>>>>
>>>>> On 06/29/22 14:09, Muchun Song wrote:
>>>>>> On Mon, Jun 27, 2022 at 01:51:53PM -0700, Mike Kravetz wrote:
>>>>>>> On 06/24/22 17:36, James Houghton wrote:
>>>>>>>> This is needed to handle PTL locking with high-granularity mapping. We
>>>>>>>> won't always be using the PMD-level PTL even if we're using the 2M
>>>>>>>> hugepage hstate. It's possible that we're dealing with 4K PTEs, in which
>>>>>>>> case, we need to lock the PTL for the 4K PTE.
>>>>>>>
>>>>>>> I'm not really sure why this would be required.
>>>>>>> Why not use the PMD-level lock for 4K PTEs? Seems that would scale
>>>>>>> better, with less contention than the coarser mm lock.
>>>>>>>
>>>>>>
>>>>>> Your words make me think of another question unrelated to this patch.
>>>>>> We __know__ that arm64 supports contiguous PTE HugeTLB. huge_pte_lockptr()
>>>>>> does not consider this case, so those HugeTLB pages contend on the
>>>>>> mm-wide page_table_lock. Seems we should optimize this case. Something like:
>>>>>>
>>>>>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>>>>>> index 0d790fa3f297..68a1e071bfc0 100644
>>>>>> --- a/include/linux/hugetlb.h
>>>>>> +++ b/include/linux/hugetlb.h
>>>>>> @@ -893,7 +893,7 @@ static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
>>>>>>  static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
>>>>>>  					    struct mm_struct *mm, pte_t *pte)
>>>>>>  {
>>>>>> -	if (huge_page_size(h) == PMD_SIZE)
>>>>>> +	if (huge_page_size(h) <= PMD_SIZE)
>>>>>>  		return pmd_lockptr(mm, (pmd_t *) pte);
>>>>>>  	VM_BUG_ON(huge_page_size(h) == PAGE_SIZE);
>>>>>>  	return &mm->page_table_lock;
>>>>>>
>>>>>> I did not check if elsewhere needs to be changed as well. Just a
>>>>>> preliminary thought.
>>>>
>>>> I'm not sure if this works. If hugetlb_pte_size(hpte) is PAGE_SIZE,
>>>> then `hpte.ptep` will point to a pte_t, not a pmd_t -- I assume that
>>>> breaks things. So I think, when doing a HugeTLB PT walk down to
>>>> PAGE_SIZE, we need to separately keep track of the location of the PMD
>>>> so that we can use it to get the PMD lock.
>>>
>>> I assume Muchun was talking about changing this in current code (before
>>> your changes) where huge_page_size(h) can not be PAGE_SIZE.
>>>
>>
>> Yes, that's what I meant.
>
> Right -- but I think my point still stands. If `huge_page_size(h)` is
> CONT_PTE_SIZE, then the `pte_t *` passed to `huge_pte_lockptr` will
> *actually* point to a `pte_t` and not a `pmd_t` (I'm pretty sure the

Right. It is a pte in this case.

> distinction is important). So it seems like we need to separately keep
> track of the real pmd_t that is being used in the CONT_PTE_SIZE case

If we want to find the pmd_t from a pte_t, I think we can introduce a new
field in struct page, just like the thread [1] does.

[1] https://lore.kernel.org/lkml/20211110105428.32458-7-zhengqi.arch@bytedance.com/
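
Roughly, the idea might look like the sketch below (hypothetical: the
->pmd back-pointer is an assumed new field in struct page, recorded when a
page is installed as a PTE page table; [1] may structure this differently):

static inline pmd_t *pte_table_pmd(pte_t *pte)
{
	/* hypothetical field, filled in when the PTE table is installed */
	return virt_to_page(pte)->pmd;
}

static inline spinlock_t *cont_pte_lockptr(struct mm_struct *mm, pte_t *pte)
{
	/* use the split PMD lock of the pmd that maps this PTE table */
	return pmd_lockptr(mm, pte_table_pmd(pte));
}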

> (and therefore, when considering HGM, the PAGE_SIZE case).
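
To spell out why tracking the real pmd_t matters (my reading of the
generic code, as a sketch rather than anything authoritative):

	/*
	 * With split PMD locks, pmd_lockptr() derives the lock from the
	 * page that contains the entry:
	 *
	 *	pmd_lockptr(mm, pmd) -> ptlock_ptr(pmd_to_page(pmd))
	 *
	 * For a PMD-sized mapping, "pte" really points into a PMD page,
	 * so every caller computes the same lock.  For a CONT_PTE (or,
	 * with HGM, PAGE_SIZE) mapping, "pte" points into a PTE table
	 * page, so (pmd_t *)pte would resolve to that PTE page's lock --
	 * not the lock other paths take for the same mapping.
	 */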
>
> However, we *can* make this optimization for CONT_PMD_SIZE (maybe this
> is what you originally meant, Muchun?), so instead of
> `huge_page_size(h) == PMD_SIZE`, we could do `huge_page_size(h) >=
> PMD_SIZE && huge_page_size(h) < PUD_SIZE`.

Right. It is a good start to optimize the CONT_PMD_SIZE case.
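
A minimal sketch of that check against the current huge_pte_lockptr()
(untested, and ignoring the HGM series):

static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
					   struct mm_struct *mm, pte_t *pte)
{
	/*
	 * For PMD and contiguous-PMD sizes, "pte" actually points at a
	 * pmd entry, so the split PMD lock applies.  CONT_PTE-sized
	 * pages still fall back to the mm-wide lock here.
	 */
	if (huge_page_size(h) >= PMD_SIZE && huge_page_size(h) < PUD_SIZE)
		return pmd_lockptr(mm, (pmd_t *) pte);
	VM_BUG_ON(huge_page_size(h) == PAGE_SIZE);
	return &mm->page_table_lock;
}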

Thanks.

>
>>
>> Thanks.
