Date: Wed, 30 Nov 2022 17:31:52 +0100
Subject: Re: [PATCH 03/10] mm/hugetlb: Document huge_pte_offset usage
From: David Hildenbrand <>
On 30.11.22 17:25, Peter Xu wrote:
> On Wed, Nov 30, 2022 at 05:11:36PM +0100, David Hildenbrand wrote:
>> On 30.11.22 17:09, Peter Xu wrote:
>>> On Wed, Nov 30, 2022 at 11:24:34AM +0100, David Hildenbrand wrote:
>>>> On 29.11.22 20:35, Peter Xu wrote:
>>>>> huge_pte_offset() is potentially a pgtable walker, looking up pte_t* for a
>>>>> hugetlb address.
>>>>>
>>>>> Normally, it's always safe to walk a generic pgtable as long as we're with
>>>>> the mmap lock held for either read or write, because that guarantees the
>>>>> pgtable pages will always be valid during the process.
>>>>>
>>>>> But it's not true for hugetlbfs, especially shared: hugetlbfs can have its
>>>>> pgtable freed by pmd unsharing, which means that even with the mmap lock
>>>>> held for the current mm, the PMD pgtable page can still go away from under
>>>>> us if pmd unsharing is possible during the walk.
>>>>>
>>>>> So we have two ways to make it safe even for a shared mapping:
>>>>>
>>>>>   (1) If we're with the hugetlb vma lock held for either read/write, it's
>>>>>       okay because pmd unshare cannot happen at all.
>>>>>
>>>>>   (2) If we're with the i_mmap_rwsem lock held for either read/write, it's
>>>>>       okay because even if pmd unshare can happen, the pgtable page cannot
>>>>>       be freed from under us.
>>>>>
>>>>> Document it.
>>>>>
>>>>> Signed-off-by: Peter Xu <peterx@redhat.com>
>>>>> ---
>>>>>   include/linux/hugetlb.h | 32 ++++++++++++++++++++++++++++++++
>>>>>   1 file changed, 32 insertions(+)
>>>>>
>>>>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>>>>> index 551834cd5299..81efd9b9baa2 100644
>>>>> --- a/include/linux/hugetlb.h
>>>>> +++ b/include/linux/hugetlb.h
>>>>> @@ -192,6 +192,38 @@ extern struct list_head huge_boot_pages;
>>>>>   pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
>>>>>   			unsigned long addr, unsigned long sz);
>>>>> +/*
>>>>> + * huge_pte_offset(): Walk the hugetlb pgtable until the last level PTE.
>>>>> + * Returns the pte_t* if found, or NULL if the address is not mapped.
>>>>> + *
>>>>> + * Since this function will walk all the pgtable pages (including not only
>>>>> + * high-level pgtable pages, but also PUD entries that can be unshared
>>>>> + * concurrently for VM_SHARED), the caller of this function should be
>>>>> + * responsible for its thread safety.  One can follow these rules:
>>>>> + *
>>>>> + * (1) For private mappings: pmd unsharing is not possible, so it'll
>>>>> + *     always be safe if we're with the mmap sem for either read or write.
>>>>> + *     This is normally always the case, IOW we don't need to do anything
>>>>> + *     special.
>>>>
>>>> Maybe worth mentioning that hugetlb_vma_lock_read() and friends already
>>>> optimize for private mappings, to not take the VMA lock if not required.
>>>
>>> Yes we can.  I assume this is not super urgent, so I'll hold off a while
>>> to see whether there's anything else that needs amending in the
>>> documentation.
>>>
>>> Btw, even with hugetlb_vma_lock_read() checking SHARED for a private-only
>>> code path, it's still better to not take the lock at all, because that
>>> still involves a function call which would be unnecessary.
>>
>> IMHO it makes the code a lot more consistent and less error-prone when we
>> don't have to care about whether to take the lock or not (as an
>> optimization) and just have this handled "automatically".
>>
>> Optimizing a jump out rather smells like a micro-optimization.
>
> Or we can move the lock helpers into the headers, too.
Ah, yes.
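
To make the rule concrete, here is a minimal sketch of a walker following
way (1) from the commit message (the hugetlb vma lock). Illustrative only,
not code from this series; it assumes the hugetlb_vma_lock_read()/
hugetlb_vma_unlock_read() helpers introduced earlier in the series plus the
standard hugetlb accessors:

static pte_t lookup_hugetlb_pte(struct vm_area_struct *vma,
				unsigned long addr)
{
	unsigned long sz = huge_page_size(hstate_vma(vma));
	pte_t *ptep, pte = __pte(0);

	mmap_assert_locked(vma->vm_mm);	/* needed, but not sufficient alone */

	/*
	 * The vma lock blocks pmd unsharing, so the pgtable pages we walk
	 * cannot be freed from under us; for private mappings the helper
	 * can skip taking the lock entirely.
	 */
	hugetlb_vma_lock_read(vma);
	ptep = huge_pte_offset(vma->vm_mm, addr, sz);
	if (ptep)
		pte = huge_ptep_get(ptep);	/* copy while still stable */
	hugetlb_vma_unlock_read(vma);

	/* ptep itself may be stale now; only the copied value is safe */
	return pte;
}

Returning a copy of the pte rather than the pointer is deliberate: once the
lock is dropped, the pointer may point into a freed pgtable page on a
shared mapping.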
--
Thanks,

David / dhildenb