Subject: Re: [PATCH v2 4/4] mm/vmalloc: Hugepage vmalloc mappings
Excerpts from Will Deacon's message of April 15, 2020 8:47 pm:
> Hi Nick,
>
> On Mon, Apr 13, 2020 at 10:53:03PM +1000, Nicholas Piggin wrote:
>> For platforms that define HAVE_ARCH_HUGE_VMAP and support PMD vmap mappings,
>> have vmalloc attempt to allocate PMD-sized pages first, before falling back
>> to small pages. Allocations which use something other than PAGE_KERNEL
>> protections are not permitted to use huge pages yet, because not all callers
>> expect this (e.g., module allocations vs strict module rwx).
>>
>> This gives a 6x reduction in dTLB misses for a `git diff` (of linux), from
>> 45600 to 6500 and a 2.2% reduction in cycles on a 2-node POWER9.
>
> I wonder if it's worth extending vmap() to handle higher order pages in
> a similar way? That might be helpful for tracing PMUs such as Arm SPE,
> where the CPU streams tracing data out to a virtually addressed buffer
> (see rb_alloc_aux_page()).

Yeah, it becomes pretty trivial to do that with VM_HUGE_PAGES after
this patch. I have something that does it, but no callers are ready yet;
if you have an easy one, we can add it.
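
For illustration, a rough sketch of what a caller like rb_alloc_aux_page()
might end up doing -- the helper name aux_buf_map_huge() is made up and the
VM_HUGE_PAGES-aware vmap() is hypothetical, this is not the actual patch:

/*
 * Sketch only: allocate a PMD-sized, physically contiguous buffer and
 * hand it to a (hypothetical) vmap() that honours VM_HUGE_PAGES, so it
 * can install a single PMD leaf entry where the arch supports it.
 */
static void *aux_buf_map_huge(void)
{
	unsigned int i, nr = 1U << (PMD_SHIFT - PAGE_SHIFT);
	struct page *head, **pages;
	void *vaddr;

	head = alloc_pages(GFP_KERNEL | __GFP_ZERO, PMD_SHIFT - PAGE_SHIFT);
	if (!head)
		return NULL;

	pages = kmalloc_array(nr, sizeof(*pages), GFP_KERNEL);
	if (!pages) {
		__free_pages(head, PMD_SHIFT - PAGE_SHIFT);
		return NULL;
	}
	for (i = 0; i < nr; i++)
		pages[i] = head + i;

	/* VM_HUGE_PAGES: the range may be mapped with huge pages */
	vaddr = vmap(pages, nr, VM_MAP | VM_HUGE_PAGES, PAGE_KERNEL);

	kfree(pages);		/* vmap() does not keep the array */
	if (!vaddr)
		__free_pages(head, PMD_SHIFT - PAGE_SHIFT);
	return vaddr;
}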

>> This can result in more internal fragmentation and memory overhead for a
>> given allocation. It can also cause greater NUMA imbalance on hashdist
>> allocations.
>>
>> There may be other callers that expect small pages under vmalloc but use
>> PAGE_KERNEL, I'm not sure if it's feasible to catch them all. An
>> alternative would be a new function or flag which enables large mappings,
>> and use that in callers.
>>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>> include/linux/vmalloc.h | 2 +
>> mm/vmalloc.c | 135 +++++++++++++++++++++++++++++-----------
>> 2 files changed, 102 insertions(+), 35 deletions(-)
>>
>> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
>> index 291313a7e663..853b82eac192 100644
>> --- a/include/linux/vmalloc.h
>> +++ b/include/linux/vmalloc.h
>> @@ -24,6 +24,7 @@ struct notifier_block; /* in notifier.h */
>> #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */
>> #define VM_NO_GUARD 0x00000040 /* don't add guard page */
>> #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */
>> +#define VM_HUGE_PAGES 0x00000100 /* may use huge pages */
>
> Please can you add a check for this in the arm64 change_memory_common()
> code? Other architectures might need something similar, but we need to
> forbid changing memory attributes for portions of the huge page.

Yeah, good idea, I can look at adding some more checks.
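
Something along these lines, as a sketch only (modelled on the existing
find_vm_area()/VM_ALLOC test in arch/arm64/mm/pageattr.c, with a made-up
helper name -- not a tested patch):

/*
 * Sketch only: refuse attribute changes on areas that may be mapped
 * with huge pages, since changing a sub-range would mean splitting a
 * PMD/PUD leaf entry.
 */
static bool pageattr_change_allowed(unsigned long addr, unsigned long size)
{
	struct vm_struct *area = find_vm_area((void *)addr);

	if (!area || !(area->flags & VM_ALLOC))
		return false;
	if (addr + size > (unsigned long)area->addr + area->size)
		return false;
	/* New: huge-mapped areas can't take per-page attribute changes */
	if (area->flags & VM_HUGE_PAGES)
		return false;
	return true;
}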

>
> In general, I'm a bit wary of software table walkers tripping over this.
> For example, I don't think apply_to_existing_page_range() can handle
> huge mappings at all, but the one user (KASAN) only ever uses page mappings
> so it's ok there.

Right, I have something to warn for apply_to_page_range() (and I'm looking
at adding support for bigger pages). It doesn't even have a test-and-warn
at the moment, which isn't good practice IMO, so we should add one even
without huge vmap.
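
For reference, the sort of test-and-warn I mean, mirroring the existing
PMD-level walker in mm/memory.c (illustrative only, not the actual change):

/*
 * Sketch only: warn and bail out if the walker hits a huge mapping,
 * instead of silently treating the PMD entry as a page table.
 */
static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
			      unsigned long addr, unsigned long end,
			      pte_fn_t fn, void *data)
{
	unsigned long next;
	pmd_t *pmd;
	int err;

	pmd = pmd_alloc(mm, pud, addr);
	if (!pmd)
		return -ENOMEM;
	do {
		next = pmd_addr_end(addr, end);
		if (WARN_ON_ONCE(pmd_leaf(*pmd)))
			return -EINVAL;	/* huge mapping, no ptes to hand out */
		err = apply_to_pte_range(mm, pmd, addr, next, fn, data);
		if (err)
			return err;
	} while (pmd++, addr = next, addr != end);

	return 0;
}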

>
>> @@ -2325,9 +2356,11 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
>> if (unlikely(!size))
>> return NULL;
>>
>> - if (flags & VM_IOREMAP)
>> - align = 1ul << clamp_t(int, get_count_order_long(size),
>> - PAGE_SHIFT, IOREMAP_MAX_ORDER);
>> + if (flags & VM_IOREMAP) {
>> + align = max(align,
>> + 1ul << clamp_t(int, get_count_order_long(size),
>> + PAGE_SHIFT, IOREMAP_MAX_ORDER));
>> + }
>
>
> I don't follow this part. Please could you explain why you're potentially
> aligning above IOREMAP_MAX_ORDER? It doesn't seem to follow from the rest
> of the patch.

Trying to remember. If the caller asks for a particular alignment, we
shouldn't reduce it. I should put that in another patch.
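
To illustrate with made-up numbers (assuming 4K pages and the default
IOREMAP_MAX_ORDER of 7 + PAGE_SHIFT): an ioremap of 64K that asks for 2M
alignment.

    align (caller) = 0x200000  (2M)
    size           = 0x010000  (64K)

    old: align = 1ul << clamp_t(...)              -> 0x010000, request lost
    new: align = max(align, 1ul << clamp_t(...))  -> 0x200000, request kept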

Thanks,
Nick
