lkml.org 
[lkml]   [2023]   [Mar]   [27]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
 
Messages in this thread
/
Date
SubjectRe: What size anonymous folios should we allocate?
From
On 3/27/23 17:30, Ryan Roberts wrote:
> On 27/03/2023 13:41, Vlastimil Babka wrote:
>> On 2/22/23 04:52, Matthew Wilcox wrote:
>>> On Tue, Feb 21, 2023 at 03:05:33PM -0800, Yang Shi wrote:
>>>
>>>>> C. We add a new wrinkle to the LRU handling code. When our scan of the
>>>>> active list examines a folio, we look to see how many of the PTEs
>>>>> mapping the folio have been accessed. If it is fewer than half, and
>>>>> those half are all in either the first or last half of the folio, we
>>>>> split it. The active half stays on the active list and the inactive
>>>>> half is moved to the inactive list.
>>>>
>>>> With contiguous PTE, every PTE still maintains its own access bit (but
>>>> it is implementation defined, some implementations may just set access
>>>> bit once for one PTE in the contiguous region per arm arm IIUC). But
>>>> anyway this is definitely feasible.
>>>
>>> If a CPU doesn't have separate access bits for PTEs, then we should just
>>> not use the contiguous bits. Knowing which parts of the folio are
>>> unused is more important than using the larger TLB entries.
>>
>> Hm but AFAIK the AMD aggregation is transparent, there are no bits. And IIUC
>> the "Hardware Page Aggregation (HPA)" Ryan was talking about elsewhere in
>> the thread, that sounds similar. So I IIUC there will be a larger TLB entry
>> transparently, and then I don't expect the CPU to update individual bits as
>> that would defeat the purpose. So I'd expect it will either set them all to
>> active when forming the larger TLB entry, or set them on a single subpage
>> and leave the rest at whatever state they were. Hm I wonder if the exact
>> behavior is defined anywhere.
>
> For arm64, at least, there are 2 separate mechanisms:
>
> "The Contiguous Bit" (D8.6.1 in the Arm ARM) is a bit in the translation table
> descriptor that SW can set to indicate that a set of adjacent entries are
> contiguous and have same attributes and permissions etc. It is architectural.
> The order of the contiguous range is fixed and depends on the base page size
> that is in use. When in use, HW access and dirty reporting is only done at the
> granularity of the contiguous block.
>
> "HPA" is a micro-architectural feature on some Arm CPUs, which aims to do a
> similar thing, but is transparent to SW. In this case, the dirty and access bits
> remain per-page. But when they differ, this affects the performance of the feature.
>
> Typically HPA can coalesce up to 4 adjacent entries, whereas for a 4KB base page
> at least, the contiguous bit applies to 16 adjacent entries.

Hm if it's 4 entries on arm64 and presumably 8 on AMD, maybe we can only
care about how actively accessed are the individual "subpages" above that
size, to avoid dealing with this uncertainty whether HW tracks them. At such
smallish sizes we shouldn't induce massive overhead?

> I'm hearing that there are workloads where being able to use the contiguous bit
> really does make a difference, so I would like to explore solutions that can
> work when we only have access/dirty at the folio level.

And on the higher orders where we have explicit control via bits, we could
split the explicitly contiguous mappings once in a while to determine if the
sub-folios are still accessed? Although maybe with 16x4kB pages limit it may
still be not worth the trouble?

> Thanks,
> Ryan

\
 
 \ /
  Last update: 2023-03-27 17:50    [W:0.081 / U:0.268 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site