    Subject: Re: [PATCH bpf-next 5/5] bpf: use module_alloc_huge for bpf_prog_pack
    On Wed, 2022-05-18 at 06:34 +0000, Song Liu wrote:
    > > > I am not quite sure of the exact work needed here. Rick, would
    > > > you have time to enable VM_FLUSH_RESET_PERMS for huge pages?
    > > > Given the merge window is coming soon, I guess we need the
    > > > current workaround in 5.19.
    > >
    > > I would have a hard time squeezing that in now. The vmalloc part
    > > is easy; I think I already posted a diff. But first hibernate
    > > needs to be changed to not care about direct map page sizes.
    >
    > I guess I missed the diff; could you please send a link to it?


    https://lore.kernel.org/lkml/5bd16e2c06a2df357400556c6ae01bb5d3c5c32a.camel@intel.com/

    The remaining problem is that hibernate may encounter NP
    (not-present) pages when saving memory to disk. It resets them with
    CPA calls, 4k at a time. So if a page is NP, hibernate needs it to
    already be mapped 4k, or it might need to split the mapping. I think
    hibernate should just use a different mapping to get at the page
    when it encounters this rare scenario. In that diff I added some
    locking so that hibernate couldn't race with a huge NP page, but
    then I thought we should just change hibernate instead.
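
    For example (just a sketch of the idea, not what is in that diff):
    hibernate could copy an NP page's contents through a temporary
    vmap() alias instead of flipping the direct map back 4k at a time:

    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    /*
     * Hypothetical helper: read a page that is not present in the
     * direct map via a temporary alias, so the (possibly huge) direct
     * map entry never has to be reset or split.
     */
    static int hibernate_copy_unmapped_page(void *buf, struct page *page)
    {
            void *vaddr = vmap(&page, 1, VM_MAP, PAGE_KERNEL);

            if (!vaddr)
                    return -ENOMEM;

            copy_page(buf, vaddr);
            vunmap(vaddr);
            return 0;
    }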

    >
    > >
    > > >
    > > > >
    > > > > > Signed-off-by: Song Liu <song@kernel.org>
    > > > > > ---
    > > > > > kernel/bpf/core.c | 12 +++++++-----
    > > > > > 1 file changed, 7 insertions(+), 5 deletions(-)
    > > > > >
    > > > > > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
    > > > > > index cacd8684c3c4..b64d91fcb0ba 100644
    > > > > > --- a/kernel/bpf/core.c
    > > > > > +++ b/kernel/bpf/core.c
    > > > > > @@ -857,7 +857,7 @@ static size_t select_bpf_prog_pack_size(void)
    > > > > >  	void *ptr;
    > > > > >
    > > > > >  	size = BPF_HPAGE_SIZE * num_online_nodes();
    > > > > > -	ptr = module_alloc(size);
    > > > > > +	ptr = module_alloc_huge(size);
    > > > >
    > > > > This select_bpf_prog_pack_size() function always seemed
    > > > > weird - doing a big allocation and then immediately freeing
    > > > > it. Can't it check a config for vmalloc huge page support?
    > > >
    > > > Yes, it is weird. Checking a config is not enough here. We
    > > > also need to check vmap_allow_huge, which is controlled by the
    > > > boot parameter nohugevmalloc. I haven't got a better solution
    > > > for this.
    > >
    > > It's too weird. We should expose what's needed from vmalloc:
    > > huge_vmalloc_supported() or something.
    >
    > Yeah, this should work. I will get something like this in the next
    > version.
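
    Something along these lines would do it (untested sketch;
    vmap_allow_huge is the existing flag in mm/vmalloc.c, cleared by
    the nohugevmalloc boot parameter, it just isn't exposed today):

    /* mm/vmalloc.c */
    #ifdef CONFIG_HAVE_ARCH_HUGE_VMALLOC
    bool huge_vmalloc_supported(void)
    {
            return vmap_allow_huge;
    }
    #else
    bool huge_vmalloc_supported(void)
    {
            return false;
    }
    #endif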
    >
    > >
    > > I'm also not clear on why we wouldn't want to use the prog pack
    > > allocator even if vmalloc huge pages are disabled. Doesn't it
    > > improve performance even with small page sizes, per your
    > > benchmarks? What is the downside to just always using it?
    >
    > With the current version, when huge pages are disabled, the prog
    > pack allocator will use 4kB pages for each pack. We still get
    > about a 0.5% performance improvement with 4kB prog packs.

    Oh, I thought you were comparing a 2MB-sized, small-page-mapped
    allocation to a 2MB-sized, huge-page-mapped allocation.

    It looks like the logic is to free a pack once it is empty, so with
    smaller packs you are more likely to let the pages go back to the
    page allocator, and future allocations would then break more direct
    map pages.
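
    That is, paraphrasing the free path in kernel/bpf/core.c
    (bpf_prog_pack_free()), a whole pack is released as soon as its
    last chunk is freed:

            /* Clear the chunks this prog occupied in the pack's bitmap. */
            bitmap_clear(pack->bitmap, pos, nbits);

            /* If no chunk is still allocated, free the whole pack. */
            if (bitmap_find_next_zero_area(pack->bitmap, bpf_prog_chunk_count(),
                                           0, bpf_prog_chunk_count(), 0) == 0) {
                    list_del(&pack->list);
                    module_memfree(pack->ptr);
                    kfree(pack);
            }

    A 4kB pack holds far fewer chunks than a 2MB one, so it empties out
    (and gets freed) much more often.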

    So I think that is not a fully apples-to-apples test of the huge
    mapping benefits. I'd be surprised if there really was no huge
    mapping benefit, since it's been seen with core kernel text. Did
    you notice whether the direct map breakage was different between
    the tests?
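
    (On x86 that is easy to eyeball: the DirectMap counters in
    /proc/meminfo show how much of the direct map has been split,
    e.g. compared before and after each test run:

            $ grep DirectMap /proc/meminfo
            DirectMap4k:        ... kB
            DirectMap2M:        ... kB
            DirectMap1G:        ... kB

    More DirectMap4k and less DirectMap2M/1G after the run means more
    breakage.)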
