Subject: Re: [PATCH 07/16] memfd: memfd_create(name, MFD_HUGEPAGE) for shmem huge pages
On Wed, 4 Aug 2021, Kirill A. Shutemov wrote:
> On Fri, Jul 30, 2021 at 12:45:49AM -0700, Hugh Dickins wrote:
> > Commit 749df87bd7be ("mm/shmem: add hugetlbfs support to memfd_create()")
> > in 4.14 added the MFD_HUGETLB flag to memfd_create(), to use hugetlbfs
> > pages instead of tmpfs pages: now add the MFD_HUGEPAGE flag, to use tmpfs
> > Transparent Huge Pages when they can be allocated (flag named to follow
> > the precedent of madvise's MADV_HUGEPAGE for THPs).
>
> I don't like the interface. THP is supposed to be transparent, not yet
> another hugetlbfs.

THP is transparent in the sense that it builds hugepages from the
normal page pool, when it can (or not when it cannot), rather than
promising hugepages from a separate pre-reserved hugetlbfs pool.

Not transparent in the sense that it cannot be limited or guided.
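
For concreteness, a minimal userspace sketch of what the proposed flag
looks like from the caller's side - this assumes uapi headers from a tree
with this series applied, since MFD_HUGEPAGE is not in mainline, plus
glibc 2.27+ for the memfd_create() wrapper:

#define _GNU_SOURCE
#include <linux/memfd.h>        /* MFD_CLOEXEC, MFD_HUGEPAGE from the patched uapi */
#include <sys/mman.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        size_t len = 4UL << 20;         /* 4MB object: two PMD-sized extents on x86_64 */

        /* Ask for tmpfs THPs on this memfd; no hugetlb reservation involved */
        int fd = memfd_create("thp-buf", MFD_CLOEXEC | MFD_HUGEPAGE);
        if (fd < 0) {
                perror("memfd_create");
                return 1;
        }
        if (ftruncate(fd, len) < 0) {
                perror("ftruncate");
                return 1;
        }
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        /* Faults are filled with huge pages when the allocator can provide
         * them, and fall back to small pages when it cannot. */
        memset(p, 0, len);
        return 0;
}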

>
> > /sys/kernel/mm/transparent_hugepage/shmem_enabled "always" or "force"
> > already made this possible: but that is much too blunt an instrument,
> > affecting all the very different kinds of files on the internal shmem
> > mount, and was intended just for ease of testing hugepage loads.
>
> I wonder if you tried "always" in production? What breaks? Maybe we can
> make it work with a heuristic? This would speed up adoption.

We have not tried /sys/kernel/mm/transparent_hugepage/shmem_enabled
"always" in production. Is that an experiment I want to recommend for
production? No, I don't think so! Why should we?

I am not looking to "speed up adoption" of huge tmpfs everywhere:
let those who find it useful use it, there is no need for it to be
used everywhere.

We have had this disagreement before: you were aiming for tmpfs on /tmp
with huge=always, while I didn't see the need for that; but we have always
agreed that it should not be broken there, and the better it works the
better - you did the unused_huge_shrink work in particular to meet such
cases.
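
To make the scope difference concrete: the sysfs knob changes policy for
everything on the internal shmem mount at once, while a dedicated tmpfs
mount opts in only the files placed on it. A rough sketch (the mount
point path is made up for illustration and must already exist, and
mount(2) needs CAP_SYS_ADMIN):

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
        /*
         * Global: setting /sys/kernel/mm/transparent_hugepage/shmem_enabled
         * to "always" affects every object on the internal shmem mount
         * (memfds, SysV SHM, shared anonymous mappings), and "force" reaches
         * user tmpfs mounts too: the blunt instrument described above.
         */

        /* Selective: only files created under this mount get huge=always. */
        if (mount("tmpfs", "/mnt/huge-tmpfs", "tmpfs", 0,
                  "huge=always,size=1G") < 0) {
                perror("mount tmpfs huge=always");
                return 1;
        }
        return 0;
}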

>
> If a tunable needed, I would rather go with fadvise(). It would operate on
> a couple of bits per struct file and they get translated into VM_HUGEPAGE
> and VM_NOHUGEPAGE on mmap().
>
> Later if needed fadvise() implementation may be extended to track
> requested ranges. But initially it can be simple.

Let me shift that to the 08/16 (fcntl) response, and here answer:

> Hm, but why is the MFD_* needed if the fcntl() can do the same?

You're right, MFD_HUGEPAGE (and MFD_MEM_LOCK) are not strictly
needed if there's an fcntl() or fadvise() which can do that too.

But MFD_HUGEPAGE is the option which was first asked for, and is
the most popular usage internally - I did the fcntl at the same time,
and it has been found useful, but MFD_HUGEPAGE was the priority
(largely because fiddling with shmem_enabled interferes with
everyone's different usages, whereas huge=always on a mount
can be deployed selectively).

And it makes good sense for memfd_create() to offer MFD_HUGEPAGE,
as it is already offering MFD_HUGETLB: when we document MFD_HUGEPAGE
next to MFD_HUGETLB in the memfd_create(2) man page, that will help
developers to make a good choice.
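
For comparison, the MFD_HUGETLB side of that choice already looks like
this today (sketch assumes x86_64 with 2MB huge pages and glibc 2.27+;
the pages come from the pre-reserved hugetlb pool rather than being
allocated opportunistically):

#define _GNU_SOURCE
#include <linux/memfd.h>        /* MFD_HUGETLB, MFD_HUGE_2MB */
#include <sys/mman.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fd = memfd_create("hugetlb-buf",
                              MFD_CLOEXEC | MFD_HUGETLB | MFD_HUGE_2MB);
        if (fd < 0) {
                perror("memfd_create(MFD_HUGETLB)");
                return 1;
        }
        /* Size must be a multiple of the huge page size, and two 2MB pages
         * must be available from the hugetlb pool when this gets mapped. */
        if (ftruncate(fd, 4UL << 20) < 0) {
                perror("ftruncate");
                return 1;
        }
        close(fd);
        return 0;
}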

(You said MFD_*, so I take it that you're thinking of MFD_MEM_LOCK
too: MFD_MEM_LOCK is something I added when building this series,
when I realized that it became possible once size changes were permitted.
Nobody here is using it yet, I don't mind if it's dropped; but it's
natural to propose it as part of the series, and it can be justified
as offering the memlock option which MFD_HUGETLB already bundles in.)
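
(For reference only, and not the exact semantics of the proposed flag:
the nearest thing a caller has today is locking the mapping rather than
the memfd itself, roughly as below, charged against RLIMIT_MEMLOCK as
usual.)

#define _GNU_SOURCE
#include <linux/memfd.h>
#include <sys/mman.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        size_t len = 1UL << 20;
        int fd = memfd_create("locked-buf", MFD_CLOEXEC);

        if (fd < 0 || ftruncate(fd, len) < 0) {
                perror("memfd setup");
                return 1;
        }
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        /* Pin this mapping's pages in RAM; charged to RLIMIT_MEMLOCK. */
        if (mlock(p, len) < 0) {
                perror("mlock");
                return 1;
        }
        memset(p, 0, len);
        return 0;
}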

Hugh
