Subject: Re: [PATCH] hugetlb: select PREEMPT_COUNT if HUGETLB_PAGE for in_atomic use
On Wed, Mar 10, 2021 at 06:13:21PM -0800, Mike Kravetz wrote:
> put_page does not correctly handle all calling contexts for hugetlb
> pages. This was recently discussed in the threads [1] and [2].
>
> free_huge_page is the routine called for the final put_page of hugetlb
> pages. Since at least the beginning of git history, free_huge_page has
> acquired the hugetlb_lock to move the page to a free list and possibly
> perform other processing. When this code was originally written, the
> hugetlb_lock should have been made irq safe.
>
> For many years, nobody noticed this situation until lockdep code caught
> free_huge_page being called from irq context. By this time, another
> lock (hugetlb subpool) was also taken in the free_huge_page path.

AFAICT there's no actual problem with making spool->lock IRQ-safe too.
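A sketch of what that conversion looks like (untested; function and
field names assumed from mm/hugetlb.c, actual accounting elided):

static void hugepage_subpool_get_pages(struct hugepage_subpool *spool,
                                       long delta)
{
        unsigned long flags;

        /* was spin_lock()/spin_unlock(); the IRQ-safe variants make
         * this section usable from contexts with IRQs disabled. */
        spin_lock_irqsave(&spool->lock, flags);
        spool->used_hpages += delta;    /* existing accounting */
        spin_unlock_irqrestore(&spool->lock, flags);
}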

> In addition, hugetlb cgroup code had been added which could hold
> hugetlb_lock for a considerable period of time.

cgroups, always bloody cgroups. The scheduler (and a fair number of
other places) get to deal with cgroups with IRQs disabled, so I'm sure
this can too.

> Because of this, commit
> c77c0a8ac4c5 ("mm/hugetlb: defer freeing of huge pages if in non-task
> context") was added to address the issue of free_huge_page being called
> from irq context. That commit hands off free_huge_page processing to a
> workqueue if !in_task.
>
> The !in_task check handles the case of being called from irq context.
> However, it does not take into account the case when called with irqs
> disabled as in [1].
>
> To complicate matters, functionality has been added to hugetlb
> such that free_huge_page may block/sleep in certain situations. The
> hugetlb_lock is of course dropped before potentially blocking.

AFAICT that's because of CMA, right? It's only hstate_is_gigantic() and
free_gigantic_page() that have that particular trainwreck.

So you could move the workqueue there, and leave all the other hugetlb
sizes unaffected. AFAICT if you limit the workqueue crud to
cma_clear_bitmap(), you don't get your...

> One way to handle all calling contexts is to have free_huge_page always
> send pages to the workqueue for processing. This idea was briefly
> discussed here [3], but has some undesirable side effects.

... user visible side effects either.
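Illustrative sketch of that idea (not a patch; the names here are made
up, only cma_clear_bitmap() is real). cma_release() would allocate and
queue one of these whenever it cannot block:

struct cma_clear_work {
        struct work_struct work;
        struct cma *cma;
        unsigned long pfn;
        unsigned long count;
};

static void cma_clear_workfn(struct work_struct *work)
{
        struct cma_clear_work *cw =
                container_of(work, struct cma_clear_work, work);

        /* The only part of the gigantic-page free path that takes a
         * mutex and may sleep; everything else could then run with
         * the hugetlb locks IRQ-safe. */
        cma_clear_bitmap(cw->cma, cw->pfn, cw->count);
        kfree(cw);
}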

> Ideally, the hugetlb_lock should have been irq safe from the beginning
> and any code added to the free_huge_page path should have taken this
> into account. However, this has not happened. The code today does have
> the ability to hand off requests to a workqueue. It does this for calls
> from irq context. Changing the check in the code from !in_task to
> in_atomic would handle the situations when called with irqs disabled.
> However, it does not handle the case when called with a spinlock
> held. This is needed because the code could block/sleep.

I'll argue the current workqueue thing is in the wrong place to begin
with.

So how about you make hugetlb_lock and spool->lock IRQ-safe, move the
workqueue thingy into cma_release(), and then worry about optimizing the
cgroup crap?

Correctness first, performance second. Also, if you really care about
performance, not using cgroups is a very good option anyway.
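For reference, the correctness half is mostly mechanical (untested
sketch, simplified from mm/hugetlb.c; the real conversion has to touch
every hugetlb_lock site):

void free_huge_page(struct page *page)
{
        unsigned long flags;

        /* was spin_lock(); with the IRQ-safe variant there's no need
         * to guess at the calling context via !in_task()/in_atomic()
         * for the non-blocking cases. */
        spin_lock_irqsave(&hugetlb_lock, flags);
        /* ... move the page to its hstate free list, update counters,
         * hugetlb cgroup uncharge, etc., all with IRQs off ... */
        spin_unlock_irqrestore(&hugetlb_lock, flags);
}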
