Date:	2022-05-27
From:	Mel Gorman
Subject: Re: [PATCH 0/6] Drain remote per-cpu directly v3
    On Thu, May 26, 2022 at 01:19:38PM -0400, Qian Cai wrote:
    > On Thu, May 12, 2022 at 09:50:37AM +0100, Mel Gorman wrote:
    > > Changelog since v2
    > > o More conversions from page->lru to page->[pcp_list|buddy_list]
    > > o Additional test results in changelogs
    > >
    > > Changelog since v1
    > > o Fix unsafe RT locking scheme
    > > o Use spin_trylock on UP PREEMPT_RT
    > >
    > > This series has the same intent as Nicolas' series "mm/page_alloc: Remote
    > > per-cpu lists drain support" -- avoid interference of a high priority
    > > task due to a workqueue item draining per-cpu page lists. While many
    > > workloads can tolerate a brief interruption, it may cause a real-time
    > > task running on a NOHZ_FULL CPU to miss a deadline and, at minimum,
    > > the draining is non-deterministic.
    > >
    > > Currently an IRQ-safe local_lock protects the page allocator per-cpu lists.
    > > The local_lock on its own prevents migration and the IRQ disabling protects
    > > from corruption due to an interrupt arriving while a page allocation is
    > > in progress. The locking is inherently unsafe for remote access unless
    > > the CPU is hot-removed.
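    > >
    > > As a minimal sketch of the current allocation-side pattern
    > > (simplified from mm/page_alloc.c; the migratetype plumbing and
    > > the fallback to the buddy lists are omitted, and zone, order and
    > > pindex are as in the real caller):
    > >
    > >	struct per_cpu_pages *pcp;
    > >	struct page *page;
    > >	unsigned long flags;
    > >
    > >	/* Pins the task to this CPU and blocks IRQ reentry */
    > >	local_lock_irqsave(&pagesets.lock, flags);
    > >	pcp = this_cpu_ptr(zone->per_cpu_pageset);
    > >
    > >	/* Safe: no migration and no interrupt can touch pcp here */
    > >	page = list_first_entry_or_null(&pcp->lists[pindex],
    > >					struct page, lru);
    > >	if (page) {
    > >		list_del(&page->lru);
    > >		pcp->count -= 1 << order;
    > >	}
    > >
    > >	local_unlock_irqrestore(&pagesets.lock, flags);
    > >
    > > Nothing in that scheme allows a remote CPU to touch the lists
    > > safely, which is why draining currently requires scheduling work
    > > on the owning CPU.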
    > >
    > > This series adjusts the locking. A spinlock is added to struct
    > > per_cpu_pages to protect the list contents while local_lock_irq continues
    > > to prevent migration and IRQ reentry. This allows a remote CPU to safely
    > > drain a remote per-cpu list.
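    > >
    > > A minimal sketch of the resulting structure (fields trimmed and
    > > comments abbreviated; the patches are authoritative):
    > >
    > >	struct per_cpu_pages {
    > >		spinlock_t lock;	/* Protects lists field */
    > >		int count;		/* pages in the lists */
    > >		int high;		/* high watermark */
    > >		int batch;		/* buddy add/remove chunk */
    > >		...
    > >		struct list_head lists[NR_PCP_LISTS];
    > >	};
    > >
    > > local_lock still pins the allocating task to its CPU, but the
    > > list contents are now serialised by pcp->lock, which a remote
    > > CPU may also take.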
    > >
    > > This is a partial series. Follow-on work should allow the
    > > local_irq_save to be converted to a local_irq to avoid IRQs being
    > > disabled/enabled in most cases. Consequently, there are some TODO comments
    > > highlighting the places that would change if local_irq was used. However,
    > > there are enough corner cases that it deserves a series on its own
    > > separated by one kernel release and the priority right now is to avoid
    > > interference of high priority tasks.
    > >
    > > Patch 1 is a cosmetic patch to clarify when page->lru is storing buddy pages
    > > and when it is storing per-cpu pages.
    > >
    > > Patch 2 shrinks per_cpu_pages to make room for a spin lock. Strictly speaking
    > > this is not necessary but it avoids per_cpu_pages consuming another
    > > cache line.
    > >
    > > Patch 3 is a preparation patch to avoid code duplication.
    > >
    > > Patch 4 is a simple micro-optimisation that improves code flow necessary for
    > > a later patch to avoid code duplication.
    > >
    > > Patch 5 uses a spin_lock to protect the per_cpu_pages contents while still
    > > relying on local_lock to prevent migration, stabilise the pcp
    > > lookup and prevent IRQ reentrancy.
    > >
    > > Patch 6 drains remote per-cpu pages directly instead of using a
    > > workqueue, as sketched below.
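    > >
    > > With pcp->lock in place, the drain no longer needs to run on the
    > > target CPU. A minimal sketch of patches 5 and 6 combined (the
    > > function name is illustrative; the real patches also handle
    > > batching and the UP PREEMPT_RT case):
    > >
    > >	static void drain_zone_pcp_remote(struct zone *zone, int cpu)
    > >	{
    > >		struct per_cpu_pages *pcp;
    > >		unsigned long flags;
    > >
    > >		pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
    > >
    > >		/* pcp->lock makes the lists safe to modify even
    > >		 * though we may be running on a different CPU */
    > >		spin_lock_irqsave(&pcp->lock, flags);
    > >		if (pcp->count)
    > >			free_pcppages_bulk(zone, pcp->count, pcp, 0);
    > >		spin_unlock_irqrestore(&pcp->lock, flags);
    > >	}
    > >
    > > so no workqueue item ever has to be queued on the NOHZ_FULL CPU.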
    >
    > Mel, we saw a spontaneous "mm_percpu_wq" crash on today's linux-next
    > tree while running CPU offlining/onlining, and we are wondering if
    > you have any thoughts?
    >

    Do you think it's related to the series and, if so, why? From the
    warning, it's not obvious to me why it would be, given that it's a
    warning about a task not being inactive when it is expected to be.

    --
    Mel Gorman
    SUSE Labs
