    Subject: [PATCHSET v3 wq/for-6.5] workqueue: Implement automatic CPU intensive detection and add monitoring

    Hello,

    v3: * Switched to hooking into scheduler_tick() instead of the scheduling
    paths as suggested by Peter. It's less gnarly and works well in general;
    however, as the mechanism is now sampling-based, there can be
    contrived cases where detection is temporarily avoided. Also, it
    wouldn't work on nohz_full CPUs. Neither is critical, especially given
    that common offenders are likely to be weeded out by the debug
    reporting over time.

    * As the above means that workqueue is no longer observing all
    scheduling events, it can't track the CPU time consumed by the
    workers on its own and thus can't use global clocks (e.g. jiffies).
    The CPU time consumption tracking is still done with
    p->se.sum_exec_runtime.

    * The mechanism was incorrectly monitoring the entire CPU time a given
    work item had consumed rather than each stretch of execution without
    intervening sleeps. Fixed; a sketch of the per-stretch accounting
    follows these notes.

    * CPU time monitoring is now tick-sampling based. The previous
    p->se.sum_exec_runtime implementation was missing the CPU time consumed
    between the work item's last scheduling event and its completion, so,
    e.g., work items that never schedule would always be accounted as zero
    CPU time. While the sampling-based implementation isn't very accurate,
    it's good enough for getting an overall picture of which workqueues
    consume a lot of CPU cycles.

    * Patches reordered so that the visibility one can be applied first.
    Documentation improved.
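
    As a rough sketch of the per-stretch accounting mentioned above
    (hedged: the field and helper names here are illustrative, not
    necessarily what the patches use), the idea is to snapshot the task's
    cumulative runtime whenever a stretch begins, i.e. at work start and at
    each wakeup, so that sleeps reset the measurement:

        #include <linux/sched.h>
        #include <linux/types.h>

        struct worker {                 /* reduced to the relevant fields */
                struct task_struct      *task;
                u64                     current_at;     /* hypothetical: runtime at stretch start */
        };

        static void stretch_begin(struct worker *worker)
        {
                /* called when a work item starts and when the worker wakes up */
                worker->current_at = worker->task->se.sum_exec_runtime;
        }

        static u64 stretch_ns(struct worker *worker)
        {
                /* CPU time (ns) consumed since the current stretch began */
                return worker->task->se.sum_exec_runtime - worker->current_at;
        }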

    v2: * Lai pointed out that !SM_NONE cases should also be notified. 0001 and
    0004 are updated accordingly.

    * PeterZ suggested reporting on work items that trigger the automatic
    CPU intensive mechanism. 0006 adds reporting of work functions that
    trigger the mechanism repeatedly, with exponential backoff (sketched
    below).
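
    As a hedged illustration of that backoff (the names are made up; the
    real reporting is in 0006): report a work function only when its
    trigger count reaches a power of two, so repeat offenders keep
    surfacing without flooding the log:

        #include <linux/log2.h>
        #include <linux/printk.h>
        #include <linux/workqueue.h>

        /* illustrative only: reports at hit counts 1, 2, 4, 8, ... */
        static void report_cpu_intensive(work_func_t func, unsigned long hits)
        {
                if (is_power_of_2(hits))
                        pr_warn("workqueue: %ps hogged CPU (%lu times)\n",
                                func, hits);
        }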


    To reduce the number of concurrent worker threads, workqueue holds back
    starting per-cpu work items while the previous work item stays in the
    RUNNING state. As such, a per-cpu work item that consumes a lot of CPU
    cycles, even if it has cond_resched()'s in the right places, can stall
    other per-cpu work items.
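
    To make the failure mode concrete, here's an illustrative fragment (not
    from the patchset; the function names are made up). The hogging work
    item yields regularly yet never sleeps, so concurrency management keeps
    the sibling work item from starting:

        #include <linux/jiffies.h>
        #include <linux/printk.h>
        #include <linux/sched.h>
        #include <linux/workqueue.h>

        static void hog_fn(struct work_struct *work)
        {
                unsigned long end = jiffies + HZ;       /* burn ~1s of CPU */

                while (time_before(jiffies, end))
                        cond_resched();         /* yields, but stays RUNNING */
        }

        static void victim_fn(struct work_struct *work)
        {
                pr_info("victim ran\n");        /* held back until hog_fn returns */
        }

        static DECLARE_WORK(hog_work, hog_fn);
        static DECLARE_WORK(victim_work, victim_fn);

        static void demo(void)
        {
                /* same CPU, same per-cpu pool: victim waits on hog */
                queue_work_on(0, system_wq, &hog_work);
                queue_work_on(0, system_wq, &victim_work);
        }

    With the auto-detection added by this series, hog_fn would be marked
    CPU intensive once it crosses the threshold and victim_fn would no
    longer be held back for the full stretch.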

    To support per-cpu work items that may occupy the CPU for a substantial
    period of time, workqueue has the WQ_CPU_INTENSIVE flag, which exempts
    work items issued through the marked workqueue from concurrency
    management - they're started immediately and don't block other work
    items. While this works, it's error-prone in that a workqueue user can
    easily forget to set the flag or set it unnecessarily. Furthermore, the
    impact of a wrong flag setting can be rather indirect and challenging
    to root-cause.
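
    For reference, opting in looks something like this (an illustrative
    fragment; "my_wq" and "my_setup" are placeholders):

        #include <linux/errno.h>
        #include <linux/workqueue.h>

        static struct workqueue_struct *my_wq;

        static int my_setup(void)
        {
                /* work queued on my_wq skips concurrency management entirely */
                my_wq = alloc_workqueue("my_wq", WQ_CPU_INTENSIVE, 0);
                return my_wq ? 0 : -ENOMEM;
        }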

    This patchset makes workqueue auto-detect CPU intensive work items
    based on CPU consumption. If a work item consumes more than the
    threshold (10ms by default) of CPU time in a single stretch, it's
    automatically marked as CPU intensive, which unblocks the starting of
    pending per-cpu work items.
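
    A hedged sketch of what the tick-driven detection can look like
    (identifiers follow workqueue internals where applicable, but the
    worker lookup, locking and sanity checks are omitted and details may
    differ from the actual patches):

        #include <linux/kthread.h>
        #include <linux/sched.h>
        #include <linux/time64.h>

        #define CPU_INTENSIVE_THRESH_NS (10 * NSEC_PER_MSEC)    /* 10ms default */

        /* sketch: invoked for workqueue workers from scheduler_tick() */
        void wq_worker_tick(struct task_struct *task)
        {
                struct worker *worker = kthread_data(task);

                /* stretch_ns() as sketched earlier: CPU time since last sleep */
                if (stretch_ns(worker) < CPU_INTENSIVE_THRESH_NS)
                        return;

                /* exempt the worker from concurrency management ... */
                worker_set_flags(worker, WORKER_CPU_INTENSIVE);
                /* ... and kick the pool so pending work items can start */
                wake_up_worker(worker->pool);
        }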

    The mechanism isn't foolproof in that the detection delays can add up if
    many CPU-hogging work items are queued at the same time. However, in such
    situations, the bigger problem likely is the CPU being saturated with
    per-cpu work items and the solution would be making them UNBOUND. Future
    changes will make UNBOUND workqueues more attractive by improving their
    locality behaviors and configurability. We might eventually remove the
    explicit WQ_CPU_INTENSIVE flag.

    While at it, add statistics and a monitoring script. Lack of visibility
    has always been a bit of a pain point when debugging workqueue related
    issues, and with this change and the more drastic ones planned for
    workqueue, this is a good time to address the shortcoming.
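
    As a usage sketch (the script's actual flags and output columns are
    whatever the script itself defines; see 0001), pointing the monitor at
    a workqueue by name might look like:

        $ tools/workqueue/wq_monitor.py events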

    This patchset was born out of the discussion in the following thread:

    https://lkml.kernel.org/r/CAHk-=wgE9kORADrDJ4nEsHHLirqPCZ1tGaEPAZejHdZ03qCOGg@mail.gmail.com

    and contains the following patches:

    0001-workqueue-Add-pwq-stats-and-a-monitoring-script.patch
    0002-workqueue-Re-order-struct-worker-fields.patch
    0003-workqueue-Move-worker_set-clr_flags-upwards.patch
    0004-workqueue-Improve-locking-rule-description-for-worke.patch
    0005-workqueue-Automatically-mark-CPU-hogging-work-items-.patch
    0006-workqueue-Report-work-funcs-that-trigger-automatic-C.patch
    0007-workqueue-Track-and-monitor-per-workqueue-CPU-time-u.patch

    and also available in the following git branch:

    git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git auto-cpu-intensive-v3

    diffstat follows. Thanks.

    Documentation/core-api/workqueue.rst | 32 +++++
    kernel/sched/core.c | 3
    kernel/workqueue.c | 337 ++++++++++++++++++++++++++++++++++++++++++++++++++-----------
    kernel/workqueue_internal.h | 24 ++--
    lib/Kconfig.debug | 13 ++
    tools/workqueue/wq_monitor.py | 169 ++++++++++++++++++++++++++++++
    6 files changed, 507 insertions(+), 71 deletions(-)

    --
    tejun
