    From: Marco Elver <elver@google.com>
    Date: Tue, 28 Jun 2022
    Subject: Re: [PATCH v2 13/13] perf/hw_breakpoint: Optimize toggle_bp_slot() for CPU-independent task targets
    On Tue, 28 Jun 2022 at 17:45, Dmitry Vyukov <dvyukov@google.com> wrote:
    >
    > On Tue, 28 Jun 2022 at 11:59, Marco Elver <elver@google.com> wrote:
    > >
    > > We can still see that a majority of the time is spent hashing task pointers:
    > >
    > > ...
    > > 16.98% [kernel] [k] rhashtable_jhash2
    > > ...
    > >
    > > Doing the bookkeeping in toggle_bp_slot() is currently O(#cpus),
    > > calling task_bp_pinned() for each CPU, even if task_bp_pinned() is
    > > CPU-independent. The reason for this is to update the per-CPU
    > > 'tsk_pinned' histogram.
    > >
    > > To optimize the CPU-independent case to O(1), keep a separate
    > > CPU-independent 'tsk_pinned_all' histogram.
    > >
    > > The major source of complexity is the transitions between "all
    > > CPU-independent task breakpoints" and "mixed CPU-independent and
    > > CPU-dependent task breakpoints". The code comments list all cases that
    > > require handling.
    > >
    > > After this optimization:
    > >
    > > | $> perf bench -r 100 breakpoint thread -b 4 -p 128 -t 512
    > > | Total time: 1.758 [sec]
    > > |
    > > | 34.336621 usecs/op
    > > | 4395.087500 usecs/op/cpu
    > >
    > > 38.08% [kernel] [k] queued_spin_lock_slowpath
    > > 10.81% [kernel] [k] smp_cfm_core_cond
    > > 3.01% [kernel] [k] update_sg_lb_stats
    > > 2.58% [kernel] [k] osq_lock
    > > 2.57% [kernel] [k] llist_reverse_order
    > > 1.45% [kernel] [k] find_next_bit
    > > 1.21% [kernel] [k] flush_tlb_func_common
    > > 1.01% [kernel] [k] arch_install_hw_breakpoint
    > >
    > > Showing that the time spent hashing keys has become insignificant.
    > >
    > > With the given benchmark parameters, that's an improvement of 12%
    > > compared with the old O(#cpus) version.
    > >
    > > And finally, using the less aggressive parameters from the preceding
    > > changes, we now observe:
    > >
    > > | $> perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64
    > > | Total time: 0.067 [sec]
    > > |
    > > | 35.292187 usecs/op
    > > | 2258.700000 usecs/op/cpu
    > >
    > > This is an improvement of 12% compared to the version without the
    > > histogram optimizations (with a baseline of 40 usecs/op), now on par
    > > with the theoretical ideal (constraints disabled), and only 12%
    > > slower than having no breakpoints at all.
    > >
    > > Signed-off-by: Marco Elver <elver@google.com>
    >
    > Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
    >
    > I don't see any bugs. But the code is quite complex. Does it make
    > sense to add some asserts to the histogram type? E.g. that counters
    > don't underflow and that the weight is not negative (e.g. an
    > accidentally added -1 returned from task_bp_pinned()). Not sure if
    > that will be enough to catch all types of bugs, though.
    > Could the KUnit tests check that the histograms are all 0's at the end?
    >
    > I am not just concerned about the current code (which may be correct),
    > but also about future modifications to this code.
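
    To restate the idea from the quoted commit message in code form -- purely
    an illustrative sketch, not the actual patch; the BP_SLOTS constant and
    the exact layout below are assumptions -- tasks whose breakpoints are all
    CPU-independent are accounted in a single shared 'tsk_pinned_all'
    histogram, so toggling such a breakpoint updates one histogram instead of
    one per CPU:

    /* Illustrative sketch only -- not the actual patch. */
    #include <linux/atomic.h>
    #include <linux/percpu.h>

    #define BP_SLOTS 4  /* assumed slot count; the real code asks the arch */

    /* count[i] == number of tasks with exactly i+1 pinned breakpoints. */
    struct bp_slots_histogram {
            atomic_t count[BP_SLOTS];
    };

    /* Per-CPU histogram for tasks that have CPU-bound breakpoints ... */
    static DEFINE_PER_CPU(struct bp_slots_histogram, tsk_pinned);
    /*
     * ... plus one shared histogram for tasks whose breakpoints are all
     * CPU-independent: toggling such a breakpoint is a single update here
     * (O(1)) instead of one update per CPU (O(#cpus)).
     */
    static struct bp_slots_histogram tsk_pinned_all;

    /* Move a task from bucket 'old' to bucket 'old + val' (val may be negative). */
    static void bp_slots_histogram_add(struct bp_slots_histogram *hist, int old, int val)
    {
            const int old_idx = old - 1;
            const int new_idx = old_idx + val;

            if (old_idx >= 0)
                    atomic_dec(&hist->count[old_idx]);
            if (new_idx >= 0)
                    atomic_inc(&hist->count[new_idx]);
    }

    A task that mixes CPU-bound and CPU-independent breakpoints would be
    accounted entirely in the per-CPU histograms, which is where the
    transition handling mentioned above comes in.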

    I'll think of some more options.

    The bp_slots_histogram_max*() functions already have asserts (WARN on
    underflow; some with KCSAN's help).
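
    Concretely, the flavour of assert I mean, as an illustrative sketch on
    top of the toy struct above (not the exact code in the patch;
    ASSERT_EXCLUSIVE_WRITER() is the KCSAN annotation that flags concurrent
    writers while we expect a stable snapshot):

    #include <linux/kcsan-checks.h>

    /* Highest non-empty bucket + 1, i.e. the max #pinned over all tasks. */
    static int bp_slots_histogram_max(struct bp_slots_histogram *hist)
    {
            for (int i = BP_SLOTS - 1; i >= 0; i--) {
                    const int count = atomic_read(&hist->count[i]);

                    /* KCSAN: nothing should be writing while we read this. */
                    ASSERT_EXCLUSIVE_WRITER(hist->count[i]);
                    if (count > 0)
                            return i + 1;
                    /* Catch underflow, e.g. a stray -1 fed into the histogram. */
                    WARN(count < 0, "inconsistent breakpoint slots histogram");
            }
            return 0;
    }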

    The main thing I did to raise my own confidence in the code was to
    inject bugs and see if the KUnit test catches them; if it didn't, I
    extended the tests. I'll do that some more, maybe.
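
    Re the all-0's check: something along these lines should be doable as an
    end-of-test invariant -- a hypothetical sketch reusing the toy
    definitions above; the real test would need a way to reach the internal
    histograms, and the breakpoint register/unregister steps are elided:

    #include <kunit/test.h>
    #include <linux/cpumask.h>

    static void expect_histogram_empty(struct kunit *test, struct bp_slots_histogram *hist)
    {
            for (int i = 0; i < BP_SLOTS; i++)
                    KUNIT_EXPECT_EQ(test, atomic_read(&hist->count[i]), 0);
    }

    /* After the test has released all its breakpoints, every bucket must be 0. */
    static void test_histograms_all_zero(struct kunit *test)
    {
            int cpu;

            /* ... register and unregister a bunch of breakpoints here ... */

            expect_histogram_empty(test, &tsk_pinned_all);
            for_each_possible_cpu(cpu)
                    expect_histogram_empty(test, per_cpu_ptr(&tsk_pinned, cpu));
    }

    static struct kunit_case bp_histogram_cases[] = {
            KUNIT_CASE(test_histograms_all_zero),
            {},
    };

    static struct kunit_suite bp_histogram_suite = {
            .name = "hw_breakpoint_histograms",
            .test_cases = bp_histogram_cases,
    };
    kunit_test_suites(&bp_histogram_suite);

    Since the series already has a KUnit test, this would just be one more
    check at the end of each test case.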
