    Subject: Re: [PATCH v5 10/18] watchdog/hardlockup: Add a "cpu" param to watchdog_hardlockup_check()
    On Fri 2023-05-19 10:18:34, Douglas Anderson wrote:
    > In preparation for the buddy hardlockup detector where the CPU
    > checking for lockup might not be the currently running CPU, add a
    > "cpu" parameter to watchdog_hardlockup_check().
    >
    > As part of this change, make hrtimer_interrupts an atomic_t since now
    > the CPU incrementing the value and the CPU reading the value might be
    > different. Technically this could also be done with just READ_ONCE and
    > WRITE_ONCE, but atomic_t feels a little cleaner in this case.
    >
    > While hrtimer_interrupts is made atomic_t, we change
    > hrtimer_interrupts_saved from "unsigned long" to "int". The "int" is
    > needed to match the data type backing atomic_t for hrtimer_interrupts.
    > Even if this changes us from 64-bits to 32-bits (which I don't think
    > is true for most compilers), it doesn't really matter. All we ever do
    > is increment it every few seconds and compare it to an old value so
    > 32-bits is fine (even 16-bits would be). The "signed" vs "unsigned"
    > also doesn't matter for simple equality comparisons.
    >
    > hrtimer_interrupts_saved is _not_ switched to atomic_t nor even
    > accessed with READ_ONCE / WRITE_ONCE. hrtimer_interrupts_saved is
    > always consistently accessed by the same CPU. NOTE: with the
    > upcoming "buddy" detector there is one special case. When a CPU goes
    > offline/online then we can change which CPU is the one to consistently
    > access a given instance of hrtimer_interrupts_saved. We still can't
    > end up with a partially updated hrtimer_interrupts_saved, however,
    > because we end up petting all affected CPUs to make sure the new and
    > old CPU can't somehow end up reading/writing hrtimer_interrupts_saved
    > at the same time.
    >
    > --- a/kernel/watchdog.c
    > +++ b/kernel/watchdog.c
    > @@ -87,29 +87,34 @@ __setup("nmi_watchdog=", hardlockup_panic_setup);
    >
    > #if defined(CONFIG_HARDLOCKUP_DETECTOR_PERF)
    >
    > -static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts);
    > -static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts_saved);
    > +static DEFINE_PER_CPU(atomic_t, hrtimer_interrupts);
    > +static DEFINE_PER_CPU(int, hrtimer_interrupts_saved);
    > static DEFINE_PER_CPU(bool, watchdog_hardlockup_warned);
    > static unsigned long watchdog_hardlockup_all_cpu_dumped;
    >
    > -static bool is_hardlockup(void)
    > +static bool is_hardlockup(unsigned int cpu)
    > {
    > - unsigned long hrint = __this_cpu_read(hrtimer_interrupts);
    > + int hrint = atomic_read(&per_cpu(hrtimer_interrupts, cpu));
    >
    > - if (__this_cpu_read(hrtimer_interrupts_saved) == hrint)
    > + if (per_cpu(hrtimer_interrupts_saved, cpu) == hrint)
    > return true;
    >
    > - __this_cpu_write(hrtimer_interrupts_saved, hrint);
    > + /*
    > + * NOTE: we don't need any fancy atomic_t or READ_ONCE/WRITE_ONCE
    > + * for hrtimer_interrupts_saved. hrtimer_interrupts_saved is
    > + * written/read by a single CPU.
    > + */
    > + per_cpu(hrtimer_interrupts_saved, cpu) = hrint;
    >
    > return false;
    > }
    >
    > static void watchdog_hardlockup_kick(void)
    > {
    > - __this_cpu_inc(hrtimer_interrupts);
    > + atomic_inc(raw_cpu_ptr(&hrtimer_interrupts));

    Is there any particular reason why raw_*() is needed, please?

    My expectation is that the raw_ API should be used only when there is
    a good reason for it, for example, when the debug checks might fail
    even though consistency is guaranteed another way.

    IMHO, we should use:

    atomic_inc(this_cpu_ptr(&hrtimer_interrupts));
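
    i.e. the hunk would then become something like this (untested, just
    to make the suggestion concrete):

    -	atomic_inc(raw_cpu_ptr(&hrtimer_interrupts));
    +	atomic_inc(this_cpu_ptr(&hrtimer_interrupts));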

    To be honest, I am a bit lost in the per_cpu API definitions.

    But this_cpu_ptr() seems to be implemented the same way as
    per_cpu_ptr() when CONFIG_DEBUG_PREEMPT is enabled.
    And we use per_cpu_ptr() in is_hardlockup().

    Also this_cpu_ptr() is used more commonly:

    $> git grep this_cpu_ptr | wc -l
    1385
    $> git grep raw_cpu_ptr | wc -l
    114

    > }
    >
    > -void watchdog_hardlockup_check(struct pt_regs *regs)
    > +void watchdog_hardlockup_check(unsigned int cpu, struct pt_regs *regs)
    > {
    > /*
    > * Check for a hardlockup by making sure the CPU's timer
    > @@ -117,35 +122,42 @@ void watchdog_hardlockup_check(struct pt_regs *regs)
    > * fired multiple times before we overflow'd. If it hasn't
    > * then this is a good indication the cpu is stuck
    > */
    > - if (is_hardlockup()) {
    > + if (is_hardlockup(cpu)) {
    > unsigned int this_cpu = smp_processor_id();
    > + struct cpumask backtrace_mask = *cpu_online_mask;

    Does this work, please?

    IMHO, we should use cpumask_copy().
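
    Something like this (untested sketch, reusing the variable from the
    hunk above):

    	struct cpumask backtrace_mask;

    	cpumask_copy(&backtrace_mask, cpu_online_mask);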

    >
    > /* Only print hardlockups once. */
    > - if (__this_cpu_read(watchdog_hardlockup_warned))
    > + if (per_cpu(watchdog_hardlockup_warned, cpu))
    > return;
    >

    Otherwise, it looks good to me.

    Best Regards,
    Petr
