Subject: Re: [PATCH] kthread_worker: re-set CPU affinities if CPU come online
Petr,

On Tue, Oct 27 2020 at 17:39, Petr Mladek wrote:
> On Mon 2020-10-26 14:52:13, qiang.zhang@windriver.com wrote:
>> From: Zqiang <qiang.zhang@windriver.com>
>>
>> When a CPU goes offline, a 'kthread_worker' bound to that CPU can
>> run anywhere. When the CPU comes back online, restore the
>> 'kthread_worker' affinity via a cpuhp notifier.
>
> I am not familiar with CPU hotplug notifiers. I rather add Thomas and
> Peter into Cc.

Thanks!

>> +static int kworker_cpu_online(unsigned int cpu, struct hlist_node *node)
>> +{
>> +	struct kthread_worker *worker = hlist_entry(node, struct kthread_worker, cpuhp_node);
>
> The code here looks correct.
>
> JFYI, I was curious why many cpuhp callbacks used hlist_entry_safe().
> But they did not check for NULL. Hence the _safe() variant did
> not really prevent any crash.
>
> It seems that it was cargo-cult programming. cpuhp_invoke_callback()
> uses a plain hlist_for_each(). If I get it correctly, the operations are
> synchronized by cpus_read_lock()/cpus_write_lock() and the _safe variant
> really is not needed.

Correct.
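
For illustration, a minimal sketch of the multi-instance pattern under
discussion (the 'foo' names are invented, not from the patch): because
cpuhp_invoke_callback() walks the instance list under the hotplug lock,
the callback can use plain hlist_entry().

#include <linux/cpuhotplug.h>
#include <linux/list.h>

/* Hypothetical object embedding the per-instance hlist_node. */
struct foo {
	struct hlist_node cpuhp_node;
	/* ... */
};

static enum cpuhp_state foo_online_state;

static int foo_cpu_online(unsigned int cpu, struct hlist_node *node)
{
	/*
	 * The instance list is only modified under cpus_write_lock(),
	 * so plain hlist_entry() is fine; hlist_entry_safe() without a
	 * NULL check would not prevent anything anyway.
	 */
	struct foo *f = hlist_entry(node, struct foo, cpuhp_node);

	/* rebind/reinit 'f' for 'cpu' here ... */
	return 0;
}

static int __init foo_hotplug_init(void)
{
	int ret;

	ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "foo/online",
				      foo_cpu_online, NULL);
	if (ret < 0)
		return ret;
	/* CPUHP_AP_ONLINE_DYN allocates a state; remember it for
	 * cpuhp_state_add_instance()/cpuhp_state_remove_instance(). */
	foo_online_state = ret;
	return 0;
}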

>> +static __init int kthread_worker_hotplug_init(void)
>> +{
>> +	int ret;
>> +
>> +	ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "kthread-worker/online",
>> +				      kworker_cpu_online, NULL);

The dynamic hotplug states run late. What's preventing work from being
queued on such a worker before it is bound to the CPU again?

Nothing at all.

Moving the hotplug state earlier does not help either because the rebind
cannot happen _before_ the CPUHP_AP_ONLINE state, and after that state it
is already too late because interrupts have been reenabled on the
upcoming CPU. Depending on the interrupt routing, an interrupt hitting
the upcoming CPU might queue work before the state is reached. Work might
also be queued via a timer before the rebind happens.
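
To make that concrete, a hypothetical example (the driver names are
invented): an interrupt handler that queues work on such a worker can run
on the upcoming CPU before the late dynamic state has rebound it.

#include <linux/interrupt.h>
#include <linux/kthread.h>

/* Hypothetical device; 'my_dev' and the handler are made up. */
struct my_dev {
	struct kthread_worker *worker;
	struct kthread_work work;
};

static irqreturn_t my_irq_handler(int irq, void *data)
{
	struct my_dev *dev = data;

	/*
	 * If this interrupt is routed to the upcoming CPU, it can fire
	 * any time after CPUHP_AP_ONLINE, i.e. before a late dynamic
	 * state has restored the worker's affinity. The queued work
	 * then runs on an unbound worker.
	 */
	kthread_queue_work(dev->worker, &dev->work);
	return IRQ_HANDLED;
}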

The only current user (powerclamp) has its own hotplug handling and
stops the thread when a CPU goes down and creates a new one when the CPU
comes online. So that's not a problem.
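
Roughly, that powerclamp-style handling looks like this (a sketch with
invented names, not powerclamp's actual code):

#include <linux/cpuhotplug.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(struct kthread_worker *, my_worker);

static int my_cpu_online(unsigned int cpu)
{
	struct kthread_worker *w;

	/* Create a fresh worker bound to the upcoming CPU. */
	w = kthread_create_worker_on_cpu(cpu, 0, "my_worker/%u", cpu);
	if (IS_ERR(w))
		return PTR_ERR(w);
	per_cpu(my_worker, cpu) = w;
	return 0;
}

static int my_cpu_offline(unsigned int cpu)
{
	/* Flushes all pending work and stops the thread. */
	kthread_destroy_worker(per_cpu(my_worker, cpu));
	per_cpu(my_worker, cpu) = NULL;
	return 0;
}

/*
 * Registered with e.g.:
 *   cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "my/online",
 *                     my_cpu_online, my_cpu_offline);
 */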

But in general this _is_ a problem. There is also no mechanism to ensure
that work on a CPU bound worker has been drained before the CPU goes
offline and that work on the outgoing CPU cannot be queued after a
certain point in the hotplug state machine.

CPU bound kernel threads have special properties. You can access per CPU
variables without further protection. This blows up in your face once
the worker thread is unbound after a hotplug operation.
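
For example (a hypothetical work function), this kind of lockless per-CPU
access is only correct while the worker is really bound:

#include <linux/kthread.h>
#include <linux/percpu.h>
#include <linux/types.h>

static DEFINE_PER_CPU(u64, my_counter);

static void my_work_fn(struct kthread_work *work)
{
	/*
	 * Safe on a CPU-bound worker: only this thread, pinned to this
	 * CPU, touches this counter. Once the worker is unbound after a
	 * hotplug operation, it can race with the CPU that owns whatever
	 * counter it happens to land on.
	 */
	__this_cpu_add(my_counter, 1);
}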

So the proposed patch is duct tape and papers over the underlying design
problem.

Either this is fixed in a way which ensures operation on the bound CPU
under all circumstances or it needs to be documented that users have to
have their own hotplug handling similar to what powerclamp does.

Thanks,

tglx
