Subject: Re: [PATCH v2] btrfs: zstd: use spin_lock in timer callback
Nikolay Borisov <nborisov@suse.com> writes:

> On 13.04.22 at 19:03, Schspa Shi wrote:
>> Nikolay Borisov <nborisov@suse.com> writes:
>>
>>> On 11.04.22 at 18:55, Schspa Shi wrote:
>>>> This is an optimization of the fix in commit fee13fe96529 ("btrfs:
>>>> correct zstd workspace manager lock to use spin_lock_bh()").
>>>> The critical region protected by wsm.lock is only accessed from process
>>>> context and softirq context.
>>>> Because the critical section cannot be preempted by another softirq once
>>>> it is already running in softirq context, there is no need to call
>>>> spin_lock_bh(&wsm.lock) to disable softirqs there; spin_lock(&wsm.lock)
>>>> is enough for this situation.
>>>> Changelog:
>>>> v1 -> v2:
>>>> - Change the commit message to make it more readable.
>>>> [1] https://lore.kernel.org/all/20220408181523.92322-1-schspa@gmail.com/
>>>> Signed-off-by: Schspa Shi <schspa@gmail.com>
>>>
>>> Has there been any measurable impact from this change? While it's correct, it
>>> does mean that someone looking at the code would see that in one call site we
>>> use a plain spinlock and in another the _bh version, which is somewhat
>>> inconsistent.
>>>
>> Yes, it may look a little confusing, but it saves a small amount of CPU
>> time, and red_adaptative_timer() in net/sched/sch_red.c has a similar
>> usage.
>>
>>> What's more, I believe this is a noop, since while softirqs are executing
>>> preemptible() would be false due to preempt_count() being non-zero, and in
>>> the bh-disabling code in the spinlock we have:
>>>
>>> /* First entry of a task into a BH disabled section? */
>>> if (!current->softirq_disable_cnt) {
>>> 	if (preemptible()) {
>>> 		local_lock(&softirq_ctrl.lock);
>>> 		/* Required to meet the RCU bottomhalf requirements. */
>>> 		rcu_read_lock();
>>> 	} else {
>>> 		DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
>>> 	}
>>> }
>>>
>>>
>>> In this case we'd hit the else branch.
>> We won't hit the else branch, because current->softirq_disable_cnt
>> won't be zero in the original case:
>>
>> __do_softirq(void)
>>   softirq_handle_begin(void)
>>     __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
>>
>> current->softirq_disable_cnt will be > 0 at this point.
>
> That's only relevant on CONFIG_PREEMPT_RT though; on usual kernels
> softirq_handle_begin is empty. Furthermore, in the non-PREEMPT_RT case,
> since preemptible() always returns false there, we never increment
> softirq_disable_cnt even in the __do_softirq path. So if anything this
> change is only beneficial (and theoretically at that) in PREEMPT_RT
> scenarios.
>
In either case, __local_bh_disable_ip adds to the preempt count or does other
extra work. For CONFIG_PREEMPT_RT, as we have discussed, the change is correct
and brings some benefit.

In the case of CONFIG_TRACE_IRQFLAGS:

#ifdef CONFIG_TRACE_IRQFLAGS
void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
{
	unsigned long flags;

	WARN_ON_ONCE(in_hardirq());

	raw_local_irq_save(flags);
	/*
	 * The preempt tracer hooks into preempt_count_add and will break
	 * lockdep because it calls back into lockdep after SOFTIRQ_OFFSET
	 * is set and before current->softirq_enabled is cleared.
	 * We must manually increment preempt_count here and manually
	 * call the trace_preempt_off later.
	 */
	__preempt_count_add(cnt);
	/*
	 * Were softirqs turned off above:
	 */
	if (softirq_count() == (cnt & SOFTIRQ_MASK))
		lockdep_softirqs_off(ip);
	raw_local_irq_restore(flags);

	if (preempt_count() == cnt) {
#ifdef CONFIG_DEBUG_PREEMPT
		current->preempt_disable_ip = get_lock_parent_ip();
#endif
		trace_preempt_off(CALLER_ADDR0, get_lock_parent_ip());
	}
}
EXPORT_SYMBOL(__local_bh_disable_ip);
#endif /* CONFIG_TRACE_IRQFLAGS */

There is also __preempt_count_add(cnt) and a local IRQ save/restore, which
slows the system down.

In the other case (usual kernels):

#if defined(CONFIG_PREEMPT_RT) || defined(CONFIG_TRACE_IRQFLAGS)
extern void __local_bh_disable_ip(unsigned long ip, unsigned int cnt);
#else
static __always_inline void __local_bh_disable_ip(unsigned long ip,
						  unsigned int cnt)
{
	preempt_count_add(cnt);
	barrier();
}
#endif

There is still a preempt_count_add(cnt), which is useless in the timer's
callback.
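
To make that concrete, here is a minimal sketch (my own illustration, not
kernel code; the lock and function names are made up) of why the _bh variant
is redundant inside a timer callback:

#include <linux/bug.h>
#include <linux/preempt.h>
#include <linux/spinlock.h>
#include <linux/timer.h>

/* Hypothetical lock shared between process context and the timer. */
static DEFINE_SPINLOCK(example_lock);

static void example_timer_fn(struct timer_list *t)
{
	/*
	 * Timer callbacks run from the TIMER_SOFTIRQ handler, so we are
	 * already in softirq context and another softirq cannot preempt
	 * us on this CPU.
	 */
	WARN_ON_ONCE(!in_softirq());

	/*
	 * spin_lock_bh() would only wrap this in an extra
	 * preempt_count_add()/preempt_count_sub() pair (or the heavier
	 * RT/TRACE_IRQFLAGS work shown above); plain spin_lock() gives
	 * the same protection here without that bookkeeping.
	 */
	spin_lock(&example_lock);
	/* ... touch the shared state ... */
	spin_unlock(&example_lock);
}

Process-context users of the same lock still need spin_lock_bh(), which is
exactly the asymmetry you pointed out above.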

In summary:
In all cases there is a benefit to replacing spin_lock_bh with spin_lock in
the timer's callback, as sketched below.
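
For reference, a minimal sketch of the kind of change under discussion
(simplified and abbreviated by me; the real zstd_reclaim_timer_fn in
fs/btrfs/zstd.c does the actual workspace reclaim between the lock and
unlock calls):

#include <linux/spinlock.h>
#include <linux/timer.h>

/* Abbreviated stand-in for the zstd workspace manager in fs/btrfs/zstd.c. */
static struct {
	spinlock_t lock;
	/* ... LRU lists of idle workspaces, reclaim timer, etc. ... */
} wsm;

static void zstd_reclaim_timer_fn(struct timer_list *timer)
{
	/* The timer callback already runs in softirq context. */
	spin_lock(&wsm.lock);		/* was: spin_lock_bh(&wsm.lock) */

	/* ... free workspaces that have been idle for too long ... */

	spin_unlock(&wsm.lock);		/* was: spin_unlock_bh(&wsm.lock) */
}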

>> ......
>> zstd_reclaim_timer_fn(struct timer_list *timer)
>>   spin_lock_bh(&wsm.lock);
>>     __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
>>       if (!current->softirq_disable_cnt) {
>>           // this if branch won't be hit
>>       }
>> softirq_handle_end();
>>
>> In this case, the "__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);"
>> does nothing useful: it only increases the softirq disable depth, which is
>> decreased again by "__local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);".
>> So it's safe to replace spin_lock_bh with spin_lock in a timer
>> callback function.
>>
>> For ksoftirqd, it's all the same.
>>
