 
Subject: Re: [PATCH v2] btrfs: zstd: use spin_lock in timer callback
From: Nikolay Borisov <nborisov@suse.com>


On 13.04.22 at 19:03, Schspa Shi wrote:
> Nikolay Borisov <nborisov@suse.com> writes:
>
>> On 11.04.22 at 18:55, Schspa Shi wrote:
>>> This is an optimization on top of commit fee13fe96529 ("btrfs:
>>> correct zstd workspace manager lock to use spin_lock_bh()").
>>> The critical region for wsm.lock is only accessed from process
>>> context and softirq context.
>>> Because a softirq will not be preempted by another softirq on the
>>> same CPU, the critical section in the timer callback cannot be
>>> re-entered; there is no need to call spin_lock_bh(&wsm.lock) to
>>> disable softirqs there, spin_lock(&wsm.lock) is enough for this
>>> situation.
>>> Changelog:
>>> v1 -> v2:
>>> - Change the commit message to make it more readable.
>>> [1] https://lore.kernel.org/all/20220408181523.92322-1-schspa@gmail.com/
>>> Signed-off-by: Schspa Shi <schspa@gmail.com>
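
For reference, the change under discussion is essentially the following
(a sketch of the diff, not the verbatim hunks; the full patch is in the
linked posting):

	--- a/fs/btrfs/zstd.c
	+++ b/fs/btrfs/zstd.c
	@@ static void zstd_reclaim_timer_fn(struct timer_list *timer)
	-	spin_lock_bh(&wsm.lock);
	+	spin_lock(&wsm.lock);
	 	...
	-	spin_unlock_bh(&wsm.lock);
	+	spin_unlock(&wsm.lock);
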
>>
>> Has there been any measurable impact from this change? While it's
>> correct, it does mean that someone looking at the code would see a
>> plain spin_lock in one call site and a _bh version in another, which
>> is somewhat inconsistent.
>>
> Yes, it may look a little confusing, but it saves a small amount of
> CPU time, and red_adaptative_timer() in net/sched/sch_red.c has a
> similar usage.
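
For reference, that callback takes the qdisc root lock with a plain
spin_lock(); roughly, from net/sched/sch_red.c:

	static inline void red_adaptative_timer(struct timer_list *t)
	{
		struct red_sched_data *q = from_timer(q, t, adapt_timer);
		struct Qdisc *sch = q->sch;
		spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch));

		/* Timer callback: already in softirq context, no _bh needed */
		spin_lock(root_lock);
		red_adaptative_algo(&q->parms, &q->vars);
		mod_timer(&q->adapt_timer, jiffies + HZ/2);
		spin_unlock(root_lock);
	}
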
>
>> What's more, I believe this is a noop, since while softirqs are
>> executing preemptible() would be false due to preempt_count() being
>> non-0, and in the bh-disabling code of the spinlock we have:
>>
>> 	/* First entry of a task into a BH disabled section? */
>> 	if (!current->softirq_disable_cnt) {
>> 		if (preemptible()) {
>> 			local_lock(&softirq_ctrl.lock);
>> 			/* Required to meet the RCU bottomhalf requirements. */
>> 			rcu_read_lock();
>> 		} else {
>> 			DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
>> 		}
>> 	}
>>
>>
>> In this case we'd hit the else branch.
>
> We won't hit the else branch, because current->softirq_disable_cnt
> won't be zero in the original case:
>
> __do_softirq(void)
>   softirq_handle_begin(void)
>     __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
>     /* current->softirq_disable_cnt will be > 0 from this point on */
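
Note that softirq_handle_begin() is defined per configuration in
kernel/softirq.c; on PREEMPT_RT it is empty and the BH accounting
happens elsewhere (e.g. in ksoftirqd_run_begin()):

	#ifdef CONFIG_PREEMPT_RT
	static inline void softirq_handle_begin(void) { }
	#else
	static inline void softirq_handle_begin(void)
	{
		__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
	}
	#endif
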

That's only relevant on CONFIG_PREEMPT_RT though; on usual kernels
softirq_handle_begin() does none of that softirq_disable_cnt accounting.
Furthermore, in the non-PREEMPT_RT case, if preemptible() always returns
false, then even in the __do_softirq() path we'd never increment
softirq_disable_cnt. So if anything, this change is only beneficial
(theoretically at that) in PREEMPT_RT scenarios.
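
For comparison, the non-PREEMPT_RT __local_bh_disable_ip() (in
include/linux/bottom_half.h, when CONFIG_TRACE_IRQFLAGS is off) is just
a preempt count bump and never touches softirq_disable_cnt:

	static __always_inline void __local_bh_disable_ip(unsigned long ip,
							  unsigned int cnt)
	{
		preempt_count_add(cnt);
		barrier();
	}
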

> ......
> zstd_reclaim_timer_fn(struct timer_list *timer)
>   spin_lock_bh(&wsm.lock);
>     __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
>     if (!current->softirq_disable_cnt) {
>       /* this branch won't be hit */
>     }
>
> softirq_handle_end();
>
> In this case, the __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET) won't
> do anything useful: it only increases the softirq disable depth, and
> __local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET) decreases it again.
>
> So it's safe to replace spin_lock_bh with spin_lock in a timer
> callback function.
>
>
> For ksoftirqd, it's all the same.
>
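
To make the argument concrete: softirqs do not nest on a CPU, so a timer
callback can never be interrupted by another softirq trying to retake
the same lock; only the process-context side needs the _bh variant to
keep the timer from deadlocking against it. A minimal, self-contained
sketch of that pattern (hypothetical demo module, not code from the
patch):

	#include <linux/jiffies.h>
	#include <linux/module.h>
	#include <linux/spinlock.h>
	#include <linux/timer.h>

	static DEFINE_SPINLOCK(demo_lock);
	static struct timer_list demo_timer;
	static unsigned long demo_counter;

	/* Softirq context: softirqs don't nest, plain spin_lock() suffices. */
	static void demo_timer_fn(struct timer_list *t)
	{
		spin_lock(&demo_lock);
		demo_counter++;
		spin_unlock(&demo_lock);

		mod_timer(&demo_timer, jiffies + HZ);
	}

	/*
	 * Process context: disable BHs so the timer can't fire and spin on
	 * demo_lock on this CPU while we hold it.
	 */
	static void demo_reset(void)
	{
		spin_lock_bh(&demo_lock);
		demo_counter = 0;
		spin_unlock_bh(&demo_lock);
	}

	static int __init demo_init(void)
	{
		timer_setup(&demo_timer, demo_timer_fn, 0);
		mod_timer(&demo_timer, jiffies + HZ);
		demo_reset();
		return 0;
	}

	static void __exit demo_exit(void)
	{
		del_timer_sync(&demo_timer);
	}

	module_init(demo_init);
	module_exit(demo_exit);
	MODULE_LICENSE("GPL");
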
