Subject: Re: [PATCH v6 1/4] rcu: Make call_rcu() lazy to save power


> On Sep 26, 2022, at 1:33 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
>
> On Mon, Sep 26, 2022 at 03:04:38PM +0000, Joel Fernandes wrote:
>>> On Mon, Sep 26, 2022 at 12:00:45AM +0200, Frederic Weisbecker wrote:
>>> On Sat, Sep 24, 2022 at 09:00:39PM -0400, Joel Fernandes wrote:
>>>>
>>>>
>>>>> On Sep 24, 2022, at 7:28 PM, Joel Fernandes <joel@joelfernandes.org> wrote:
>>>>>
>>>>> Hi Frederic, thanks for the response. Replies
>>>>> below, courtesy of fruit company’s device:
>>>>>
>>>>>>> On Sep 24, 2022, at 6:46 PM, Frederic Weisbecker <frederic@kernel.org> wrote:
>>>>>>>
>>>>>>> On Thu, Sep 22, 2022 at 10:01:01PM +0000, Joel Fernandes (Google) wrote:
>>>>>>> @@ -3902,7 +3939,11 @@ static void rcu_barrier_entrain(struct rcu_data *rdp)
>>>>>>> rdp->barrier_head.func = rcu_barrier_callback;
>>>>>>> debug_rcu_head_queue(&rdp->barrier_head);
>>>>>>> rcu_nocb_lock(rdp);
>>>>>>> - WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies));
>>>>>>> + /*
>>>>>>> + * Flush the bypass list, but also wake up the GP thread as otherwise
>>>>>>> + * bypass/lazy CBs may not be noticed, which can cause really long delays!
>>>>>>> + */
>>>>>>> + WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, FLUSH_BP_WAKE));
>>>>>>
>>>>>> This fixes an issue that goes beyond lazy implementation. It should be done
>>>>>> in a separate patch, handling rcu_segcblist_entrain() as well, with "Fixes: " tag.
>>>>>
>>>>> I wanted to do that, however after discussing with
>>>>> Paul I thought of making this optimization only for
>>>>> lazy bypass CBs. That makes it directly related to
>>>>> this patch, since the laziness notion is first
>>>>> introduced here. On the other hand, I could make
>>>>> this change in a later patch, since we are not
>>>>> fully bisectable anyway courtesy of the last
>>>>> patch (which is not really an issue if the CONFIG
>>>>> is kept off during someone’s bisection).
>>>>
>>>> Or are we saying it’s worth doing the wake-up for rcu_barrier() even for
>>>> regular bypass CBs? That’d save 2 jiffies on rcu_barrier(). If we agree it’s
>>>> needed, then yes, splitting the patch makes sense.
>>>>
>>>> Please let me know your opinions, thanks,
>>>>
>>>> - Joel
>>>
>>> Sure, I mean since we are fixing the buggy rcu_barrier_entrain() anyway, let's
>>> just fix bypass as well. Such as in the following (untested):
>>
>> Got it. This sounds good to me, and will simplify the code a bit more for sure.
>>
>> I guess a question for Paul: are you OK with rcu_barrier() causing wake-ups
>> if the bypass list has any non-lazy CBs as well? That should be OK, IMO.
>
> In theory, I am OK with it. In practice, you are the guys with the
> hardware that can measure power consumption, not me! ;-)

OK, I’ll do it this way and add Frederic’s Suggested-by tag. About power, I have already measured it and could not find any effect on power consumption.

Thanks!

- Joel



>
>>> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
>>> index b39e97175a9e..a0df964abb0e 100644
>>> --- a/kernel/rcu/tree.c
>>> +++ b/kernel/rcu/tree.c
>>> @@ -3834,6 +3834,8 @@ static void rcu_barrier_entrain(struct rcu_data *rdp)
>>> {
>>> unsigned long gseq = READ_ONCE(rcu_state.barrier_sequence);
>>> unsigned long lseq = READ_ONCE(rdp->barrier_seq_snap);
>>> + bool wake_nocb = false;
>>> + bool was_alldone = false;
>>>
>>> lockdep_assert_held(&rcu_state.barrier_lock);
>>> if (rcu_seq_state(lseq) || !rcu_seq_state(gseq) || rcu_seq_ctr(lseq) != rcu_seq_ctr(gseq))
>>> @@ -3842,6 +3844,8 @@ static void rcu_barrier_entrain(struct rcu_data *rdp)
>>> rdp->barrier_head.func = rcu_barrier_callback;
>>> debug_rcu_head_queue(&rdp->barrier_head);
>>> rcu_nocb_lock(rdp);
>>> + if (rcu_rdp_is_offloaded(rdp) && !rcu_segcblist_pend_cbs(&rdp->cblist))
>>> + was_alldone = true;
>>> WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies));
>>> if (rcu_segcblist_entrain(&rdp->cblist, &rdp->barrier_head)) {
>>> atomic_inc(&rcu_state.barrier_cpu_count);
>>> @@ -3849,7 +3853,12 @@ static void rcu_barrier_entrain(struct rcu_data *rdp)
>>> debug_rcu_head_unqueue(&rdp->barrier_head);
>>> rcu_barrier_trace(TPS("IRQNQ"), -1, rcu_state.barrier_sequence);
>>> }
>>> + if (was_alldone && rcu_segcblist_pend_cbs(&rdp->cblist))
>>> + wake_nocb = true;
>>> rcu_nocb_unlock(rdp);
>>> + if (wake_nocb)
>>> + wake_nocb_gp(rdp, false);
>>> +
>>
>> Thanks for the code snippet. I like how you are checking whether the bypass
>> list is empty, without actually checking it ;-)
>
> That certainly is consistent with the RCU philosophy. :-)
>
> Thanx, Paul
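
For readers following the snippet above, here is a stand-alone sketch of the
wake-up pattern it uses. This is plain C, not kernel code: fake_rdp,
flush_bypass(), entrain_barrier_cb() and wake_gp_kthread() are made-up
stand-ins for the real rcu_data/rcu_segcblist machinery, kept only to show
the "remember whether the list was empty before, wake only if it just became
non-empty" control flow.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for the per-CPU rcu_data callback bookkeeping. */
struct fake_rdp {
	int n_pending;  /* callbacks already visible to the GP kthread */
	int n_bypass;   /* callbacks still sitting on the bypass list  */
};

static bool pend_cbs(const struct fake_rdp *rdp)
{
	return rdp->n_pending > 0;
}

static void flush_bypass(struct fake_rdp *rdp)
{
	/* Move any bypass callbacks onto the main pending list. */
	rdp->n_pending += rdp->n_bypass;
	rdp->n_bypass = 0;
}

static void entrain_barrier_cb(struct fake_rdp *rdp)
{
	/* The barrier callback itself also becomes a pending callback. */
	rdp->n_pending++;
}

static void wake_gp_kthread(void)
{
	puts("wake the nocb GP kthread so it notices the new callbacks");
}

/*
 * Mirrors the flow of the rcu_barrier_entrain() hunk above: a wakeup is
 * needed only when the pending list goes from empty to non-empty here,
 * because otherwise the GP kthread already knows it has work to do.
 */
static void barrier_entrain(struct fake_rdp *rdp)
{
	bool was_alldone = !pend_cbs(rdp);  /* empty before we touch it? */

	flush_bypass(rdp);
	entrain_barrier_cb(rdp);

	if (was_alldone && pend_cbs(rdp))
		wake_gp_kthread();  /* after rcu_nocb_unlock() in the real code */
}

int main(void)
{
	struct fake_rdp idle_cpu = { .n_pending = 0, .n_bypass = 3 };
	struct fake_rdp busy_cpu = { .n_pending = 5, .n_bypass = 0 };

	barrier_entrain(&idle_cpu);  /* wakes: list was empty beforehand  */
	barrier_entrain(&busy_cpu);  /* no wake: GP kthread already busy  */
	return 0;
}

This also shows the point Joel highlights: the bypass list is never inspected
directly; whether it (or the entrained barrier callback) added work is
inferred from the main list going from empty to non-empty. In Frederic's
snippet the actual wake_nocb_gp() call is additionally deferred until after
rcu_nocb_unlock(), presumably so the wakeup is not issued under the nocb lock.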
