From: Ankur Arora
Subject: Re: [RFC PATCH 68/86] treewide: mm: remove cond_resched()
Date: 8 Nov 2023

Yosry Ahmed <yosryahmed@google.com> writes:

> On Tue, Nov 7, 2023 at 11:49 PM Vlastimil Babka <vbabka@suse.cz> wrote:
>>
>> On 11/8/23 02:28, Sergey Senozhatsky wrote:
>> > On (23/11/07 15:08), Ankur Arora wrote:
>> > [..]
>> >> +++ b/mm/zsmalloc.c
>> >> @@ -2029,7 +2029,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>> >>  			dst_zspage = NULL;
>> >>
>> >>  			spin_unlock(&pool->lock);
>> >> -			cond_resched();
>> >>  			spin_lock(&pool->lock);
>> >>  		}
>> >>  	}
>> >
>> > I'd personally prefer to have a comment explaining why we do that
>> > spin_unlock/spin_lock sequence, which may look confusing to people.
>>
>> Wonder if it would make sense to have a lock operation that does the
>> unlock/lock as a self-documenting thing, and maybe could also be optimized
>> to first check if there's actually a need for it (because TIF_NEED_RESCHED
>> is set or the lock is contended).
>
> +1, I was going to suggest this as well. It could be extended to other
> locking types that disable preemption, like RCU. Something like
> spin_lock_relax().

Good point. We actually do have exactly that: cond_resched_lock(). (And
similar RW lock variants.)
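
For the __zs_compact() hunk above, that would look something like this
(untested, just to show the shape):

			dst_zspage = NULL;

			/*
			 * Drops pool->lock only when the lock is contended
			 * or a reschedule is pending, and reacquires it
			 * before returning.
			 */
			cond_resched_lock(&pool->lock);
		}
	}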

>> > Maybe would make sense to put a nice comment in all similar cases.
>> > For instance:
>> >
>> > rcu_read_unlock();
>> > - cond_resched();
>> > rcu_read_lock();

And we have this construct as well: cond_resched_rcu().
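IIRC cond_resched_rcu() expands to exactly that unlock/cond_resched/lock
sequence when RCU read-side critical sections aren't preemptible (and to
nothing when they are), so a hunk like the one above would just become:

	-	rcu_read_unlock();
	-	cond_resched();
	-	rcu_read_lock();
	+	cond_resched_rcu();

with the usual caveat that the caller can't rely on RCU-protected
pointers staying valid across the call.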

I can switch to the alternatives when I send out the next version of
the series.

Thanks

--
ankur
