Subject: Re: [RFC PATCH 07/11] sched: Add proxy execution
On 10/25/22 07:19, Joel Fernandes wrote:
>
>
> > On Oct 24, 2022, at 6:33 PM, Qais Yousef <qyousef@layalina.io> wrote:
> >
> > On 10/17/22 09:26, Peter Zijlstra wrote:
> >
> >> Additionally, the highest priority waiter will get the lock next.
> >
> > True for RT. But for CFS, priority is share, and there is no guarantee
> > the 'highest priority' task will run to grab the lock as soon as it is
> > released, no?
>
> But the mutex lock owner should have done a wake_up in the mutex unlock path,

I admit I'm speculating a bit about how mutexes/futexes will behave.

I thought we could still end up with a situation where all (or some) waiters
wake up and race to hold the lock. Which, now that I think about it, would be
bad news for RT on SMP systems, since a lower priority task on one CPU could
grab the lock before a higher priority one on another CPU. So maybe this
situation can never happen.

> which is arranged in FIFO order, if I am not mistaken. Subsequently the
> scheduler will at least get a chance to see if the thing that is waiting for
> the lock is of higher priority, at the next preemption point.

Right. If it's always ticketed, then we guarantee the order in which the lock
is held. But the order in which the lock is requested/held is not the same as
priority order. And I am not sure all variations of locks/configs will exhibit
the same behavior, TBH.
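
For reference, my (possibly stale) reading of kernel/locking/mutex.c is that
waiters are queued with list_add_tail() and the unlock slowpath wakes whoever
sits at the head of the wait list. Paraphrased from memory, not verbatim, so
please check against the actual tree:

	/* __mutex_unlock_slowpath(), roughly: */
	raw_spin_lock(&lock->wait_lock);
	if (!list_empty(&lock->wait_list)) {
		/* Waiters were list_add_tail()'d, so the first entry
		 * is the longest waiting task, i.e. FIFO. */
		struct mutex_waiter *waiter =
			list_first_entry(&lock->wait_list,
					 struct mutex_waiter, list);

		wake_q_add(&wake_q, waiter->task);
	}
	raw_spin_unlock(&lock->wait_lock);
	wake_up_q(&wake_q);

So whoever asked first is woken first, regardless of prio.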

Again, I haven't looked closely at the patches/code, but I've only seen the
notion of priority in rt_mutex; probably something similar was done in the PE
patches.

__waiter_prio() in rt_mutex.c returns DEFAULT_PRIO for !rt_prio(). So it seems
to me that for CFS the prio will not influence the top_waiter, and CFS tasks
will just be queued at the bottom in order. For rt_mutex, that is; which
apparently is obsolete from the PE perspective.
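
For reference, this is the helper I was looking at. Quoting rtmutex.c from
memory, so double-check against the actual tree:

	static __always_inline int __waiter_prio(struct task_struct *task)
	{
		int prio = task->prio;

		/* Any non-RT (i.e. CFS) waiter collapses to the same
		 * prio, so nice levels don't order the waiters. */
		if (!rt_prio(prio))
			return DEFAULT_PRIO;

		return prio;
	}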

I've probably been speculating based on that rt_mutex behavior. Apologies if
I jumped the gun.

> If it did not get to run, I don’t think that’s an issue — it was not highest
> priority as far as the scheduler is concerned. No?

For CFS, priority === share; but maybe you're referring to some implementation
detail I missed here. IOW, for CFS the 'highest priority' task is the most
starved one, which is influenced by nice, but that doesn't guarantee who gets
to run next?
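
To put a number on "priority === share": nice levels only map to load weights;
sched_prio_to_weight[] in kernel/sched/core.c has (from memory, please verify)
71755 for nice -19 and 1024 for nice 0. A toy userspace calculation of the
long-run share, nothing more:

	#include <stdio.h>

	int main(void)
	{
		/* Weights from sched_prio_to_weight[] (from memory):
		 * nice -19 -> 71755, nice 0 -> 1024. */
		const double w_m19 = 71755.0;
		const double w_0   = 1024.0;

		/* One nice -19 task sharing a CPU with three nice 0
		 * tasks gets ~95.9% of the CPU over time. */
		printf("nice -19 share: %.1f%%\n",
		       100.0 * w_m19 / (w_m19 + 3.0 * w_0));
		return 0;
	}

So the -19 task dominates over time, but nothing in that weight says it is the
one to run next at any given instant.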

> Steve was teaching me some of this code recently, he could chime in :)

Would be good for the rest of us to get enlightened too :-)

>
> > For example I can envisage:
> >
> >   +--------+----------------+--------+--------
> >   |   p0   |       p1       |   p0   |   p1
> >   +--------+----------------+--------+--------
> >   ^        ^                ^        ^        ^
> >   |        |                |        |        |
> >   holds    tries to         releases holds    Fails to hold the lock
> >   lock     hold lock,       lock     lock     and proxy execs for
> >            proxy execs               again    p0 again
> >            for p0
> >
> > The notion of priority in CFS as it stands doesn't help in providing any
> > guarantees about who will be able to hold the lock next. I haven't looked
> > at the patches closely, so this might be handled already. I think the
> > situation will be worse if there are more tasks contending for the lock.
> > Priority will influence the chances, but who ends up holding the lock
> > next is effectively random, AFAICT.
>
> The wake up during unlock is FIFO order of waiters though. So that’s
> deterministic.

Deterministic in respecting the order in which the lock was requested, but not
deterministic in terms of the priority of the tasks in that order. IOW, that
order is based on time rather than priority.

Say we have 4 CFS tasks: 3 at nice 0 and one at nice -19. If the 3 tasks
@prio_0 try to hold the lock before the one @prio_-19, then the FIFO order
means the highest priority task will be the last to get the lock. And the
other tasks - depending on how long they hold the lock - could cause the
@prio_-19 task to run as a proxy for multiple slices first. That is the
potential problem I was trying to highlight. The @prio_-19 task will not
appear to have been starved for those slices, since it runs; but only as
a proxy.
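
A contrived toy model of that scenario (plain userspace C, nothing from the
actual patches; the task names and the strict-FIFO handoff are my
assumptions):

	#include <stdio.h>

	struct task { const char *name; int nice; };

	int main(void)
	{
		/* Arrival order at the lock; the nice -19 task shows
		 * up last, and FIFO handoff serves it last. */
		struct task wait_fifo[] = {
			{ "A", 0 }, { "B", 0 }, { "C", 0 }, { "D", -19 },
		};

		for (int i = 0; i < 4; i++)
			printf("handoff %d: %s (nice %d) gets the lock\n",
			       i + 1, wait_fifo[i].name, wait_fifo[i].nice);

		/* D (nice -19) gets the lock last; under PE it may have
		 * spent the preceding handoffs running as a proxy for
		 * A, B and C. */
		return 0;
	}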

Again, maybe there's a devilish detail in the patches that addresses that and
it's not really a problem.

>
> > I had a conversation once with an app developer who came from the iOS
> > world, and they were confused why their higher priority task was not
> > preempting the lower priority one when they ported it to Android.
> >
> > I wonder sometimes if we need to introduce a true notion of priority for
> > CFS. I don't see why an app developer who would like to create 3 tasks
> > and give them strict priority order relative to each other can't do that.
> > At the moment they have little control over execution order.
>
> I want to talk more about this with you, I am actually working on something
> similar. Let’s talk ;)

Oh. Yes, let us talk! :-)


Cheers

--
Qais Yousef
