 
Subject: Re: Unbounded priority inversion while assigning tasks into cgroups.
From: Ronny Meeus
Date: 2021-10-29
On Thu, 28 Oct 2021 at 10:46, Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
>
> On 2021-10-27 22:54:33 [+0200], Ronny Meeus wrote:
> > > From looking at the percpu_rw_semaphore implementation, no new readers
> > > are allowed as long as there is a writer pending. The writer
> > > (unfortunately) has to wait until all readers are out. But then I doubt
> > > that it takes up to two minutes for all existing readers to leave the
> > > critical section.
> > >
> >
> > The readers can be running at low priority while other threads with
> > medium priority consume the complete CPU. So the low-prio readers are
> > just waiting to be scheduled, and by that they also block the high-prio
> > thread.
>
> Hmm. So you have, say, 5 readers stuck in the RW semaphore while
> preempted by medium tasks, and the high-prio writer is then stuck on the
> semaphore, waiting for the MED tasks to finish so the low-prio threads
> can leave the critical section?

Correct. Note that one thread stuck on the read side is already
sufficient to get into this state.
Most of the heavy processing is done at medium priority and the
background tasks run at low priority.
Since the background tasks are implemented as scripts, a lot of
read-side accesses are done at low prio.
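To make the scenario concrete, here is a minimal userspace sketch of the
inversion (my own illustration, not our production code; the cgroup path
/sys/fs/cgroup/test/cgroup.procs and the priority values are made up). A
low-prio thread forks in a loop, so it repeatedly takes the read side of
cgroup_threadgroup_rwsem; a medium-prio hog preempts it; a high-prio
thread writes a pid into cgroup.procs and thus blocks on the write side:

#include <fcntl.h>
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* set a SCHED_FIFO priority for the calling thread */
static void set_fifo(int prio)
{
        struct sched_param sp = { .sched_priority = prio };

        if (sched_setscheduler(0, SCHED_FIFO, &sp))
                perror("sched_setscheduler");
}

/* read side: fork() takes cgroup_threadgroup_rwsem for reading */
static void *low_prio_forker(void *arg)
{
        set_fifo(10);
        for (;;) {
                pid_t pid = fork();

                if (pid == 0)
                        _exit(0);
                if (pid > 0)
                        waitpid(pid, NULL, 0);
        }
        return NULL;
}

/* medium prio: burns CPU so the low-prio forker never runs */
static void *medium_prio_hog(void *arg)
{
        set_fifo(50);
        for (;;)
                ;
        return NULL;
}

/* write side: writing a pid to cgroup.procs takes the rwsem for writing */
static void *high_prio_writer(void *arg)
{
        char buf[16];

        set_fifo(80);
        snprintf(buf, sizeof(buf), "%d", (int)getpid());
        for (;;) {
                int fd = open("/sys/fs/cgroup/test/cgroup.procs", O_WRONLY);

                if (fd >= 0) {
                        /* stalls while a preempted reader holds the lock */
                        write(fd, buf, strlen(buf));
                        close(fd);
                }
                sleep(1);
        }
        return NULL;
}

int main(void)
{
        pthread_t t[3];

        pthread_create(&t[0], NULL, low_prio_forker, NULL);
        pthread_create(&t[1], NULL, medium_prio_hog, NULL);
        pthread_create(&t[2], NULL, high_prio_writer, NULL);
        pthread_join(t[0], NULL);
        return 0;
}

Pinned to one core (taskset -c 0) and run as root, the writer's write()
can stall for as long as the hog runs, because the preempted forker
cannot leave the critical section and receives no priority boost.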

> > > Looking at v4.9.84, at least the RT implementation of rw_semaphore
> > > allows new readers if a writer is pending. So this could be the
> > > culprit, as you would have to wait until all readers are gone and the
> > > writer needs to grab the lock before another reader shows up. But this
> > > shouldn't be the case for the generic implementation, where new readers
> > > should wait until the writer got its chance.
> > >
> >
> > So what do you suggest for the v4.9 kernel as a solution? Move to the RT
> > version of the rw_semaphore and hope for the best?
>
> I don't think it will help. Based on what you wrote above it appears
> that the problem is that the readers are preempted and are not leaving
> the critical section soon enough.
>
> How many CPUs do you have? Maybe using an rtmutex here and allowing only
> one reader at a time isn't that bad in your case. With one CPU, for
> instance, there isn't much room for multiple readers anyway, I guess.
>

The current system has 1 CPU with 2 cores, but we also have devices
with 14 cores, on which the impact will of course be bigger.
Note that with the rtmutex solution all accesses (read + write) would
be serialized.
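Just to sketch what that would mean (my own illustration, loosely modeled
on the v4.9 threadgroup_change_begin()/end() helpers, not an actual
patch): both the read and the write side would funnel through one
rt_mutex, so a low-prio task holding it gets boosted to a blocked
writer's priority instead of waiting behind the medium-prio tasks:

#include <linux/rtmutex.h>

/* hypothetical replacement for cgroup_threadgroup_rwsem */
static DEFINE_RT_MUTEX(threadgroup_lock);

/* was: percpu_down_read(&cgroup_threadgroup_rwsem) */
static inline void threadgroup_change_begin(void)
{
        /* PI: a blocked high-prio writer boosts the current holder */
        rt_mutex_lock(&threadgroup_lock);
}

/* was: percpu_up_read(&cgroup_threadgroup_rwsem) */
static inline void threadgroup_change_end(void)
{
        rt_mutex_unlock(&threadgroup_lock);
}

/* the write side (cgroup.procs attach) would take the very same mutex */

The price is exactly the serialization noted above: concurrent forks
contend on that one mutex, which is probably acceptable on the 2-core
box but will hurt on the 14-core ones.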

I wonder why other people do not see this issue, since it is present in
all kernel versions. Especially in systems with strict deadlines, I
consider this a serious issue.

Ronny

> Sebastian
