Subject: Re: [RFC 0/2] srcu: Remove pre-flip memory barrier
On Tue, Dec 20, 2022 at 01:34:43PM +0100, Frederic Weisbecker wrote:
> On Mon, Dec 19, 2022 at 11:07:17PM -0500, Joel Fernandes wrote:
> > On Sun, Dec 18, 2022 at 2:13 PM Joel Fernandes (Google)
> > <joel@joelfernandes.org> wrote:
> > >
> > > Hello, I believe the pre-flip memory barrier is not required. The only reason I
> > > can give for removing it, other than the possibility that it is unnecessary, is to
> > > avoid extra code that does not help. However, since we are issuing a full
> > > memory barrier after the flip, I cannot say that it hurts to keep it anyway.
> > >
> > > For this reason, please consider these patches as "informational" rather than a
> > > "please merge". :-) Though, feel free to consider merging if you agree!
> > >
> > > All SRCU scenarios pass with these, with 6 hours of testing.
> > >
> > > thanks,
> > >
> > > - Joel
> > >
> > > Joel Fernandes (Google) (2):
> > >   srcu: Remove comment about prior read lock counts
> > >   srcu: Remove memory barrier "E" as it is not required
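
For reference, barrier "E" in the second patch is the smp_mb() just before the
counter flip in srcu_flip(). A rough sketch of that function, paraphrased from
the kernel/rcu/srcutree.c of that era (comments abbreviated, details may differ
slightly):

static void srcu_flip(struct srcu_struct *ssp)
{
	/*
	 * "E": if this updater saw a given reader's lock-count increment,
	 * that reader was using the old value of ->srcu_idx.
	 */
	smp_mb(); /* E */  /* Pairs with B and C. */

	WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);

	/*
	 * "D": if the updater misses a reader's unlock-count increment,
	 * that reader's next srcu_read_lock() must see the new ->srcu_idx.
	 */
	smp_mb(); /* D */  /* Pairs with C. */
}

The second patch removes that first smp_mb(); barrier "D" after the flip stays.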
> >
> > And litmus tests confirm that "E" does not really do what the comments
> > say, PTAL:
> > Test 1:
> > C mbe
> > (*
> >  * Result: sometimes
> >  * Does previous scan see old reader's lock count, if a new reader saw
> >  * the new srcu_idx?
> >  *)
> >
> > {}
> >
> > P0(int *lockcount, int *srcu_idx) // updater
> > {
> > 	int r0;
> > 	r0 = READ_ONCE(*lockcount);
> > 	smp_mb(); // E
> > 	WRITE_ONCE(*srcu_idx, 1);
> > }
> >
> > P1(int *lockcount, int *srcu_idx) // reader
> > {
> > 	int r0;
> > 	WRITE_ONCE(*lockcount, 1); // previous reader
> > 	smp_mb(); // B+C
> > 	r0 = READ_ONCE(*srcu_idx); // new reader
> > }
> > exists (0:r0=0 /\ 1:r0=1) (* Bad outcome. *)
> >
> > Test 2:
> > C mbe2
> >
> > (*
> >  * Result: sometimes
> >  * If updater saw reader's lock count, was that reader using the old idx?
> >  *)
> >
> > {}
> >
> > P0(int *lockcount, int *srcu_idx) // updater
> > {
> > 	int r0;
> > 	r0 = READ_ONCE(*lockcount);
> > 	smp_mb(); // E
> > 	WRITE_ONCE(*srcu_idx, 1);
> > }
> >
> > P1(int *lockcount, int *srcu_idx) // reader
> > {
> > 	int r0;
> > 	int r1;
> > 	r1 = READ_ONCE(*srcu_idx); // previous reader
> > 	WRITE_ONCE(*lockcount, 1); // previous reader
> > 	smp_mb(); // B+C
> > 	r0 = READ_ONCE(*srcu_idx); // new reader
> > }
> > exists (0:r0=1 /\ 1:r1=1) (* Bad outcome. *)
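
For reference, the "lockcount" write and the "B+C" barrier in these tests
abstract the reader side, which at the time looked roughly like the following
(paraphrased from __srcu_read_lock() in kernel/rcu/srcutree.c; details may
differ slightly):

int __srcu_read_lock(struct srcu_struct *ssp)
{
	int idx;

	idx = READ_ONCE(ssp->srcu_idx) & 0x1;         /* pick the active index */
	this_cpu_inc(ssp->sda->srcu_lock_count[idx]); /* count the reader on that side */
	smp_mb(); /* B */  /* keep the critical section after the increment */
	return idx;
}

Note that the lock count being incremented lives in an array slot selected by
the just-read ->srcu_idx, so the address of the write depends on that read;
this is the dependency discussed below.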
>
> Actually, staring at this some more, there is some form of dependency
> on the idx in order to build the address where the reader must write the
> lockcount to. Litmus doesn't support arrays but, assuming that
> &ssp->sda->srcu_lock_count == 0 (note the & in the beginning), it
> could be modeled that way (I'm eliding the unlock part to simplify):
>
> ---
> C w-depend-r
>
> {
> 	PLOCK=LOCK0;
> }
>
> // updater
> P0(int *LOCK0, int *LOCK1, int **PLOCK)
> {
> 	int lock1;
>
> 	lock1 = READ_ONCE(*LOCK1); // READ from inactive idx
> 	smp_mb();
> 	WRITE_ONCE(*PLOCK, LOCK1); // Flip idx
> }
>
> // reader
> P1(int **PLOCK)
> {
> 	int *plock;
>
> 	plock = READ_ONCE(*PLOCK); // Read active idx
> 	WRITE_ONCE(*plock, 1); // Write to active idx
> }
>
> exists (0:lock0=1) // never happens

That's lock1=1, lemme do it again:

C w-depend-r

{
	PLOCK=LOCK0;
}

// updater
P0(int *LOCK1, int **PLOCK)
{
	int lock1;

	lock1 = READ_ONCE(*LOCK1); // READ from inactive idx
	smp_mb();
	WRITE_ONCE(*PLOCK, LOCK1); // Flip idx
}

// reader
P1(int **PLOCK)
{
	int *plock;

	plock = READ_ONCE(*PLOCK); // Read active idx
	WRITE_ONCE(*plock, 1); // Write to active idx
}

exists (0:lock1=1) (* never *)
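
Assuming the snippets above are saved as .litmus files and checked against the
Linux-kernel memory model in tools/memory-model (the file name below is only a
placeholder), herd7 should confirm the annotated results:

$ herd7 -conf linux-kernel.cfg w-depend-r.litmus

The "Observation" line should report Never for this test, and Sometimes for the
two mbe tests earlier in the thread.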
