    Date: 2020-06-09
    From: Petr Mladek
    Subject: Full barrier in data_push_tail(): was [PATCH v2 2/3] printk: add lockless buffer
    On Fri 2020-05-01 11:46:09, John Ogness wrote:
    > Introduce a multi-reader multi-writer lockless ringbuffer for storing
    > the kernel log messages. Readers and writers may use their API from
    > any context (including scheduler and NMI). This ringbuffer will make
    > it possible to decouple printk() callers from any context, locking,
    > or console constraints. It also makes it possible for readers to have
    > full access to the ringbuffer contents at any time and context (for
    > example from any panic situation).
    >
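
    For context, writer-side usage of this ring buffer looks roughly like
    the sketch below. (This follows the API documentation as it later
    landed in mainline; the details in this v2 posting may differ, e.g.
    regarding dictionary buffers.)

        struct prb_reserved_entry e;
        struct printk_record r;

        /* specify how much text buffer space to allocate */
        prb_rec_init_wr(&r, text_len);

        if (prb_reserve(&e, &rb, &r)) {
                memcpy(&r.text_buf[0], text, text_len);
                r.info->text_len = text_len;
                prb_commit(&e);
        }
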
    > diff --git a/kernel/printk/printk_ringbuffer.c b/kernel/printk/printk_ringbuffer.c
    > new file mode 100644
    > index 000000000000..e0a66468d4f3
    > --- /dev/null
    > +++ b/kernel/printk/printk_ringbuffer.c
    > +/*
    > + * Advance the data ring tail to at least @lpos. This function puts
    > + * descriptors into the reusable state if the tail is pushed beyond
    > + * their associated data block.
    > + */
    > +static bool data_push_tail(struct printk_ringbuffer *rb,
    > +                           struct prb_data_ring *data_ring,
    > +                           unsigned long lpos)
    > +{
    > +        unsigned long tail_lpos;
    > +        unsigned long next_lpos;
    > +
    > +        /* If @lpos is not valid, there is nothing to do. */
    > +        if (lpos == INVALID_LPOS)
    > +                return true;
    > +
    > +        tail_lpos = atomic_long_read(&data_ring->tail_lpos);
    > +
    > +        do {
    > +                /* Done, if the tail lpos is already at or beyond @lpos. */
    > +                if ((lpos - tail_lpos) - 1 >= DATA_SIZE(data_ring))
    > +                        break;
    > +
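
    A note for readers on the check above: it relies on unsigned
    wraparound. A worked example with, say, DATA_SIZE(data_ring) == 1024:

        lpos == tail_lpos        : (0 - 1) wraps to ULONG_MAX >= 1024 -> done
        lpos == tail_lpos + 1    : 1 - 1 == 0        <  1024 -> push the tail
        lpos == tail_lpos + 1024 : 1024 - 1 == 1023  <  1024 -> push the tail
        lpos behind tail_lpos    : huge unsigned difference >= 1024 -> done

    That is, the tail is pushed only when @lpos is within one ring size
    ahead of the current tail; both "already there" and "already pushed
    past" fall out of the loop.
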
    > +                /*
    > +                 * Make all descriptors reusable that are associated with
    > +                 * data blocks before @lpos.
    > +                 */
    > +                if (!data_make_reusable(rb, data_ring, tail_lpos, lpos,
    > +                                        &next_lpos)) {
    > +                        /*
    > +                         * Guarantee the descriptor state loaded in
    > +                         * data_make_reusable() is performed before reloading
    > +                         * the tail lpos. The failed data_make_reusable() may
    > +                         * be due to a newly recycled descriptor causing
    > +                         * the tail lpos to have been previously pushed. This
    > +                         * pairs with desc_reserve:D.
    > +                         *
    > +                         * Memory barrier involvement:
    > +                         *
    > +                         * If data_make_reusable:D reads from desc_reserve:G,
    > +                         * then data_push_tail:B reads from data_push_tail:D.
    > +                         *
    > +                         * Relies on:
    > +                         *
    > +                         * MB from data_push_tail:D to desc_reserve:G
    > +                         *    matching
    > +                         * RMB from data_make_reusable:D to data_push_tail:B
    > +                         *
    > +                         * Note: data_push_tail:D and desc_reserve:G can be
    > +                         *       different CPUs. However, the desc_reserve:G
    > +                         *       CPU (which performs the full memory barrier)
    > +                         *       must have previously seen data_push_tail:D.
    > +                         */
    > +                        smp_rmb(); /* LMM(data_push_tail:A) */
    > +
    > +                        next_lpos = atomic_long_read(&data_ring->tail_lpos
    > +                                                ); /* LMM(data_push_tail:B) */
    > +                        if (next_lpos == tail_lpos)
    > +                                return false;
    > +
    > +                        /* Another task pushed the tail. Try again. */
    > +                        tail_lpos = next_lpos;
    > +                        continue;
    > +                }
    > +
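
    To illustrate the rmb pairing above with a litmus-style sketch
    (untested, variable names illustrative; simplified to two CPUs even
    though the quoted comment notes that data_push_tail:D and
    desc_reserve:G may be on different CPUs):

        C data_push_tail-rmb

        {}

        P0(int *tail_lpos, int *desc_state)
        {
                /* data_push_tail:D -- the tail is pushed... */
                WRITE_ONCE(*tail_lpos, 1);
                /* the full barrier on the desc_reserve() side */
                smp_mb();
                /* desc_reserve:G -- ...then the descriptor is recycled */
                WRITE_ONCE(*desc_state, 1);
        }

        P1(int *tail_lpos, int *desc_state)
        {
                int r0;
                int r1;

                /* data_make_reusable:D -- sees the recycled descriptor */
                r0 = READ_ONCE(*desc_state);
                /* data_push_tail:A */
                smp_rmb();
                /* data_push_tail:B -- must then see the pushed tail */
                r1 = READ_ONCE(*tail_lpos);
        }

        exists (1:r0=1 /\ 1:r1=0)

    The "exists" state is forbidden. So if the reloaded tail_lpos still
    equals the old value, the data_make_reusable() failure cannot be
    explained by another CPU having already pushed the tail, and
    returning false is correct.
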
    > +                /*
    > +                 * Guarantee any descriptor states that have transitioned to
    > +                 * reusable are stored before pushing the tail lpos. This
    > +                 * allows readers to identify if data has expired while
    > +                 * reading the descriptor. This pairs with desc_read:D.
    > +                 */
    > +                smp_mb(); /* LMM(data_push_tail:C) */

    The comment does not explain why we need a full barrier here. It only
    talks about storing descriptor states, which suggests that a write
    barrier should be enough.

    I guess that this is related to the discussion that we had last time,
    and to the litmus test mentioned in
    https://lore.kernel.org/r/87h7zcjkxy.fsf@linutronix.de

    I would add something like:

    * The full barrier is necessary because the descriptors
    * might have been made reusable by other CPUs as well.

    For people like me, it would be great to also add a link to a more
    detailed explanation, for example the litmus tests, or something
    even more human readable ;-) I think that it is a rather common
    problem. I wonder whether it is already documented somewhere.
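
    For illustration, the scenario might look like this as a litmus-style
    sketch (untested, variable names illustrative): one CPU makes the
    descriptor reusable, a second CPU observes that and pushes the tail,
    and a reader on a third CPU that sees the new tail must also see the
    reusable state. An smp_wmb() in data_push_tail() would not forbid the
    bad outcome, because what needs ordering before the tail store is a
    *load* of a state that was written elsewhere:

        C data_push_tail-full-barrier

        {}

        P0(int *desc_state)
        {
                /* another CPU makes the descriptor reusable */
                WRITE_ONCE(*desc_state, 1);
        }

        P1(int *desc_state, int *tail_lpos)
        {
                int r0;

                /* data_make_reusable() observes the reusable state */
                r0 = READ_ONCE(*desc_state);
                /* data_push_tail:C -- smp_wmb() would not be enough */
                smp_mb();
                /* data_push_tail:D -- push the tail */
                WRITE_ONCE(*tail_lpos, 1);
        }

        P2(int *desc_state, int *tail_lpos)
        {
                int r1;
                int r2;

                /* a reader sees the new tail... */
                r1 = READ_ONCE(*tail_lpos);
                /* reader-side barrier, cf. desc_read:D */
                smp_rmb();
                /* ...and must then also see the reusable state */
                r2 = READ_ONCE(*desc_state);
        }

        exists (1:r0=1 /\ 2:r1=1 /\ 2:r2=0)

    With smp_mb() the "exists" clause is forbidden; replacing it with
    smp_wmb() makes it reachable on weakly ordered architectures, because
    a write barrier does not order P1's load against its later store.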

    > +        } while (!atomic_long_try_cmpxchg_relaxed(&data_ring->tail_lpos,
    > +                                &tail_lpos, next_lpos)); /* LMM(data_push_tail:D) */
    > +
    > +        return true;
    > +}
    > +
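
    One more note on the loop tail: atomic_long_try_cmpxchg_relaxed()
    writes the value it actually found back into @tail_lpos on failure,
    so every retry starts from a fresh snapshot. In isolation the idiom
    looks like this (hypothetical counter, not from the patch):

        static void counter_add(atomic_long_t *counter, long delta)
        {
                long old = atomic_long_read(counter);
                long new;

                do {
                        /* recomputed from the fresh @old on each retry */
                        new = old + delta;
                } while (!atomic_long_try_cmpxchg_relaxed(counter, &old, new));
        }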

    Best Regards,
    Petr
