Date: 2019-02-22
From: Petr Mladek <pmladek@suse.com>
Subject: Re: [RFC PATCH v1 07/25] printk-rb: add functionality required by printk
On Tue 2019-02-19 23:08:20, John Ogness wrote:
> On 2019-02-18, Petr Mladek <pmladek@suse.com> wrote:
> >> The printk subsystem needs to be able to query the size of the ring
> >> buffer, seek to specific entries within the ring buffer, and track
> >> if records could not be stored in the ring buffer.
> >>
> >> diff --git a/lib/printk_ringbuffer.c b/lib/printk_ringbuffer.c
> >> index c2ddf4cb9f92..ce33b5add5a1 100644
> >> --- a/lib/printk_ringbuffer.c
> >> +++ b/lib/printk_ringbuffer.c
> >> @@ -175,11 +175,16 @@ void prb_commit(struct prb_handle *h)
> >>  				head = PRB_WRAP_LPOS(rb, head, 1);
> >>  				continue;
> >>  			}
> >> +			while (atomic_long_read(&rb->lost)) {
> >> +				atomic_long_dec(&rb->lost);
> >> +				rb->seq++;
>
> > On the contrary, the patch adding support for lost messages
> > should implement a way how to inform the user about lost messages.
> > E.g. to add a warning when some space becomes available again.
>
> The readers will see that messages were lost. I think that is enough. I
> don't know how useful it would be to notify writers that space is
> available. The writers are holding the prb_cpulock, so they definitely
> shouldn't be waiting around for anything.

I see your intention. Well, it forces all readers to implement the
check and write a message. It might be fine if the code can be shared.

My original idea was the following. If the next writer succeeded
in reserving space for its own message, it would try to reserve
space also for the warning. If that succeeded, it would just
write the warning there (like a nested context or so). Then
all readers would get the warning for free.
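
Something like this rough sketch (only to show the idea; it
assumes the prb_reserve()/prb_commit() API from this series,
and the helper name and message format are made up):

static void prb_warn_lost(struct printk_ringbuffer *rb)
{
	struct prb_handle h;
	unsigned long lost;
	char *buf;

	/* Take over the counter of messages lost by writers. */
	lost = atomic_long_xchg(&rb->lost, 0);
	if (!lost)
		return;

	buf = prb_reserve(&h, rb, 64);
	if (!buf) {
		/* Still no space; keep the count for a later writer. */
		atomic_long_add(lost, &rb->lost);
		return;
	}

	snprintf(buf, 64, "** %lu printk messages dropped **", lost);
	prb_commit(&h);
}

The writer would call it right after committing its own record,
so the warning would appear in the right place in the stream.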

But you inspired me to another idea. We could handle this in
the prb_iter_get_next_entry() calls. They could fill
the given buffer with the warning message when they
detect missed messages. The real message might get
added into the same buffer. Or we might add a flag
into struct prb_iter so that the reader would need
to call prb_iter_get_next_entry() again to get the real
message on the current lpos.
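
For example, something like this (again only a sketch; the
helper is made up, only rb->lost comes from the patch):

static bool prb_fill_dropped_warning(struct printk_ringbuffer *rb,
				     char *buf, int size, int *len)
{
	unsigned long lost;

	lost = atomic_long_xchg(&rb->lost, 0);
	if (!lost)
		return false;

	*len = snprintf(buf, size,
			"** %lu printk messages dropped **", lost);
	return true;
}

prb_iter_get_next_entry() would call it first and either prepend
the warning to the same buffer or return it as a separate fake
record and set the flag in struct prb_iter.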

Both solutions allow getting the warnings transparently.
There would be no duplicated extra code. Also all readers
would handle it consistently.

But there is a difference:

If we store the warning into the ring buffer directly,
then we do not need to store the seq number. I mean
that we would not need to bump seq when a reservation failed.
The number of lost messages is tracked by another
counter anyway.

On the other hand, using the fake messages would allow
handling transparently even the messages lost by readers.
I mean that prb_iter_get_next_valid_entry() might fill
the given buffer with a warning when the next message
was overwritten and it had to reset lpos to the tail.
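
Roughly like this (the seq parameters are only my assumption of
what the iterator would have at hand when it resets lpos):

static int prb_fill_missed_warning(u64 tail_seq, u64 last_seq,
				   char *buf, int size)
{
	/* Records last_seq+1 .. tail_seq-1 were overwritten. */
	u64 missed = tail_seq - last_seq - 1;

	if (!missed)
		return 0;

	return snprintf(buf, size,
			"** %llu printk messages missed by reader **",
			missed);
}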

I am not sure off the top of my head which one is better.
I would need to play with the code.


> This situation should be quite rare because it means the _entire_ ring
> buffer was filled up by an NMI context that interrupted a context that
> was in the reserve/commit window. NMI contexts probably should not be
> doing _so_ much printk'ing within a single NMI.

Sure.

Best Regards,
Petr
