Subject: Re: [PATCH v3 29/33] PCI/P2PDMA: Convert to printbuf


On 2022-06-08 17:24, Kent Overstreet wrote:
> On 6/8/22 17:11, Bjorn Helgaas wrote:
>> [+cc Logan, maintainer of p2pdma.c]
>>
>> On Sat, Jun 04, 2022 at 03:30:38PM -0400, Kent Overstreet wrote:
>>> This converts from seq_buf to printbuf. We're using printbuf in external
>>> buffer mode, so it's a direct conversion, aside from some trivial
>>> refactoring in cpu_show_meltdown() to make the code more consistent.
>>
>> cpu_show_meltdown() doesn't appear in p2pdma.c.  Leftover from another
>> patch?  Maybe from 27/33 ("powerpc: Convert to printbuf")?
>>
>> I'm not opposed to this, but it would be nice to say what the benefit
>> is.  How is printbuf better than seq_buf?  It's not obvious from the
>> patch how this is better/safer/shorter/etc.
>>
>> Even the cover letter [1] is not very clear about the benefit.  Yes, I
>> see it has something to do with improving buffer management, and I
>> know from experience that's a pain.  Concrete examples of typical
>> printbuf usage and bugs that printbufs avoid would be helpful.
>
> Take a look at the vsprintf.c conversion if you want to see big
> improvements. Also, %pf() is another thing that's going to enable a lot
> more improvements.

IMHO it's not clear how these benefits follow from what looks largely
like a rewrite and rename of seq_buf. It seems to me it should be
possible to stick with seq_buf and add features to it, rather than
replacing and removing it; as I understand it, that's the kernel
community's preferred practice, and it would certainly reduce a lot of
churn in this series. But I haven't looked at the entire series and it's
not really something I'm responsible for, so take my opinion with a
grain of salt.
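
(For anyone following along, the pattern under discussion is roughly the
following. This is a sketch only, not the actual p2pdma.c code, and the
function name and buffer size are made up. seq_buf already formats into a
caller-supplied buffer, and my reading of "external buffer mode" on the
printbuf side is that it's the direct analogue -- the printbuf wraps a
buffer the caller owns instead of allocating one from the heap.)

#include <linux/pci.h>
#include <linux/seq_buf.h>

/* Illustrative only: format device names into a fixed-size buffer owned
 * by the caller -- no heap allocation anywhere on this path. */
static void report_acs_redirect(struct pci_dev *pdev)
{
	char buf[128];
	struct seq_buf acs_list;

	seq_buf_init(&acs_list, buf, sizeof(buf));
	seq_buf_printf(&acs_list, "%s;", pci_name(pdev));

	pci_warn(pdev, "ACS redirect: %.*s\n",
		 (int)seq_buf_used(&acs_list), buf);
}

(As I read the patch, the printbuf conversion is a one-for-one swap of
the struct and the *_printf() call, still writing into 'buf'.)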

>> I guess "external buffer mode" means we use an existing buffer (on the
>> stack in this case) instead of allocating a buffer from the heap [2]?
>> And we do that for performance (i.e., we know the max size) and to
>> avoid sleeping to alloc?
>
> I did it that way because I didn't want to touch unrelated code more
> than was necessary - just doing a direct conversion. Heap allocation
> would probably make sense here, but it's not my code.

It was changed to a stack allocation recently because my pending patch
set will add a path where this code is called in atomic context and
cannot sleep. The simplest solution was a stack allocation instead of
tracking GFP context for the atomic path.
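
(To make that tradeoff concrete -- a hypothetical contrast, not actual
kernel code, with made-up function names: a heap-backed buffer forces
every caller to thread the right GFP flags down to this point, whereas
the stack buffer needs no GFP plumbing and is safe whether or not the
caller can sleep.)

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/seq_buf.h>
#include <linux/slab.h>

/* Heap-backed variant: callers must know whether they may sleep and
 * pass an appropriate gfp_t all the way down. */
static int note_reason_heap(struct device *dev, const char *reason,
			    gfp_t gfp)
{
	char *buf = kmalloc(128, gfp);

	if (!buf)
		return -ENOMEM;
	snprintf(buf, 128, "%s: %s", dev_name(dev), reason);
	dev_warn(dev, "%s\n", buf);
	kfree(buf);
	return 0;
}

/* Stack-backed variant: no allocation at all, so the same code works in
 * atomic context with nothing extra to track. */
static void note_reason_stack(struct device *dev, const char *reason)
{
	char buf[128];
	struct seq_buf s;

	seq_buf_init(&s, buf, sizeof(buf));
	seq_buf_printf(&s, "%s: %s", dev_name(dev), reason);
	dev_warn(dev, "%.*s\n", (int)seq_buf_used(&s), buf);
}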

Logan
