Subject: Re: [PATCH] MMIO should have more priority then IO
On Fri, Jul 08, 2022 at 06:35:48PM +0000, Nadav Amit wrote:
> On Jul 8, 2022, at 10:55 AM, Matthew Wilcox <willy@infradead.org> wrote:
>
> > ⚠ External Email
> >
> > On Fri, Jul 08, 2022 at 04:45:00PM +0000, Nadav Amit wrote:
> >> On Jul 8, 2022, at 5:56 AM, Matthew Wilcox <willy@infradead.org> wrote:
> >>
> >>> And looking at the results above, it's not so much the PIO vs MMIO
> > >>> that makes a difference, it's the virtualisation. An MMIO access goes
> >>> from 269ns to 85us. Rather than messing around with preferring MMIO
> >>> over PIO for config space, having an "enlightenment" to do config
> >>> space accesses would be a more profitable path.
> >>
> >> I am unfamiliar with the motivation for this patch, but I just wanted to
> >> briefly address the advice about enlightenments.
> >>
> >> “enlightenment”, AFAIK, is Microsoft’s term for "para-virtualization", so
> >> let’s regard the generic term. I think that you consider the bare-metal
> >> results as the possible results from a paravirtual machine, which is mostly
> >> wrong. Para-virtualization usually still requires a VM-exit and for the most
> >> part the hypervisor/host runs similar code for MMIO/hypercall (conceptually;
> >> the code of paravirtual and fully-virtual devices is often different, but
> >> IIUC, this is not what Ajay measured).
> >>
> >> Para-virtualization could have *perhaps* helped to reduce the number of
> >> PIO/MMIO and improve performance this way. If, for instance, all the
> >> PIO/MMIO are done during initialization, a paravirtual interface can be used
> >> to batch them together, and that would help. But it is more complicated to
> >> get a performance benefit from paravirtualization if the PIO/MMIO accesses
> >> are “spread”, for instance, done after each interrupt.
> >
> > What kind of lousy programming interface requires you to do a config
> > space access after every interrupt? This is looney-tunes.
>
> Wild example, hence the “for instance”.

Stupid example that doesn't help.

> > You've used a lot of words to not answer the question that was so
> > important that I asked it twice. What's the use case, what's the
> > workload that would benefit from this patch?
>
> Well, you used a lot of words to say “it causes problems” without saying
> which. It appeared you have misconceptions about paravirtualization that
> I wanted to correct.

Well now, that's some bullshit. I did my fucking research. I went
back 14+ years in history to figure out what was going on back then.
I cited commit IDs. You're just tossing off some opinions.

I have no misconceptions about whatever you want to call the mechanism
for communicating with the hypervisor at a higher level than "prod this
byte". For example, one of the more intensive things we use config
space for is sizing BARs. If we had a hypercall to size a BAR, that
would eliminate:

- Read current value from BAR
- Write all-ones to BAR
- Read new value from BAR
- Write original value back to BAR

Bingo, one hypercall instead of 4 MMIO or 8 PIO accesses.
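
For reference, a rough sketch of that classic sizing dance for a plain
32-bit memory BAR (illustrative only; I/O and 64-bit BARs, and all error
handling, omitted). Each pci_{read,write}_config_dword() below is one
config space access, i.e. one MMIO access via ECAM or two PIO accesses
via 0xCF8/0xCFC:

#include <linux/pci.h>

/* Illustrative only: size a 32-bit memory BAR with four config accesses. */
static u32 size_bar0(struct pci_dev *dev)
{
	u32 orig, sz;

	pci_read_config_dword(dev, PCI_BASE_ADDRESS_0, &orig);  /* 1: save original */
	pci_write_config_dword(dev, PCI_BASE_ADDRESS_0, ~0u);   /* 2: write all-ones */
	pci_read_config_dword(dev, PCI_BASE_ADDRESS_0, &sz);    /* 3: read back mask */
	pci_write_config_dword(dev, PCI_BASE_ADDRESS_0, orig);  /* 4: restore */

	/* Mask off the flag bits and decode the size. */
	return ~(sz & (u32)PCI_BASE_ADDRESS_MEM_MASK) + 1;
}

A single "size this BAR" hypercall could return the same answer with one
exit instead of four or eight.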

Just because I don't use your terminology, you think I have
"misconceptions"? Fuck you, you condescending piece of shit.

> As I said before, I am not familiar with the exact motivation for this
> patch. I now understand from Ajay that it shortens VM boot time
> considerably.

And yet, no numbers. Yes, microbenchmark numbers that prove nothing,
but no numbers about how much it improves boot time.

> I was talking to Ajay to see if there is a possibility of a VMware-specific
> solution. I am afraid that init_hypervisor_platform() might take place too
> late.
>
