Subject: Re: [PATCHv2 2/4] arm64: add guest pvstate support
On Wed, 21 Jul 2021 09:47:52 +0100,
Sergey Senozhatsky <senozhatsky@chromium.org> wrote:
>
> On (21/07/21 09:22), Marc Zyngier wrote:
> > On Wed, 21 Jul 2021 03:05:25 +0100,
> > Sergey Senozhatsky <senozhatsky@chromium.org> wrote:
> > >
> > > On (21/07/12 16:42), Marc Zyngier wrote:
> > > > >
> > > > > PV-vcpu-state is a per-CPU struct, which, for the time being,
> > > > > holds the boolean `preempted' vCPU state. During startup,
> > > > > given that the host supports PV-state, each guest vCPU sends
> > > > > a pointer to its per-CPU variable to the host as a payload
> > > >
> > > > What is the expected memory type for this memory region? What is its
> > > > life cycle? Where is it allocated from?
> > >
> > > Guest per-CPU area, whose physical address is shared with the
> > > host.
> >
> > Again: what are the memory types you expect this to be used with?
>
> I heard your questions, I'm trying to figure out the answers now.
>
> As for the memory type - I presume you are talking about coherent vs
> non-coherent memory.

No. I'm talking about cacheable vs non-cacheable. The ARM architecture
is always coherent for memory that is inner-shareable, which applies
to any system running Linux. On the other hand, there is no
architected cache snooping when using non-cacheable accesses.
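
For illustration only, the guest side of what the changelog describes
would look roughly like the sketch below: a per-CPU structure whose
physical address is handed to the hypervisor once per vCPU. The struct
layout, the PV_VCPU_STATE_INIT function ID and the online hook are
made-up placeholders, not the code from this series:

#include <linux/arm-smccc.h>
#include <linux/percpu.h>

/* HYPOTHETICAL SMCCC function ID; the real series defines its own. */
#define PV_VCPU_STATE_INIT	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,	\
						   ARM_SMCCC_SMC_64,	\
						   ARM_SMCCC_OWNER_VENDOR_HYP, \
						   0x21)

struct pv_vcpu_state {
	__le64	preempted;	/* non-zero while the vCPU is scheduled out */
	u8	reserved[56];	/* pad to a cache line; layout is an assumption */
};

static DEFINE_PER_CPU(struct pv_vcpu_state, pv_state) __aligned(64);

/* Called once per vCPU, e.g. from a CPU online callback (assumption). */
static int pv_vcpu_state_register(unsigned int cpu)
{
	struct arm_smccc_res res;
	/* Physical address of this CPU's state area, shared with the host. */
	phys_addr_t pa = per_cpu_ptr_to_phys(this_cpu_ptr(&pv_state));

	arm_smccc_1_1_invoke(PV_VCPU_STATE_INIT, pa, &res);

	return res.a0 == SMCCC_RET_SUCCESS ? 0 : -EINVAL;
}

The shared area lives in ordinary guest RAM (the per-CPU area), which
is exactly why the cacheability question matters.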

> Can guest per-CPU memory be non-coherent? Guest never writes
> anything to the region of memory it shares with the host, it only
> reads what the host writes to it. All reads and writes are done from
> CPU (no devices DMA access, etc).
>
> Do we need any cache flushes/syncs in this case?

If you expect the guest to have non-cacheable mappings (or to run with
its MMU off at any point, which amounts to the same thing) *and* still
be able to access the shared page, then *someone* will have to perform
CMOs to make these writes visible to the PoC (unless you have FWB).

Needless to say, this would kill any sort of performance gain this
feature could hypothetically bring. Defining the scope for the access
would help mitigate this, even if that's just a sentence saying "the
shared page *must* be accessed from a cacheable mapping".
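
If that requirement cannot be imposed and the guest really does read
the page through a non-cacheable mapping (or with the MMU off), the
cacheable side would have to clean its writes to the PoC by hand,
roughly as below. This is an architecture-level sketch using a plain
DC CVAC sequence, not any particular kernel helper, and it hard-codes
a cache line size that should really come from CTR_EL0:

#include <linux/types.h>

/*
 * Clean a buffer to the Point of Coherency so a non-cacheable (or
 * MMU-off) observer sees the data. Illustrative only.
 */
static inline void clean_to_poc(const void *addr, size_t size)
{
	unsigned long line = 64;	/* assumption: 64-byte cache lines */
	unsigned long cur = (unsigned long)addr & ~(line - 1);
	unsigned long end = (unsigned long)addr + size;

	for (; cur < end; cur += line)
		asm volatile("dc cvac, %0" : : "r" (cur) : "memory");
	asm volatile("dsb sy" : : : "memory");
}

Doing that around every sched-out/sched-in update is exactly the sort
of overhead the feature is supposed to avoid, which is why stating the
cacheable requirement up front is the cheaper option.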

>
> > When will the hypervisor ever stop accessing this?
>
> KVM always accesses it for the vcpus that are getting scheduled out or
> scheduled in on the host side.

I was more hinting at whether there was a way to disable this at
runtime. Think of a guest using kexec, for example, where you really
don't want the hypervisor to start messing with memory that has since
been reallocated by the guest.
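
In other words, the host-side update has to be gated on per-vCPU state
that the guest can tear down again (kexec, reboot, or an explicit
opt-out), along the lines of the sketch below. kvm_write_guest() is the
real API; the vcpu->arch.pv_state fields and the disable path are
placeholders:

/* Host side: publish the preempted flag when the vCPU is scheduled out
 * or back in (e.g. from kvm_arch_vcpu_put()/kvm_arch_vcpu_load()). */
static void pv_set_preempted(struct kvm_vcpu *vcpu, bool preempted)
{
	__le64 val = cpu_to_le64(preempted);

	/* Hypothetical bookkeeping set up by the guest's init hypercall;
	 * cleared again by a RELEASE hypercall and on vCPU reset, so a
	 * kexec'd guest never has its memory scribbled on. */
	if (!vcpu->arch.pv_state.enabled)
		return;

	kvm_write_guest(vcpu->kvm,
			vcpu->arch.pv_state.gpa +
				offsetof(struct pv_vcpu_state, preempted),
			&val, sizeof(val));
}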

> > How does it work across reset?
>
> I need to figure out what happens during reset/migration in the first
> place.

Yup.

M.

--
Without deviation from the norm, progress is not possible.
