Subject: Re: [PATCH] KVM: nVMX: nested VPID emulation
From: Wanpeng Li <>
Date: Wed, 16 Sep 2015 14:10:14 +0800
On 9/16/15 1:20 PM, Jan Kiszka wrote:
> On 2015-09-16 04:36, Wanpeng Li wrote:
>> On 9/16/15 1:32 AM, Jan Kiszka wrote:
>>> On 2015-09-15 12:14, Wanpeng Li wrote:
>>>> On 9/14/15 10:54 PM, Jan Kiszka wrote:
>>>>> Last but not least: the guest can now easily exhaust the host's pool
>>>>> of vpids by simply spawning plenty of VCPUs for L2, no? Is this
>>>>> acceptable, or should there be some limit?
>>>> In v2 I reuse the value of vpid02 while vpid12 changes, with one
>>>> invvpid, so the scenario you pointed out can be avoided.
>>> I cannot yet follow why there is no chance for L1 to consume all vpids
>>> that the host manages in that single, global bitmap by simply spawning
>>> a lot of nested VCPUs for some L2. What is enforcing L1 to call nested
>>> vmclear - apparently the only way, besides destructing nested VCPUs,
>>> to release such vpids again?
>> In v2, there is no direct mapping between vpid02 and vpid12. The vpid02
>> is per-vCPU for L0 and reused, while a change of vpid12 triggers one
>> invvpid during nested vmentry. The vpid12 is allocated by L1 for L2, so
>> it will not influence the global bitmap (which covers vpid01 and vpid02
>> allocation) even if L1 spawns a lot of nested vCPUs.
> Ah, I see, you limit allocation to one additional host-side vpid per
> VCPU, for nesting. That looks better. That also means all vpids for L2
> will be folded onto that single vpid in hardware, right?
Exactly.
> So the major benefit comes from having separate vpids when switching
> between L1 and L2, in fact.
And also when L2's vCPUs are not scheduled in/out on L1. Btw, your review
of v3 would be greatly appreciated. :-)
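To make the v2 scheme concrete, here is a minimal sketch of the idea as I
understand it from this thread. The names (nested_vpid_state, vpid02,
last_vpid, invvpid_single_context) are illustrative assumptions for the
discussion, not the exact code of the patch:

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for the real single-context invvpid instruction. */
static void invvpid_single_context(uint16_t vpid)
{
	printf("invvpid(single-context, vpid=%u)\n", vpid);
}

struct nested_vpid_state {
	uint16_t vpid02;    /* host-side vpid, allocated once per vCPU */
	uint16_t last_vpid; /* vpid12 value L1 last used on this vCPU */
};

/* Called on nested vmentry (L1 -> L2). */
static void prepare_nested_vpid(struct nested_vpid_state *s, uint16_t vpid12)
{
	/*
	 * vpid02 is reused across every vpid12 that L1 hands out, so the
	 * host's global vpid bitmap sees at most one extra allocation per
	 * vCPU, no matter how many L2 vCPUs L1 spawns.
	 */
	if (vpid12 != s->last_vpid) {
		s->last_vpid = vpid12;
		/*
		 * L1 expects a fresh TLB context when it installs a new
		 * vpid12; emulate that with one invvpid on vpid02.
		 */
		invvpid_single_context(s->vpid02);
	}
}

int main(void)
{
	struct nested_vpid_state s = { .vpid02 = 3, .last_vpid = 0 };

	prepare_nested_vpid(&s, 5);  /* new vpid12 -> one invvpid */
	prepare_nested_vpid(&s, 5);  /* unchanged  -> no flush   */
	return 0;
}

The point of the demo is only that a vmentry with an unchanged vpid12 costs
nothing; one flush is paid per vpid12 change, not per entry.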
Regards,
Wanpeng Li