 
Subject: Re: [kvm-devel] [PATCH] export notifier #1
Hi Kraxel,

On Wed, Jan 23, 2008 at 01:51:23PM +0100, Gerd Hoffmann wrote:
> That would render the notifies useless for Xen too. Xen needs to
> intercept the actual pte clear and instead of just zapping it use the
> hypercall to do the unmap and release the grant.

I think it has yet to be demonstrated that doing the invalidate
_before_ clearing the linux pte is workable at all for
shadow-pte/RDMA. In fact, even doing it _after_ still requires some
form of serialization, but it's less obviously broken and probably
more fixable, unlike doing it before, which looks hardly fixable given
that the refill event running on the remote node is supposed to wait
for a bitflag of a page in the master node to come back on. What
Christoph didn't spell out, after hinting that you have to wait for
the PageExported bitflag to come back on, is that such a page may be
back in the freelist by the time the secondary-tlb page fault starts
checking that bit. And nobody sets that bit anyway in the VM, so good
luck waiting for that bit to come back on for a page sitting in the
freelist (nothing keeps the page pinned anymore by the time
->invalidate_page returns: the whole point of the invalidate is so the
VM code can finally free the page and put it in the freelist).
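
To make the objection concrete, the secondary-mmu fault handler under
that scheme would end up doing roughly the following (my reading of
the proposal, purely illustrative, not code from the patch):

	/*
	 * Illustrative only: spin until the exporter declares the page
	 * mappable again. PageExported is the bitflag mentioned above;
	 * the loop is a sketch of the implied protocol.
	 */
	while (!PageExported(page))
		cpu_relax();
	/*
	 * Nothing pins "page" once ->invalidate_page has returned, so by
	 * now it may already be back in the freelist (or reused) and the
	 * bit may never come back on.
	 */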

Until there's a more convincing theory of how invalidating the remote
tlbs/ptes _before_ the main linux pte can possibly work, I'm "quite"
skeptical that it's the way to go for the invalidate_page callback.

Like Avi said, Xen is dealing with the linux pte only, so there's no
racy smp page fault to serialize against. Perhaps we can add another
notifier for Xen though.

But I think it's still not enough for Xen to have a method called
before the ptep_clear_flush: rmap.c would get confused in
page_mkclean_one, for example. It might be that vm_ops is the right
way for you, even if it further clutters the VM. As Avi once pointed
out to me, with our current mmu notifiers we can export the KVM
address space with remote dma and keep swapping the whole KVM guest
memory just fine despite the three MMUs running the system (qemu using
the linux pte, KVM using the spte, quadrics using the pci mmu). And
the core Linux VM code (not some obscure hypervisor) will handle all
the aging and the usual VM issues as it does for a normal task
(especially with my last patch, which reflects the accessed bitflag in
the spte the same way the accessed bitflag is reflected for the
regular ptes).
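
The accessed-bit reflection mentioned above boils down to something
like this in page_referenced_one (a simplified sketch of the idea, not
the literal patch; mmu_notifier_age_page is the name I'm assuming for
the aging hook):

	/*
	 * Simplified sketch (not the literal patch): when rmap.c ages a
	 * linux pte, also ask the secondary MMUs (e.g. the KVM spte)
	 * whether they saw an access, so the regular LRU aging accounts
	 * for guest accesses too.
	 */
	if (ptep_clear_flush_young(vma, address, pte))
		referenced++;
	if (mmu_notifier_age_page(mm, address))
		referenced++;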

Nevertheless, if you have any idea on how to use the notifiers for Xen
I'd be glad to help. Perhaps one workable way to change my patch to
work for you could be to pass the retval of ptep_clear_flush to the
notifiers themselves. Something like:

#define ptep_clear_flush(__vma, __address, __ptep)			\
({									\
	pte_t __pte;							\
	__pte = ptep_get_and_clear((__vma)->vm_mm, __address, __ptep);	\
	flush_tlb_page(__vma, __address);				\
	/* let the notifiers see (and possibly rewrite) the cleared pte */ \
	__pte = mmu_notifier(invalidate_page, (__vma)->vm_mm,		\
			     __address, __pte, __ptep);			\
	__pte;								\
})

But this would kind of need an exclusive registration, or the loop
below wouldn't work well if every registered notifier tries to
overwrite the memory pointed to by __ptep with its own value computed
from __pte.

	for_each_notifier(mn, mm)
		mn->invalidate_page(mm, __address, __pte, __ptep);

You could get a pte_none page fault in between the old value and the
new value, though. (But you wouldn't need to flush the tlb inside
invalidate_pte; only we need to flush the secondary tlb for the spte
inside invalidate_page, obviously.)

Let me know if you're interested in the above.

Thanks!


