Date: 2014-10-29
From: Waiman Long
Subject: Re: [PATCH v12 09/11] pvqspinlock, x86: Add para-virtualization support
On 10/27/2014 05:22 PM, Waiman Long wrote:
> On 10/27/2014 02:04 PM, Peter Zijlstra wrote:
>> On Mon, Oct 27, 2014 at 01:38:20PM -0400, Waiman Long wrote:
>>> On 10/24/2014 04:54 AM, Peter Zijlstra wrote:
>>>> On Thu, Oct 16, 2014 at 02:10:38PM -0400, Waiman Long wrote:
>>>>
>>>>> Since enabling paravirt spinlock will disable unlock function
>>>>> inlining, a jump label can be added to the unlock function without
>>>>> adding patch sites all over the kernel.
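A minimal sketch of the jump-label idea described above, assuming the out-of-line unlock of a PARAVIRT_SPINLOCKS build; the slow-path helper name pv_queue_spin_unlock_slowpath() is illustrative and not taken from the posted patches:

	struct static_key paravirt_spinlocks_enabled = STATIC_KEY_INIT_FALSE;

	void queue_spin_unlock(struct qspinlock *lock)
	{
		if (static_key_false(&paravirt_spinlocks_enabled)) {
			/* PV guest: slow path that can kick a halted vCPU */
			pv_queue_spin_unlock_slowpath(lock);
			return;
		}
		/* bare metal: just clear the locked byte */
		smp_store_release((u8 *)&lock->val, 0);
	}
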
>>>> But you don't have to. My patches allowed for the inline to remain,
>>>> again reducing the overhead of enabling PV spinlocks while running on
>>>> a real machine.
>>>>
>>>> Look at:
>>>>
>>>> http://lkml.kernel.org/r/20140615130154.213923590@chello.nl
>>>>
>>>> In particular this hunk:
>>>>
>>>> Index: linux-2.6/arch/x86/kernel/paravirt_patch_64.c
>>>> ===================================================================
>>>> --- linux-2.6.orig/arch/x86/kernel/paravirt_patch_64.c
>>>> +++ linux-2.6/arch/x86/kernel/paravirt_patch_64.c
>>>> @@ -22,6 +22,10 @@ DEF_NATIVE(pv_cpu_ops, swapgs, "swapgs")
>>>> DEF_NATIVE(, mov32, "mov %edi, %eax");
>>>> DEF_NATIVE(, mov64, "mov %rdi, %rax");
>>>>
>>>> +#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_QUEUE_SPINLOCK)
>>>> +DEF_NATIVE(pv_lock_ops, queue_unlock, "movb $0, (%rdi)");
>>>> +#endif
>>>> +
>>>> unsigned paravirt_patch_ident_32(void *insnbuf, unsigned len)
>>>> {
>>>> return paravirt_patch_insns(insnbuf, len,
>>>> @@ -61,6 +65,9 @@ unsigned native_patch(u8 type, u16 clobb
>>>> PATCH_SITE(pv_cpu_ops, clts);
>>>> PATCH_SITE(pv_mmu_ops, flush_tlb_single);
>>>> PATCH_SITE(pv_cpu_ops, wbinvd);
>>>> +#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_QUEUE_SPINLOCK)
>>>> + PATCH_SITE(pv_lock_ops, queue_unlock);
>>>> +#endif
>>>>
>>>> patch_site:
>>>> ret = paravirt_patch_insns(ibuf, len, start, end);
>>>>
>>>>
>>>> That makes sure to overwrite the callee-saved call to
>>>> pv_lock_ops::queue_unlock with the immediate asm "movb $0, (%rdi)".
>>>>
>>>>
>>>> Therefore you can retain the inlined unlock with hardly any overhead
>>>> at all (there might be some NOP padding). On PV it reverts to a
>>>> callee-saved function call.
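Roughly, the shape being described, as a sketch rather than the exact code from that series: the unlock stays inline but compiles to a callee-saved call through pv_lock_ops, which native_patch() then rewrites to the one-byte store at every call site.

	static __always_inline void queue_spin_unlock(struct qspinlock *lock)
	{
		/* callee-saved indirect call; patched to "movb $0, (%rdi)" on bare metal */
		PVOP_VCALLEE1(pv_lock_ops.queue_unlock, lock);
	}

	void native_queue_unlock(struct qspinlock *lock)
	{
		ACCESS_ONCE(*(u8 *)lock) = 0;	/* clear the locked byte */
	}
	PV_CALLEE_SAVE_REGS_THUNK(native_queue_unlock);
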
>>> My concern is that spin_unlock() can be called in many places,
>>> including loadable kernel modules. Can the paravirt_patch_ident_32()
>>> function patch all of them in a reasonable time? What about a kernel
>>> module loaded later at run time?
>> modules should be fine, see arch/x86/kernel/module.c:module_finalize()
>> -> apply_paravirt().
>>
>> Also note that the 'default' text is an indirect call into the paravirt
>> ops table which routes to the 'right' function, so even if the text
>> patching were 'late', calls would still 'work' as expected, just slower.
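For reference, a trimmed sketch of that module path; the real module_finalize() in arch/x86/kernel/module.c also handles .altinstructions, SMP-lock and jump-label fixups. Each module carries its own .parainstructions section, which apply_paravirt() walks, so unlock sites in late-loaded modules get the same rewrite as the core kernel:

	int module_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
			    struct module *me)
	{
		const Elf_Shdr *s, *para = NULL;
		char *secstrings = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;

		/* locate this module's .parainstructions section, if any */
		for (s = sechdrs; s < sechdrs + hdr->e_shnum; s++)
			if (!strcmp(".parainstructions", secstrings + s->sh_name))
				para = s;

		if (para) {
			void *pseg = (void *)para->sh_addr;
			apply_paravirt(pseg, pseg + para->sh_size);
		}
		return 0;
	}
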
>
> Thanks for letting me know about that. I had this concern because your
> patch didn't change the current behavior of disabling unlock inlining
> when paravirt spinlocks are enabled. Given that, I think it is
> worthwhile to reduce the performance delta between the PV and non-PV
> kernels on bare metal.

I am sorry to report that the unlock call-site patching code doesn't work
in a virtual guest. Your pvqspinlock patch patched the call sites
unconditionally, even in a virtual guest. I added a check for
paravirt_spinlocks_enabled, but it turned out that some spin_unlock()
calls seem to happen before paravirt_spinlocks_enabled is set. As a
result, some call sites were still patched, resulting in missed wake-ups
and a system hang.
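The guard that was tried looks roughly like the following, inside
native_patch(); the start/end symbols come from the DEF_NATIVE() line in
the quoted hunk, and the exact form of the flag check is illustrative. The
ordering problem described above means the flag can still be unset when
this runs, so such a site gets the native store anyway:

	case PARAVIRT_PATCH(pv_lock_ops.queue_unlock):
		if (static_key_false(&paravirt_spinlocks_enabled))
			break;		/* PV guest: keep the callee-saved call */
		/* bare metal: inline the native "movb $0, (%rdi)" */
		start = start_pv_lock_ops_queue_unlock;
		end   = end_pv_lock_ops_queue_unlock;
		goto patch_site;
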

At this point, I am going to leave that change out of my patch set until
we can figure out a better way of doing it.

-Longman

