Date: 2013-07-17
From: Raghavendra K T
Subject: Re: [PATCH RFC V10 15/18] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor
On 07/17/2013 08:14 PM, Gleb Natapov wrote:
> On Wed, Jul 17, 2013 at 07:43:01PM +0530, Raghavendra K T wrote:
>> On 07/17/2013 06:55 PM, Gleb Natapov wrote:
>>> On Wed, Jul 17, 2013 at 06:25:05PM +0530, Raghavendra K T wrote:
>>>> On 07/17/2013 06:15 PM, Gleb Natapov wrote:
>>>>> On Wed, Jul 17, 2013 at 03:35:37PM +0530, Raghavendra K T wrote:
>>>>>>>> Instead of halt we started with a sleep hypercall in those
>>>>>>>> versions. We changed to halt() once Avi suggested reusing the existing sleep.
>>>>>>>>
>>>>>>>> If we use the older hypercall with a few changes like below:
>>>>>>>>
>>>>>>>> kvm_pv_wait_for_kick_op(flags, vcpu, w->lock)
>>>>>>>> {
>>>>>>>>         // a0 reserved for flags
>>>>>>>>         if (!w->lock)
>>>>>>>>                 return;
>>>>>>>>         DEFINE_WAIT
>>>>>>>>         ...
>>>>>>>>         end_wait
>>>>>>>> }
>>>>>>>>
>>>>>>> How would this help if an NMI takes a lock in a critical section? What
>>>>>>> may happen is that lock_waiting->want holds the NMI lock's value, but
>>>>>>> lock_waiting->lock still points to the non-NMI lock. The setting of want
>>>>>>> and lock has to be atomic.
>>>>>>
>>>>>> True. So we are here:
>>>>>>
>>>>>> non-NMI lock(a)
>>>>>> w->lock = NULL;
>>>>>> smp_wmb();
>>>>>> w->want = want;
>>>>>>                          NMI
>>>>>>      <---------------------
>>>>>>      NMI lock(b)
>>>>>>      w->lock = NULL;
>>>>>>      smp_wmb();
>>>>>>      w->want = want;
>>>>>>      smp_wmb();
>>>>>>      w->lock = lock;
>>>>>>      ---------------------->
>>>>>> smp_wmb();
>>>>>> w->lock = lock;
>>>>>>
>>>>>> So how about fixing it like this?
>>>>>>
>>>>>> again:
>>>>>>         w->lock = NULL;
>>>>>>         smp_wmb();
>>>>>>         w->want = want;
>>>>>>         smp_wmb();
>>>>>>         w->lock = lock;
>>>>>>
>>>>>>         if (!lock || w->want != want)
>>>>>>                 goto again;
>>>>>>
>>>>> An NMI can happen after the if() but before the halt, and the same
>>>>> situation we are trying to prevent with IRQs will occur.
>>>>
>>>> True, we cannot fix that. I thought of fixing the inconsistency of the
>>>> lock,want pair.
>>>> But an NMI could also happen after the first OR condition.
>>>> /me thinks again
>>>>
>>> lock_spinning() can check that it is called in NMI context and bail out.
>>
>> Good point.
>> I think we can even check for irq context and bail out, so that in irq
>> context we continue spinning instead of taking the slowpath. No?
>>
> That will happen much more often, and irq context is not a problem anyway.
>

Yes. It is not a problem. But my idea was to avoid entering the slowpath
lock during irq processing. Do you think that is a good idea?
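
Something like the check below is what I have in mind. This is only a
rough sketch against kvm_lock_spinning() from this series, not a tested
patch; in_nmi() and in_interrupt() are the usual <linux/hardirq.h>
helpers, and the rest of the slowpath body is elided:

#include <linux/hardirq.h>      /* in_nmi(), in_interrupt() */

static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
{
        /*
         * An NMI (or an irq, if we also bail out there) reaching this
         * slowpath would race with the interrupted context's
         * non-atomic w->lock/w->want update, so return early and keep
         * spinning in the native slowpath instead.
         */
        if (in_nmi() || in_interrupt())
                return;

        /* ... existing slowpath: publish want/lock, then halt() ... */
}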

I'll now experiment to see how often we enter the slowpath in irq context.
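
For the measurement, a trivial debugfs counter along these lines should
do. This is a hypothetical sketch only; the "pv_ticketlock" directory
and the irq_slowpath_hits name are made up for illustration:

#include <linux/debugfs.h>
#include <linux/err.h>
#include <linux/hardirq.h>
#include <linux/init.h>

/* Bumped at the top of kvm_lock_spinning() when called in irq context. */
static u64 irq_slowpath_hits;

static int __init pv_lock_stats_init(void)
{
        struct dentry *d = debugfs_create_dir("pv_ticketlock", NULL);

        if (!IS_ERR_OR_NULL(d))
                debugfs_create_u64("irq_slowpath_hits", 0444, d,
                                   &irq_slowpath_hits);
        return 0;
}
fs_initcall(pv_lock_stats_init);

and in kvm_lock_spinning():

        if (in_irq())
                irq_slowpath_hits++;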

>>> How often will this happen anyway?
>>>
>>
>> I know NMIs occur frequently with watchdogs, or when used by sysrq-trigger
>> etc. But I am not an expert on how frequent they are otherwise. Even then,
>> if they do not use a spinlock, we have no problem, as already pointed out.
>>
>> I can measure with a debugfs counter how often it happens.
>>
> When you run perf you will see a lot of NMIs, but those should not take
> any locks.

Yes. I just verified that with benchmark runs: with perf running, there
was not even a single NMI hitting lock_spinning.



