Subject: Re: WARNING in __mmdrop

On 2019/7/25 1:52 PM, Michael S. Tsirkin wrote:
> On Tue, Jul 23, 2019 at 09:31:35PM +0800, Jason Wang wrote:
>> On 2019/7/23 5:26 PM, Michael S. Tsirkin wrote:
>>> On Tue, Jul 23, 2019 at 04:49:01PM +0800, Jason Wang wrote:
>>>> On 2019/7/23 4:10 PM, Michael S. Tsirkin wrote:
>>>>> On Tue, Jul 23, 2019 at 03:53:06PM +0800, Jason Wang wrote:
>>>>>> On 2019/7/23 3:23 PM, Michael S. Tsirkin wrote:
>>>>>>>>> Really let's just use kfree_rcu. It's way cleaner: fire and forget.
>>>>>>>> Looks not; you need to rate-limit the fire, as you've figured out?
>>>>>>> See the discussion that followed. Basically no, it's good enough
>>>>>>> already and is only going to be better.
>>>>>>>
>>>>>>>> And in fact,
>>>>>>>> the synchronization is not even needed; does it help if I leave a comment to
>>>>>>>> explain?
>>>>>>> Let's try to figure it out in the mail first. I'm pretty sure the
>>>>>>> current logic is wrong.
>>>>>> Here is what the code wants to achieve:
>>>>>>
>>>>>> - The map was protected by RCU
>>>>>>
>>>>>> - Writers are: MMU notifier invalidation callbacks, file operations (ioctls
>>>>>> etc), meta_prefetch (datapath)
>>>>>>
>>>>>> - Readers are: memory accessors
>>>>>>
>>>>>> Writers are synchronized through mmu_lock. RCU is used to synchronize
>>>>>> between writers and readers.
>>>>>>
>>>>>> The synchronize_rcu() in vhost_reset_vq_maps() was used to synchronize it
>>>>>> with readers (memory accessors) in the path of file operations. But in this
>>>>>> case, vq->mutex was already held, which means it has already been serialized
>>>>>> with the memory accessors. That's why I think it could be removed safely.
>>>>>>
>>>>>> Anything I miss here?
>>>>>>
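
To make the above concrete, here is roughly what the scheme looks like in
code. The names vhost_map, vq->maps[] and vq->mmu_lock are simplified
stand-ins for the real fields, so treat this as a sketch rather than the
actual implementation:

struct vhost_map {
	void *addr;	/* kernel VA of the pinned metadata */
};

/* reader: memory accessor */
static void *vhost_map_get(struct vhost_virtqueue *vq, int index)
{
	struct vhost_map *map;
	void *addr = NULL;

	rcu_read_lock();
	map = rcu_dereference(vq->maps[index]);
	if (map)
		addr = map->addr;
	rcu_read_unlock();

	return addr;
}

/* writer: MMU notifier / file operations / meta_prefetch */
static void vhost_map_reset(struct vhost_virtqueue *vq, int index)
{
	struct vhost_map *map;

	spin_lock(&vq->mmu_lock);
	map = rcu_dereference_protected(vq->maps[index],
					lockdep_is_held(&vq->mmu_lock));
	rcu_assign_pointer(vq->maps[index], NULL);
	spin_unlock(&vq->mmu_lock);

	if (map) {
		synchronize_rcu();	/* wait for readers to finish */
		kfree(map);
	}
}
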
>>>>> So invalidate callbacks need to reset the map, and they do
>>>>> not have the vq mutex. How can they do this and free
>>>>> the map safely? They need synchronize_rcu or kfree_rcu, right?
>>>> Invalidation callbacks need it, but file operations (e.g. ioctl) do not.
>>>>
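
(And if the invalidation path switches to kfree_rcu(), the writer side in
the sketch above becomes fire and forget; assuming struct vhost_map grows a
struct rcu_head member named rcu, the tail of vhost_map_reset() would simply
be:

	if (map)
		kfree_rcu(map, rcu);	/* no blocking wait in the notifier */
)
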
>>>>
>>>>> And I worry somewhat that synchronize_rcu in an MMU notifier
>>>>> is a problem; MMU notifiers are supposed to be quick:
>>>> Looks not, since it is allowed to block, and lots of drivers depend on
>>>> this (e.g. mmu_notifier_range_blockable()).
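
For reference, the usual pattern in the invalidate callback looks roughly
like this (sketch only; the vhost_dev::mmu_notifier member and
vhost_invalidate_vq_maps() are made-up names for illustration):

static int vhost_invalidate_range_start(struct mmu_notifier *mn,
					const struct mmu_notifier_range *range)
{
	struct vhost_dev *dev = container_of(mn, struct vhost_dev,
					     mmu_notifier);

	if (!mmu_notifier_range_blockable(range))
		return -EAGAIN;	/* caller retries from a blockable context */

	/* safe to sleep here, e.g. to wait for readers */
	vhost_invalidate_vq_maps(dev, range->start, range->end);

	return 0;
}
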
>>> Right, they can block. So why don't we take a VQ mutex and be
>>> done with it then? No RCU tricks.
>>
>> This is how I wanted to go with the RFC and V1. But I ended up with deadlocks
>> between vq locks and some MM internal locks. So I decided to use RCU, which is
>> 100% under the control of vhost.
>>
>> Thanks
> And I guess the deadlock is because GUP is taking mmu locks which are
> taken on the mmu notifier path, right?


Yes, but it's not the only lock. I don't remember the details, but I can
confirm I met issues with one or two other locks.


> How about we add a seqlock and take
> that in invalidate callbacks? We can then drop the VQ lock before GUP,
> and take it again immediately after.
>
> something like
> if (!vq_meta_mapped(vq)) {
>         vq_meta_setup(&uaddrs);
>         mutex_unlock(vq->mutex)
>         vq_meta_map(&uaddrs);


The problem is that the vq address could be changed at this time.


>         mutex_lock(vq->mutex)
>
>         /* recheck both sock->private_data and seqlock count. */
>         if changed - bail out
> }
>
> And it also requires that the VQ uaddrs are defined like this:
> - writers must have both vq mutex and dev mutex
> - readers must have either vq mutex or dev mutex
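
Spelling your pseudocode out with a seqcount_t, I guess you mean something
like the following (everything here is made up except vq_meta_setup() and
vq_meta_map() from your sketch):

/* invalidate callback: bump the count, serialized by mmu_lock */
	write_seqcount_begin(&vq->uaddr_seq);
	/* drop/replace the cached uaddrs here */
	write_seqcount_end(&vq->uaddr_seq);

/* setup path, vq mutex held on entry */
	unsigned seq = read_seqcount_begin(&vq->uaddr_seq);

	vq_meta_setup(&uaddrs);		/* snapshot uaddrs under vq mutex */
	mutex_unlock(&vq->mutex);
	vq_meta_map(&uaddrs);		/* GUP without vq mutex held */
	mutex_lock(&vq->mutex);

	if (sock != vq->private_data ||
	    read_seqcount_retry(&vq->uaddr_seq, seq))
		goto bail;	/* uaddrs changed while the mutex was dropped */
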
>
>
> That's a big change though. For now, how about switching to a per-vq SRCU?
> That is only a little bit more expensive than RCU, and we
> can use synchronize_srcu_expedited.
>
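
Just so we are talking about the same thing, I read the per-vq SRCU
suggestion as roughly the following (sketch; map_srcu is a made-up field,
initialized with init_srcu_struct(&vq->map_srcu) at vq setup):

/* reader */
	int idx = srcu_read_lock(&vq->map_srcu);
	map = srcu_dereference(vq->maps[index], &vq->map_srcu);
	/* access map->addr */
	srcu_read_unlock(&vq->map_srcu, idx);

/* invalidate path */
	rcu_assign_pointer(vq->maps[index], NULL);
	synchronize_srcu_expedited(&vq->map_srcu);	/* still a blocking wait */
	kfree(map);
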

Considering we switch to kfree_rcu(), what's the advantage of per-vq SRCU?

Thanks
