Subject: Re: WARNING in __mmdrop

On 2019/7/21 at 6:02 PM, Michael S. Tsirkin wrote:
> On Sat, Jul 20, 2019 at 03:08:00AM -0700, syzbot wrote:
>> syzbot has bisected this bug to:
>>
>> commit 7f466032dc9e5a61217f22ea34b2df932786bbfc
>> Author: Jason Wang <jasowang@redhat.com>
>> Date: Fri May 24 08:12:18 2019 +0000
>>
>> vhost: access vq metadata through kernel virtual address
>>
>> bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=149a8a20600000
>> start commit: 6d21a41b Add linux-next specific files for 20190718
>> git tree: linux-next
>> final crash: https://syzkaller.appspot.com/x/report.txt?x=169a8a20600000
>> console output: https://syzkaller.appspot.com/x/log.txt?x=129a8a20600000
>> kernel config: https://syzkaller.appspot.com/x/.config?x=3430a151e1452331
>> dashboard link: https://syzkaller.appspot.com/bug?extid=e58112d71f77113ddb7b
>> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=10139e68600000
>>
>> Reported-by: syzbot+e58112d71f77113ddb7b@syzkaller.appspotmail.com
>> Fixes: 7f466032dc9e ("vhost: access vq metadata through kernel virtual
>> address")
>>
>> For information about bisection process see: https://goo.gl/tpsmEJ#bisection
>
> OK I poked at this for a bit, I see several things that
> we need to fix, though I'm not yet sure it's the reason for
> the failures:
>
>
> 1. mmu_notifier_register shouldn't be called from vhost_vring_set_num_addr
> That's just a bad hack,


This is used to avoid holding the lock when checking whether the addresses
overlap. Otherwise we would need to take the spinlock for each invalidation
request, even for VA ranges we are not interested in. That would be very
slow, e.g. during guest boot.
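
Something like the following sketch; vhost_map_range_overlap() and the
exact field names are placeholders, not the real code:

static int vhost_invalidate_range_start(struct mmu_notifier *mn,
					const struct mmu_notifier_range *range)
{
	struct vhost_dev *dev = container_of(mn, struct vhost_dev,
					     mmu_notifier);
	int i;

	for (i = 0; i < dev->nvqs; i++) {
		struct vhost_virtqueue *vq = dev->vqs[i];

		/*
		 * The notifier is only registered while the metadata
		 * addresses are set, so we can compare against the stored
		 * uaddrs without taking vq->mmu_lock. Unrelated ranges
		 * (the common case during guest boot) return early and
		 * never touch the lock.
		 */
		if (!vhost_map_range_overlap(vq, range->start, range->end))
			continue;

		spin_lock(&vq->mmu_lock);
		vq->invalidate_count++;
		/* ... tear down the direct-mapped metadata for this vq ... */
		spin_unlock(&vq->mmu_lock);
	}

	return 0;
}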


> in particular I don't think device
> mutex is taken and so poking at two VQs will corrupt
> memory.


The caller vhost_net_ioctl() (or its scsi and vsock counterparts) holds the
device mutex before calling us.


> So what to do? How about a per vq notifier?
> Of course we also have synchronize_rcu
> in the notifier which is slow and is now going to be called twice.
> I think call_rcu would be more appropriate here.
> We then need rcu_barrier on module unload.


So this seems unnecessary.
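
(For reference, the call_rcu()/rcu_barrier() pattern suggested above would
look roughly like this; the struct layout and names are only illustrative,
not the actual vhost ones.)

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct vhost_map {
	struct rcu_head rcu;	/* embedded so the free can be deferred */
	struct page **pages;
	int npages;
};

static void vhost_map_free_rcu(struct rcu_head *head)
{
	struct vhost_map *map = container_of(head, struct vhost_map, rcu);

	kfree(map->pages);
	kfree(map);
}

static void vhost_map_put(struct vhost_map *map)
{
	/* Defer the free past the grace period instead of blocking
	 * in the notifier with synchronize_rcu(). */
	call_rcu(&map->rcu, vhost_map_free_rcu);
}

static void vhost_example_unload(void)
{
	/* On module unload, wait for all queued callbacks to run. */
	rcu_barrier();
}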


> OTOH if we make pages linear with map then we are good
> with kfree_rcu which is even nicer.


It could be an optimization on top.
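
(Roughly, with the same illustrative struct as above but the page pointers
stored inline, so a single kfree_rcu() frees everything and no separate
callback is needed.)

#include <linux/overflow.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct vhost_map {
	struct rcu_head rcu;
	int npages;
	struct page *pages[];	/* pages stored linearly after the header */
};

static struct vhost_map *vhost_map_alloc(int npages)
{
	struct vhost_map *map = kzalloc(struct_size(map, pages, npages),
					GFP_KERNEL);

	if (map)
		map->npages = npages;
	return map;
}

static void vhost_map_put(struct vhost_map *map)
{
	/* The pages array is part of the allocation, so one deferred
	 * kfree() covers everything. */
	kfree_rcu(map, rcu);
}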


>
> 2. Doesn't map leak after vhost_map_unprefetch?
> And why does it poke at contents of the map?
> No one should use it right?


Yes, it's not hard to fix: just kfree() the map in this function.
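
Something like this, assuming the function currently frees only the pages
array and then resets fields that nobody reads afterwards:

static void vhost_map_unprefetch(struct vhost_map *map)
{
	kfree(map->pages);
	/* The fix: free the map itself as well, and drop the field
	 * resets since no one looks at the map after this point. */
	kfree(map);
}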


>
> 3. notifier unregister happens last in vhost_dev_cleanup,
> but register happens first. This looks wrong to me.


I'm not sure I get the exact issue here.


>
> 4. OK so we use the invalidate count to try and detect that
> some invalidate is in progress.
> I am not 100% sure why do we care.
> Assuming we do, uaddr can change between start and end
> and then the counter can get negative, or generally
> out of sync.


Yes, so the fix is as simple as zeroing invalidate_count after
unregistering the MMU notifier in vhost_vring_set_num_addr().
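
E.g. (just a sketch, using the names from the discussion; the rest of the
function is elided):

static void vhost_vq_reset_invalidate(struct vhost_dev *d,
				      struct vhost_virtqueue *vq)
{
	mmu_notifier_unregister(&d->mmu_notifier, d->mm);

	/*
	 * mmu_notifier_unregister() waits for running callbacks, so no
	 * invalidation can be in flight here; any unbalanced start/end
	 * accounting caused by a uaddr change can simply be forgotten.
	 */
	vq->invalidate_count = 0;

	/* ... update vq->num / the userspace addresses, then re-register ... */
}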


>
> So what to do about all this?
> I am inclined to say let's just drop the uaddr optimization
> for now. E.g. kvm invalidates unconditionally.
> 3 should be fixed independently.


Maybe it's better to first try a fix that keeps the existing uaddr
optimization.

I did spot two other issues:

1) we don't check the return value of mmu_notifier_register() in
vhost_vring_set_num_addr()

2) we try to set up the vq address even if set_vring_addr() fails
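
A rough sketch of handling both; the helper names are placeholders and the
real flow may differ:

static long vhost_vring_set_num_addr(struct vhost_dev *d,
				     struct vhost_virtqueue *vq,
				     unsigned int ioctl, void __user *argp)
{
	long r, err;

	mmu_notifier_unregister(&d->mmu_notifier, d->mm);
	vq->invalidate_count = 0;

	/* placeholder for the existing VHOST_SET_VRING_NUM/ADDR handling */
	r = vhost_vring_update_num_addr(d, vq, ioctl, argp);
	if (r)
		goto out;	/* 2): don't set up the vq maps on failure */

	vhost_setup_vq_maps(d, vq);	/* placeholder for the prefetch step */
out:
	/* 1): don't silently drop a registration failure */
	err = mmu_notifier_register(&d->mmu_notifier, d->mm);
	if (err && !r)
		r = err;
	return r;
}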


For the bug itself, it looks to me like the mm refcount was messed up
since we try to register and unregister the MMU notifier. But I haven't
figured out why; will do more investigation.

Thanks

