From: Dexuan Cui <decui@microsoft.com>
Subject: RE: [PATCH 1/3] PCI: hv: Fix a memory leak in hv_eject_device_work()

> From: Michael Kelley <mikelley@microsoft.com>
> Sent: Wednesday, March 20, 2019 2:38 PM
>
> From: Dexuan Cui <decui@microsoft.com>
> >
> > Just after a device is created in new_pcichild_device(), hpdev->refs is
> > set to 2 (i.e. the initial value of 1 plus the get_pcichild()).
> >
> > When we hot remove the device from the host, in the Linux VM we first call
> > hv_pci_eject_device(), which increases hpdev->refs by get_pcichild() and
> > then schedules the hv_eject_device_work() work item, so hpdev->refs becomes
> > 3 (ignoring the paired get/put_pcichild() calls elsewhere). But currently
> > hv_eject_device_work() only calls put_pcichild() twice, so the 'hpdev'
> > struct can never be freed in put_pcichild(). This patch adds one more
> > put_pcichild() to fix the memory leak.
> >
> > BTW, the device can also be removed when we run "rmmod pci-hyperv". On this
> > path (hv_pci_remove() -> hv_pci_bus_exit() -> hv_pci_devices_present()),
> > hpdev->refs is 2, and we do correctly call put_pcichild() twice in
> > pci_devices_present_work().
>
> Exiting new_pcichild_device() with hpdev->refs set to 2 seems OK to me.
> There is the reference in the hbus->children list, and there is the reference that
> is returned to the caller.
So IMO the "normal" reference count should be 2. :-) Only when a hv_pci_dev
device is about to be destroyed can its reference count drop below 2: it
first drops temporarily to 1 (meaning the hv_pci_dev device has been removed
from hbus->children), and then to zero (meaning kfree(hpdev) is called).
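
For reference, here is a condensed sketch of the relevant part of
new_pcichild_device() (paraphrased; error handling and the host
query/response exchange are omitted):

        hpdev = kzalloc(sizeof(*hpdev), GFP_KERNEL);
        ...
        refcount_set(&hpdev->refs, 1);  /* the initial reference */
        get_pcichild(hpdev);            /* refs == 2: the reference
                                         * returned to the caller */

        spin_lock_irqsave(&hbus->device_list_lock, flags);
        list_add_tail(&hpdev->list_entry, &hbus->children);
        spin_unlock_irqrestore(&hbus->device_list_lock, flags);

        return hpdev;

Note that kzalloc() zeroes the whole struct, so hpdev->reported_missing also
starts out false.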

> But what is strange is that pci_devices_present_work()
> overwrites the reference returned in local variable hpdev without doing a
> put_pcichild().
I suppose you mean:

        /* First, mark all existing children as reported missing. */
        spin_lock_irqsave(&hbus->device_list_lock, flags);
        list_for_each_entry(hpdev, &hbus->children, list_entry) {
                hpdev->reported_missing = true;
        }
        spin_unlock_irqrestore(&hbus->device_list_lock, flags);

This is not strange to me: at the beginning of pci_devices_present_work() we
don't yet know which devices are about to disappear, so we pre-mark every
device as potentially missing; if a device is still on the bus, we later set
its hpdev->reported_missing back to false; and only once we know exactly
which devices are missing do we call put_pcichild() on them. All of this
seems natural to me.
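
To make the whole flow concrete, here is a condensed sketch of
pci_devices_present_work() (paraphrased; the device-matching logic of the
middle phase is abbreviated):

        /* Phase 1: mark all existing children as reported missing
         * (the snippet quoted above).
         */

        /* Phase 2: for each function the host still reports, either
         * clear reported_missing on the matching child, or create a
         * new child via new_pcichild_device().
         */
        for (child_no = 0; child_no < dr->device_count; child_no++) {
                ...
        }

        /* Phase 3: move the children still marked missing to a local
         * list, dropping the hbus->children reference of each.
         */
        spin_lock_irqsave(&hbus->device_list_lock, flags);
        do {
                found = false;
                list_for_each_entry(hpdev, &hbus->children, list_entry) {
                        if (hpdev->reported_missing) {
                                found = true;
                                put_pcichild(hpdev);    /* 1st put */
                                list_move_tail(&hpdev->list_entry,
                                               &removed);
                                break;
                        }
                }
        } while (found);
        spin_unlock_irqrestore(&hbus->device_list_lock, flags);

        /* Finally, drop the last reference of each removed child. */
        while (!list_empty(&removed)) {
                hpdev = list_first_entry(&removed, struct hv_pci_dev,
                                         list_entry);
                list_del(&hpdev->list_entry);
                put_pcichild(hpdev);                    /* 2nd put */
        }

This is where the "call put_pcichild() twice" in the commit message comes
from.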

> It seems like the "normal" reference count should be 1 when the
> child device is not being manipulated, not 2.
What does "not being manipulated" mean?

> The fix would be to add a call to
> put_pcichild() when the return value from new_pcichild_device() is
> overwritten.
In pci_devices_present_work(), we NEVER "overwrite" the "hpdev" returned
from new_pcichild_device(): "hpdev" there is just a loop cursor, and the
"reported_missing" field of the new hpdev is implicitly initialized to false
by the kzalloc() in new_pcichild_device(), so the new device is never moved
to the "missing" list.
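
Concretely, the assignment in question is (condensed):

        if (!found) {
                /* 'hpdev' is reused here purely as a loop cursor; no
                 * live reference is dropped by this assignment. The
                 * new device comes back with refs == 2 and
                 * reported_missing == false.
                 */
                hpdev = new_pcichild_device(hbus, new_desc);
                if (!hpdev)
                        dev_err(&hbus->hdev->device,
                                "couldn't record a child device.\n");
        }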

> Then remove the call to put_pcichild() in pci_devices_present_work() when
> missing children are moved to the local list. The children have been moved
> from one list to another, so there's no need to decrement the reference
> count. Then when everything in the local list is deleted, the reference is
> correctly decremented, presumably freeing the memory.
>
> With this approach, the code in hv_eject_device_work() is correct. There's
> one call to put_pcichild() to reflect removing the child device from the
> hbus->children list, and one call to put_pcichild() to pair with the
> get_pcichild() in hv_pci_eject_device().
Please refer to my replies above. IMO we should fix
hv_eject_device_work() rather than pci_devices_present_work().
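
To spell out the accounting, the tail of hv_eject_device_work() with this
patch applied would look like this (condensed; the comments mapping each
put to its paired reference are mine):

        /* For the get_pcichild() in hv_pci_eject_device() */
        put_pcichild(hpdev);
        /* For the refcount_set(&hpdev->refs, 1) in new_pcichild_device() */
        put_pcichild(hpdev);
        /* For the get_pcichild() in new_pcichild_device(); this is the
         * put added by this patch, which lets the refcount reach zero
         * so that 'hpdev' is finally freed.
         */
        put_pcichild(hpdev);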

Thanks
-- Dexuan

> Your patch works, but to me it leaves the ref count in an unnatural state
> most of the time.
>
> Michael
