Subject: Re: [PATCH v2] virtio_ring: use alloc_pages_node for NUMA-aware allocation
From: Shile Zhang <shile.zhang@linux.alibaba.com>
Date: 2020-08-04
Hi Michael & Bjorn,

Sorry for the ping, but do you have any comments or suggestions on this
patch/issue? (Two rough, untested sketches related to the discussion are
appended below the quoted thread.)

Thanks!

On 2020/7/27 21:10, Shile Zhang wrote:
>
>
> On 2020/7/21 19:28, Shile Zhang wrote:
>>
>>
>> On 2020/7/21 16:18, Michael S. Tsirkin wrote:
>>> On Tue, Jul 21, 2020 at 03:00:13PM +0800, Shile Zhang wrote:
>>>> Use alloc_pages_node() to allocate memory for the vring queue with
>>>> proper NUMA affinity.
>>>>
>>>> Reported-by: kernel test robot <lkp@intel.com>
>>>> Suggested-by: Jiang Liu <liuj97@gmail.com>
>>>> Signed-off-by: Shile Zhang <shile.zhang@linux.alibaba.com>
>>>
>>> Do you observe any performance gains from this patch?
>>
>> Thanks for your comments!
>> Yes, the bandwidth more than doubled (from 30 Gbps to 80 Gbps) with this
>> change in my test environment (8 NUMA nodes), measured with netperf.
>>
>>>
>>> I also wonder why the probe code isn't run on the correct NUMA node.
>>> That would fix a wide class of issues like this without the need to
>>> tweak drivers.
>>
>> Good point, I'll check this, thanks!
>
> Sorry, I have no idea how the probe code could grab the appropriate
> NUMA node.
>
>>
>>>
>>> Bjorn, what do you think? Was this considered?
>
> Hi Bjorn, could you please share any comments on this issue?
> Thanks!
>
>>>
>>>> ---
>>>> Changelog
>>>> v1 -> v2:
>>>> - fixed compile warning reported by LKP.
>>>> ---
>>>>   drivers/virtio/virtio_ring.c | 10 ++++++----
>>>>   1 file changed, 6 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
>>>> index 58b96baa8d48..d38fd6872c8c 100644
>>>> --- a/drivers/virtio/virtio_ring.c
>>>> +++ b/drivers/virtio/virtio_ring.c
>>>> @@ -276,9 +276,11 @@ static void *vring_alloc_queue(struct virtio_device *vdev, size_t size,
>>>>           return dma_alloc_coherent(vdev->dev.parent, size,
>>>>                         dma_handle, flag);
>>>>       } else {
>>>> -        void *queue = alloc_pages_exact(PAGE_ALIGN(size), flag);
>>>> -
>>>> -        if (queue) {
>>>> +        void *queue = NULL;
>>>> +        struct page *page = alloc_pages_node(dev_to_node(vdev->dev.parent),
>>>> +                             flag, get_order(size));
>>>> +        if (page) {
>>>> +            queue = page_address(page);
>>>>               phys_addr_t phys_addr = virt_to_phys(queue);
>>>>               *dma_handle = (dma_addr_t)phys_addr;
>>>> @@ -308,7 +310,7 @@ static void vring_free_queue(struct virtio_device *vdev, size_t size,
>>>>       if (vring_use_dma_api(vdev))
>>>>           dma_free_coherent(vdev->dev.parent, size, queue, dma_handle);
>>>>       else
>>>> -        free_pages_exact(queue, PAGE_ALIGN(size));
>>>> +        free_pages((unsigned long)queue, get_order(size));
>>>>   }
>>>>   /*
>>>> --
>>>> 2.24.0.rc2
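
For what it's worth, here is a hypothetical debug helper (not part of the
patch; vring_check_node() and its placement are made up) that could be used
to confirm that the vring pages really land on the device's NUMA node after
this change:

#include <linux/device.h>
#include <linux/mm.h>
#include <linux/numa.h>

/*
 * Warn if the queue memory ended up on a different NUMA node than the
 * device it serves; purely for debugging/verification.
 */
static void vring_check_node(struct device *dev, void *queue)
{
        int want = dev_to_node(dev);
        int got = page_to_nid(virt_to_page(queue));

        if (want != NUMA_NO_NODE && want != got)
                dev_warn(dev, "vring on node %d, device on node %d\n",
                         got, want);
}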

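Regarding Michael's question above about running the probe code on the
device's NUMA node: as a rough, untested sketch only (the helper below is
made up and not part of this patch), the bus code could bounce probe onto a
CPU local to the device's node, similar to what the PCI core already does
in pci_call_probe() via work_on_cpu(). Node-unaware allocations made during
probe would then land on the right node by default:

#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/numa.h>
#include <linux/topology.h>
#include <linux/workqueue.h>

struct probe_ctx {
        struct device *dev;
        int (*probe)(struct device *dev);
};

static long probe_on_cpu_fn(void *data)
{
        struct probe_ctx *ctx = data;

        return ctx->probe(ctx->dev);
}

/* Run @probe on a CPU local to @dev's NUMA node, if one is online. */
static int probe_on_device_node(struct device *dev,
                                int (*probe)(struct device *dev))
{
        struct probe_ctx ctx = { .dev = dev, .probe = probe };
        int node = dev_to_node(dev);
        unsigned int cpu;

        if (node == NUMA_NO_NODE)
                return probe(dev);      /* no affinity information */

        cpu = cpumask_any_and(cpumask_of_node(node), cpu_online_mask);
        if (cpu >= nr_cpu_ids)
                return probe(dev);      /* node has no online CPU */

        return work_on_cpu(cpu, probe_on_cpu_fn, &ctx);
}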