Subject: Re: [PATCH 01/10] swiotlb-xen: use vmalloc_to_page on vmalloc virt addresses
From: Julien Grall
Date: Fri, 22 May 2020
Hi Stefano,

On 22/05/2020 04:54, Stefano Stabellini wrote:
> On Thu, 21 May 2020, Julien Grall wrote:
>> Hi,
>>
>> On 21/05/2020 00:45, Stefano Stabellini wrote:
>>> From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>
>>> Don't just assume that virt_to_page works on all virtual addresses.
>>> Instead add a is_vmalloc_addr check and use vmalloc_to_page on vmalloc
>>> virt addresses.
>>
>> Can you provide an example where swiotlb is used with vmalloc()?
>
> The issue was reported here happening on the Raspberry Pi 4:
> https://marc.info/?l=xen-devel&m=158862573216800

Thanks, it would be good if the commit message contained a bit more detail.

>
> If you are asking where in the Linux codebase the vmalloc is happening
> specifically, I don't know for sure, my information is limited to the
> stack trace that you see in the link (I don't have a Raspberry Pi 4 yet
> but I shall have one soon.)

Looking at the code, there is a comment in xen_swiotlb_alloc_coherent()
suggesting that xen_alloc_coherent_pages() may return an ioremap'ped
region on Arm. So it feels to me that commit b877ac9815a8 ("xen/swiotlb:
remember having called xen_create_contiguous_region()") has always been
broken on Arm.
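
To make the problem concrete, below is a rough sketch (mine, not taken
from the tree) of why the distinction matters: virt_to_page() only does
linear-map arithmetic, so it returns a bogus struct page for a
vmalloc()/ioremap() address, whereas vmalloc_to_page() walks the page
tables to find the real backing page:

        /*
         * Illustration only: virt_to_page() is effectively
         * pfn_to_page(__pa(vaddr) >> PAGE_SHIFT), which is only valid
         * for addresses in the kernel linear map. For a vmalloc() or
         * ioremap() address that arithmetic yields a meaningless pfn,
         * so the struct page it returns has nothing to do with the
         * backing memory. vmalloc_to_page() walks the page tables
         * instead and finds the real page.
         */
        pg = is_vmalloc_addr(vaddr) ? vmalloc_to_page(vaddr)
                                    : virt_to_page(vaddr);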

As an aside, your commit message also suggests this is an issue for
every virtual address used in swiotlb, yet only the virt_to_page() call
in xen_swiotlb_free_coherent() is modified. Is that intended? If so, I
think you want to update the commit message accordingly.

>
>
>>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>> ---
>>> drivers/xen/swiotlb-xen.c | 5 ++++-
>>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>>> index b6d27762c6f8..a42129cba36e 100644
>>> --- a/drivers/xen/swiotlb-xen.c
>>> +++ b/drivers/xen/swiotlb-xen.c
>>> @@ -335,6 +335,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>>>  	int order = get_order(size);
>>>  	phys_addr_t phys;
>>>  	u64 dma_mask = DMA_BIT_MASK(32);
>>> +	struct page *pg;
>>>
>>>  	if (hwdev && hwdev->coherent_dma_mask)
>>>  		dma_mask = hwdev->coherent_dma_mask;
>>> @@ -346,9 +347,11 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>>>  	/* Convert the size to actually allocated. */
>>>  	size = 1UL << (order + XEN_PAGE_SHIFT);
>>>
>>> +	pg = is_vmalloc_addr(vaddr) ? vmalloc_to_page(vaddr) :
>>> +			virt_to_page(vaddr);
>>
>> Common DMA code seems to protect this check with CONFIG_DMA_REMAP. Is it
>> something we want to do it here as well? Or is there any other condition where
>> vmalloc can happen?
>
> I can see it in dma_direct_free_pages:
>
> 	if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(cpu_addr))
> 		vunmap(cpu_addr);
>
> I wonder why the common DMA code does that. is_vmalloc_addr should work
> regardless of CONFIG_DMA_REMAP. Maybe just for efficiency?

is_vmalloc_addr() doesn't look very expensive (although it is not an
inline function), so I am not sure of the reason behind the guard. It
might be worth asking the author of that config.
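
For completeness, if we did want to mirror the common DMA code, the
check from the patch would presumably end up looking something like the
sketch below (my guess, untested). Though, given that on Arm
xen_alloc_coherent_pages() can hand back an ioremap'ped region
regardless of DMA_REMAP, I am not sure the same guard would be right
for swiotlb-xen:

        /*
         * Sketch only: guard the vmalloc check the same way as
         * dma_direct_free_pages() does.
         */
        if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(vaddr))
                pg = vmalloc_to_page(vaddr);
        else
                pg = virt_to_page(vaddr);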

Cheers,

--
Julien Grall
