Subject: Re: [Resend RFC PATCH V2 10/12] HV/IOMMU: Add Hyper-V dma ops support
On 2021-05-12 17:01, Tianyu Lan wrote:
> Hi Christoph and Konrad:
>      The current swiotlb implementation uses a single bounce buffer pool
> for all devices, and performance testing shows a high overhead to get or
> free a bounce buffer. The swiotlb code protects the bounce buffer data
> with a global spin lock, so several device queues contend for that lock,
> which introduces additional overhead.
>
> From both a performance and a security perspective, each device should
> have a separate swiotlb bounce buffer pool, so this part needs rework.
> I want to check whether this is the right way to resolve the performance
> issues with the swiotlb bounce buffer. Any other suggestions are welcome.

We're already well on the way to factoring out SWIOTLB to allow for just
this sort of more flexible usage, such as per-device bounce pools - see here:

https://lore.kernel.org/linux-iommu/20210510095026.3477496-1-tientzu@chromium.org/T/#t
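
(Roughly the shape that series is heading towards, as a purely illustrative
sketch - the struct, field and helper names below are made up rather than
taken from the actual patches - the point being that each device gets its
own pool and its own lock, so queues on different devices no longer contend
on a single global spinlock:)

struct bounce_pool {
	phys_addr_t	start;		/* physical base of this device's pool */
	unsigned long	nslabs;		/* pool size in IO_TLB_SIZE slots */
	unsigned long	*bitmap;	/* slot allocation state */
	spinlock_t	lock;		/* per-pool lock, not a global one */
};

static phys_addr_t bounce_map(struct device *dev, phys_addr_t orig,
			      size_t size)
{
	struct bounce_pool *pool = dev->dma_bounce_pool; /* hypothetical field */
	unsigned long flags, slot;

	spin_lock_irqsave(&pool->lock, flags);
	slot = pool_alloc_slots(pool, size);	/* hypothetical helper */
	spin_unlock_irqrestore(&pool->lock, flags);
	if (slot == ULONG_MAX)
		return (phys_addr_t)DMA_MAPPING_ERROR;

	/* ...copy the original buffer in, record orig for the unmap path... */
	return pool->start + slot * IO_TLB_SIZE;
}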

FWIW this looks to have an awful lot in common with what's going to be
needed for Android's protected KVM and Arm's Confidential Compute
Architecture, so we'll all be better off by doing it right. I'm getting
the feeling that set_memory_decrypted() wants to grow into a more
general abstraction that can return an alias at a different GPA if
necessary.
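
(Purely illustrative, with made-up names - no such interface exists today -
but something along these lines: a helper that both marks the memory as
shared/decrypted and tells the caller which address to use from now on,
which on Hyper-V SNP would be the alias above ms_hyperv.shared_gpa_boundary
rather than the original mapping:)

/*
 * Hypothetical sketch only: share a buffer with the host/hypervisor and
 * return the (possibly aliased) kernel address through which it must now
 * be accessed.
 */
void *mem_share_with_host(void *vaddr, size_t size)
{
	int ret;

	ret = set_memory_decrypted((unsigned long)vaddr, PFN_UP(size));
	if (ret)
		return ERR_PTR(ret);

	/*
	 * On SME/SEV the existing mapping remains usable, so return it
	 * unchanged; a Hyper-V SNP backend would instead ioremap the
	 * region above ms_hyperv.shared_gpa_boundary and hand that alias
	 * back to the caller.
	 */
	return vaddr;
}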

Robin.

>
> Thanks.
>
> On 4/14/2021 11:47 PM, Christoph Hellwig wrote:
>>> +static dma_addr_t hyperv_map_page(struct device *dev, struct page *page,
>>> +                  unsigned long offset, size_t size,
>>> +                  enum dma_data_direction dir,
>>> +                  unsigned long attrs)
>>> +{
>>> +    phys_addr_t map, phys = (page_to_pfn(page) << PAGE_SHIFT) + offset;
>>> +
>>> +    if (!hv_is_isolation_supported())
>>> +        return phys;
>>> +
>>> +    map = swiotlb_tbl_map_single(dev, phys, size, HV_HYP_PAGE_SIZE, dir,
>>> +                     attrs);
>>> +    if (map == (phys_addr_t)DMA_MAPPING_ERROR)
>>> +        return DMA_MAPPING_ERROR;
>>> +
>>> +    return map;
>>> +}
>>
>> This largely duplicates what dma-direct + swiotlb does.  Please use
>> force_dma_unencrypted to force bounce buffering and just use the generic
>> code.
>>
>>> +    if (hv_isolation_type_snp()) {
>>> +        ret = hv_set_mem_host_visibility(
>>> +                phys_to_virt(hyperv_io_tlb_start),
>>> +                hyperv_io_tlb_size,
>>> +                VMBUS_PAGE_VISIBLE_READ_WRITE);
>>> +        if (ret)
>>> +            panic("%s: Fail to mark Hyper-v swiotlb buffer visible to host. err=%d\n",
>>> +                  __func__, ret);
>>> +
>>> +        hyperv_io_tlb_remap = ioremap_cache(hyperv_io_tlb_start
>>> +                        + ms_hyperv.shared_gpa_boundary,
>>> +                            hyperv_io_tlb_size);
>>> +        if (!hyperv_io_tlb_remap)
>>> +            panic("%s: Fail to remap io tlb.\n", __func__);
>>> +
>>> +        memset(hyperv_io_tlb_remap, 0x00, hyperv_io_tlb_size);
>>> +        swiotlb_set_bounce_remap(hyperv_io_tlb_remap);
>>
>> And this really needs to go into a common hook where we currently just
>> call set_memory_decrypted so that all the different schemes for these
>> trusted VMs (we have about half a dozen now) can share code rather than
>> reinventing it.
>>
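
For reference, the first suggestion above boils down to something like the
following (a sketch only, assuming the existing x86 force_dma_unencrypted()
is extended for Hyper-V isolation VMs instead of adding custom dma_map_ops):

bool force_dma_unencrypted(struct device *dev)
{
	/*
	 * In an isolation VM the shared (decrypted) bounce buffer is the
	 * only memory the host may touch, so have dma-direct bounce all
	 * streaming DMA through swiotlb rather than installing a custom
	 * set of dma_map_ops.
	 */
	if (hv_is_isolation_supported())
		return true;

	/* ...existing SEV/SME checks... */
	return false;
}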
