Subject: Re: [PATCH v5 00/24] Userspace P2PDMA with O_DIRECT NVMe devices


On 2022-01-31 11:56 a.m., Jonathan Derrick wrote:
>> This is relatively straightforward; however, the one significant
>> problem is that, presently, pci_p2pdma_map_sg() requires a homogeneous
>> SGL with all P2PDMA pages or all regular pages. Enhancing GUP to
>> enforce this rule would require a huge hack that I don't expect
>> would be all that palatable. So patches 3 to 16 add support for
>> P2PDMA pages to dma_map_sg[table]() in the dma-direct and dma-iommu
>> implementations. Thus systems without an IOMMU, as well as those
>> with Intel and AMD IOMMUs, are supported. (Other IOMMU
>> implementations would remain unsupported, notably ARM and PowerPC,
>> but support would be added when they convert to dma-iommu.)
> Am I understanding correctly that an I/O may use a mix of P2PDMA and
> system pages?
> Would that cause inconsistent latencies?

Yes, that certainly would be a possibility. People developing
applications that do such mixing would have to weigh that issue if
latency is something they care about.

But it's counterproductive, and causes other difficulties, for the
kernel to enforce only homogeneous I/O.
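To make the shape of that concrete, here is a rough sketch (not the
actual patch) of how a dma_map_sg()-style implementation can dispatch
per segment once mixed SGLs are allowed. is_pci_p2pdma_page() is the
real helper the kernel provides for recognizing such pages;
p2pdma_bus_addr_of() is a made-up placeholder standing in for the
series' actual P2PDMA address handling:

	#include <linux/device.h>
	#include <linux/dma-mapping.h>
	#include <linux/memremap.h>
	#include <linux/scatterlist.h>

	static int example_map_sg(struct device *dev, struct scatterlist *sgl,
				  int nents, enum dma_data_direction dir)
	{
		struct scatterlist *sg;
		int i;

		for_each_sg(sgl, sg, nents, i) {
			if (is_pci_p2pdma_page(sg_page(sg))) {
				/*
				 * P2PDMA segment: the DMA address is the
				 * peer device's bus address.
				 * p2pdma_bus_addr_of() is a hypothetical
				 * placeholder for the real lookup.
				 */
				sg->dma_address =
					p2pdma_bus_addr_of(sg_page(sg)) +
					sg->offset;
			} else {
				/* Regular system page: map as usual. */
				sg->dma_address = dma_map_page(dev,
						sg_page(sg), sg->offset,
						sg->length, dir);
				if (dma_mapping_error(dev, sg->dma_address))
					goto err;
			}
			sg_dma_len(sg) = sg->length;
		}
		return nents;

	err:
		/* Unwinding of already-mapped segments omitted for brevity. */
		return 0;
	}

The point is just that the homogeneity requirement moves out of the
caller and into the mapping loop, one branch per segment.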

Logan
