Subject: Re: [RFC PATCH rdma-next 08/10] RDMA/rxe: Implement flush execution in responder side
Hi Jason


On 07/01/2022 01:33, Jason Gunthorpe wrote:
> On Thu, Jan 06, 2022 at 06:42:57AM +0000, lizhijian@fujitsu.com wrote:
>>
>> On 06/01/2022 08:28, Jason Gunthorpe wrote:
>>> On Tue, Dec 28, 2021 at 04:07:15PM +0800, Li Zhijian wrote:
>>>> +	while (length > 0) {
>>>> +		va = (u8 *)(uintptr_t)buf->addr + offset;
>>>> +		bytes = buf->size - offset;
>>>> +
>>>> +		if (bytes > length)
>>>> +			bytes = length;
>>>> +
>>>> +		arch_wb_cache_pmem(va, bytes);
>>> So why did we need to check that the va was pmem to call this?
>> Sorry, I didn't get you.
>>
>> I didn't check whether the va is pmem, since only an MR registered with PERSISTENCE (only pmem
>> can register this access flag) can reach here.
> Yes, that is what I mean,

I'm not sure I understand the *check* you mentioned above.

The current code just does something like:

if (!sanity_check())
    return;

if (requested_plt == PERSISTENCE) {
    va = iova_to_va(iova);
    arch_wb_cache_pmem(va, bytes);
    wmb();
} else if (requested_plt == GLOBAL_VISIBILITY) {
    wmb();
}
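
For what it's worth, a minimal compilable sketch of that dispatch; only arch_wb_cache_pmem() and wmb() are the real kernel APIs, the enum values and the helper wrapper here are illustrative, not the patch's actual definitions:

#include <linux/types.h>
#include <linux/libnvdimm.h>	/* arch_wb_cache_pmem() */
#include <asm/barrier.h>	/* wmb() */

/* Illustrative placement-type values; the real patch defines its own. */
enum flush_plt {
	FLUSH_PLT_PERSISTENCE,
	FLUSH_PLT_GLOBAL_VISIBILITY,
};

/*
 * For persistence, write back the CPU cache lines covering the already
 * translated range and then fence; for global visibility, the fence
 * alone is enough.
 */
static void flush_by_placement(enum flush_plt plt, void *va, size_t bytes)
{
	switch (plt) {
	case FLUSH_PLT_PERSISTENCE:
		arch_wb_cache_pmem(va, bytes);
		wmb();
		break;
	case FLUSH_PLT_GLOBAL_VISIBILITY:
		wmb();
		break;
	}
}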


> why did we need to check anything to call
> this API
As the pseudocode above shows, it doesn't *check* anything in the sense you describe, I think.


> - it should work on any CPU mapped address.
Of course, arch_wb_cache_pmem(va, bytes) works on a CPU-mapped address backed by either DIMM or NVDIMM,
but not on an iova that could refer to a user-space address.
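
To make that distinction concrete: the iova from the FLUSH request first has to be resolved through the MR's mapping to CPU-mapped kernel addresses, and only those are handed to arch_wb_cache_pmem(). A minimal sketch of that walk, modeled on the loop in the quoted patch; the phys_buf descriptor and the helper name here are assumptions, not the patch's exact code:

#include <linux/types.h>
#include <linux/libnvdimm.h>	/* arch_wb_cache_pmem() */

/* Hypothetical per-chunk descriptor, mirroring buf->addr / buf->size above. */
struct phys_buf {
	u64 addr;	/* CPU-mapped kernel virtual address of this chunk */
	u64 size;
};

/*
 * Write back 'length' bytes starting 'offset' bytes into the first chunk.
 * arch_wb_cache_pmem() is given the kernel virtual address, never the
 * iova carried on the wire.
 */
static void wb_cache_range(struct phys_buf *buf, u64 offset, u64 length)
{
	while (length > 0) {
		u8 *va = (u8 *)(uintptr_t)buf->addr + offset;
		u64 bytes = buf->size - offset;

		if (bytes > length)
			bytes = length;

		arch_wb_cache_pmem(va, bytes);

		length -= bytes;
		offset = 0;	/* later chunks are flushed from their start */
		buf++;
	}
}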


Thanks
Zhijian



>
> Jason