Subject: Re: [PATCH v4 1/1] Allow non-extending parallel direct writes on the same file.

On 6/7/22 23:25, Vivek Goyal wrote:
> On Sun, Jun 05, 2022 at 12:52:00PM +0530, Dharmendra Singh wrote:
>> From: Dharmendra Singh <dsingh@ddn.com>
>>
>> In general, as of now, in FUSE, direct writes on the same file are
>> serialized over the inode lock, i.e. we hold the inode lock for the
>> full duration of the write request. I could not find a comment in the
>> fuse code which clearly explains why this exclusive lock is taken for
>> direct writes.
>>
>> The following might be the reasons for acquiring the exclusive lock,
>> but the list is not exhaustive:
>> 1) Our guess is that some user-space fuse implementations might be
>> relying on this lock for serialization.
>
> Hi Dharmendra,
>
> I will just try to play devil's advocate. If this is a server-side
> limitation, then it is possible that the fuse client's cached i_size
> is stale. For example, when the filesystem is shared between two
> clients:
>
> - The file size is 4G as seen by client A.
> - Client B truncates the file to 2G.
> - Two processes in client A try to do parallel direct writes; both will
> be able to proceed, and the server will get two parallel writes, both
> extending the file size.
>
> I can see this happening with virtiofs with the cache=auto policy.
>
> IOW, if this is a fuse server-side limitation, how do you ensure that
> the fuse kernel's notion of i_size is not stale?

Hi Vivek,

Sorry, just to be sure, can you explain where exactly a client is located
for you? For us these are multiple daemons linked to libfuse - which you
seem to call the 'server'. Typically these clients are on different
machines, and for us the servers are on the other side of the network -
like an NFS server.

So, while I'm not sure what you mean by 'client', I'm wondering about two
generic questions:

a) I need to double-check, but we were under the assumption that the code
in question is a direct-io code path. I assume cache=auto would use the
page cache and should not be affected? (A rough sketch of my mental model
follows below the two questions.)

b) How would the current lock help for distributed clients? Or for
multiple fuse daemons (what you seem to call the server) on the same
local machine?

For a single vfs mount point served by fuse, truncate should take the
exclusive lock and parallel writes the shared lock - I don't see a
problem here either.
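
Roughly what I have in mind for the write side - just a rough sketch, not
the actual patch code, and fuse_dio_write_needs_exclusive() is a made-up
helper name:

#include <linux/fs.h>
#include <linux/uio.h>

/* Made-up helper: extending (or O_APPEND) writes still need exclusivity. */
static bool fuse_dio_write_needs_exclusive(struct inode *inode,
					   struct kiocb *iocb,
					   struct iov_iter *from)
{
	return (iocb->ki_flags & IOCB_APPEND) ||
	       iocb->ki_pos + iov_iter_count(from) > i_size_read(inode);
}

static ssize_t fuse_dio_write_sketch(struct kiocb *iocb, struct iov_iter *from)
{
	struct inode *inode = file_inode(iocb->ki_filp);
	bool exclusive = fuse_dio_write_needs_exclusive(inode, iocb, from);
	ssize_t res;

	if (exclusive)
		inode_lock(inode);	  /* like truncate and extending writes */
	else
		inode_lock_shared(inode); /* non-extending writes run in parallel */

	res = 0; /* ... send the WRITE request(s) to the fuse daemon ... */

	if (exclusive)
		inode_unlock(inode);
	else
		inode_unlock_shared(inode);

	return res;
}

The i_size check under the shared lock only has to hold against local
truncates, and those take the exclusive lock on the same inode - which is
the single-mount-point case above.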


Thanks,
Bernd



