Subject: Re: Terrible performance of sequential O_DIRECT 4k writes in SAN environment. ~3 times slower than Solaris 10 with the same HBA/Storage.
Hi Christoph,

On 7 January 2014 17:58, Christoph Hellwig <hch@infradead.org> wrote:
> On Mon, Jan 06, 2014 at 09:10:32PM +0100, Jan Kara wrote:
>> This is likely a problem of the Linux direct IO implementation. The thing is
>> that in Linux, when you are doing appending direct IO (i.e., direct IO which
>> changes the file size), the IO is performed synchronously to keep our life
>> simpler with inode size updates etc. (and frankly, our current locking
>> rules make an inode size update on IO completion almost impossible). Since
>> appending direct IO isn't very common, we seem to get away with this
>> simplification just fine...
>
> Shouldn't be too much of a problem, at least for XFS and maybe even ext4,
> with the workqueue-based I/O end handler. For XFS we protect size
> updates with the ilock, which we have already taken in that handler; not
> sure what ext4 would do there.
>

Actually, my initial report (14.67Mb/sec, 3755.41 Requests/sec) was about ext4.
However, I have tried XFS as well; it was a bit slower than ext4 on all
occasions. On the same machine, the results for XFS were:

13.97Mb/sec 3576.27 Requests/sec

/dev/mapper/mpathc on /mnt/xfs type xfs (rw,noatime,nodiratime,nobarrier)
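
For anyone who wants to poke at this without my setup, below is a minimal
libaio sketch of the workload (illustrative only: the file name, queue depth
and sizes are made up, and this is not my actual benchmark). Without
preallocation every 4k write extends the file, so per Jan's explanation each
one is an appending direct IO and is performed synchronously; run it with
"prealloc" to extend i_size first so the same writes are no longer
size-changing:

/*
 * Minimal sketch of the pattern under discussion: sequential 4k
 * O_DIRECT writes submitted through libaio. Without preallocation
 * every write extends i_size (appending direct IO, handled
 * synchronously); with "prealloc" the writes stay inside i_size.
 *
 * Build: gcc -o dio_append dio_append.c -laio
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLOCK 4096
#define QD    32                /* requests submitted per batch */
#define LOOPS 1024              /* 1024 * 32 * 4k = 128MB total */

int main(int argc, char **argv)
{
	int prealloc = argc > 1 && !strcmp(argv[1], "prealloc");
	struct iocb cbs[QD], *cbp[QD];
	struct io_event events[QD];
	io_context_t ctx = 0;
	off_t off = 0;
	void *buf;
	int fd, ret, i, l;

	/* O_DIRECT requires an aligned buffer, length and offset. */
	ret = posix_memalign(&buf, BLOCK, BLOCK);
	if (ret) {
		fprintf(stderr, "posix_memalign: %s\n", strerror(ret));
		return 1;
	}
	memset(buf, 0xab, BLOCK);

	fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Extending i_size up front makes the writes non-appending. */
	if (prealloc) {
		ret = posix_fallocate(fd, 0, (off_t)BLOCK * QD * LOOPS);
		if (ret) {
			fprintf(stderr, "posix_fallocate: %s\n", strerror(ret));
			return 1;
		}
	}

	ret = io_setup(QD, &ctx);
	if (ret < 0) {
		fprintf(stderr, "io_setup: %s\n", strerror(-ret));
		return 1;
	}

	for (l = 0; l < LOOPS; l++) {
		for (i = 0; i < QD; i++) {
			io_prep_pwrite(&cbs[i], fd, buf, BLOCK, off);
			off += BLOCK;
			cbp[i] = &cbs[i];
		}
		/* In the appending case this call itself waits for the IO. */
		if (io_submit(ctx, QD, cbp) != QD) {
			fprintf(stderr, "io_submit failed\n");
			return 1;
		}
		if (io_getevents(ctx, QD, QD, events, NULL) != QD) {
			fprintf(stderr, "io_getevents failed\n");
			return 1;
		}
	}

	io_destroy(ctx);
	close(fd);
	return 0;
}

If the preallocated run is much faster on the same LUN, that would point at
the synchronous appending path rather than at the HBA or storage.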

