Subject: Re: O_DIRECT to md raid 6 is slow
On 20/08/2012 02:01, NeilBrown wrote:
> On Sun, 19 Aug 2012 18:34:28 -0500 Stan Hoeppner <stan@hardwarefreak.com>
> wrote:
>
>
> Since we are trying to set the record straight....
>
>> md/RAID6 must read all devices in an RMW cycle.
>
> md/RAID6 must read all data devices (i.e. not parity devices) which it is not
> going to write to, in an RMW cycle (which the code actually calls RCW -
> reconstruct-write).
>
>>
>> md/RAID5 takes a shortcut for single block writes, and must only read
>> one drive for the RMW cycle.
>
> md/RAID5 uses an alternate mechanism when the number of data blocks that need
> to be written is less than half the number of data blocks in a stripe. In
> this alternate mechanism (which the code calls RMW - read-modify-write),
> md/RAID5 reads all the blocks that it is about to write to, plus the parity
> block. It then computes the new parity and writes it out along with the new
> data.
>

I've learned something here too - I thought this mechanism was only used
for a single block write. Thanks for the correction, Neil.

If you (or anyone else) are ever interested in implementing the same
thing in raid6, the maths is actually not too bad (now that I've thought
about it). I understand the theory here, but I'm afraid I don't have the
kernel programming experience to do the implementation myself.

To change a few data blocks, you need to read in the old data blocks
(Da, Db, etc.) and the old parities (P, Q).

Calculate the xor differences Xa = Da + D'a, Xb = Db + D'b, etc., where
D'a, D'b are the new data blocks and "+" is xor.

The new P parity is P' = P + Xa + Xb +...
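
In rough user-space C, the delta-and-P part is plain xor, something like
this (BLOCK_SIZE and the buffer names are just placeholders, not md's
real structures):

#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE 4096         /* placeholder stripe-unit size */

/* One data block: X = Dold ^ Dnew, then fold it into P: P' = P ^ X */
void update_p(uint8_t *p, const uint8_t *d_old, const uint8_t *d_new,
              uint8_t *x)
{
        size_t i;

        for (i = 0; i < BLOCK_SIZE; i++) {
                x[i] = d_old[i] ^ d_new[i];     /* xor difference Xa */
                p[i] ^= x[i];                   /* P' = P + Xa */
        }
}

(Call it once per data block being rewritten, keeping each X around for
the Q update below.)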

The new Q parity is Q' = Q + (g^a).Xa + (g^b).Xb + ...
The power series there is just the normal raid6 Q-parity calculation
with most entries set to 0, and the Xa, Xb, etc. in the appropriate spots.
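
The Q update then needs multiplication in GF(2^8) with the usual raid6
reduction polynomial 0x11d and generator g = {02}; gf_mul() and
gf_pow2() below are only illustrative helpers, not the kernel's own
routines:

#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE 4096         /* placeholder stripe-unit size */

/* Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1 (0x11d). */
static uint8_t gf_mul(uint8_t a, uint8_t b)
{
        uint8_t prod = 0;

        while (b) {
                if (b & 1)
                        prod ^= a;
                /* a *= x, reducing when the top bit falls out */
                a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1d : 0));
                b >>= 1;
        }
        return prod;
}

/* g^e with g = {02}. */
static uint8_t gf_pow2(unsigned int e)
{
        uint8_t r = 1;

        while (e--)
                r = gf_mul(r, 2);
        return r;
}

/* Fold one delta block X (for data disk index 'slot' in the stripe)
 * into Q:  Q' = Q + (g^slot).X */
void update_q(uint8_t *q, const uint8_t *x, unsigned int slot)
{
        uint8_t coeff = gf_pow2(slot);
        size_t i;

        for (i = 0; i < BLOCK_SIZE; i++)
                q[i] ^= gf_mul(coeff, x[i]);
}

(In practice you'd use log/exp or multiplication tables rather than the
bit-by-bit multiply, but the structure is the same.)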

If the raid6 Q-parity function already has short-cuts for handling zero
entries (I haven't looked, but the mechanism might be in place to
slightly speed up dual-failure recovery), then all the building blocks
are already in place.



