Subject: [PATCH 4.14 092/222] MD: fix invalid stored role for a disk
    4.14-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Shaohua Li <shli@fb.com>

    [ Upstream commit d595567dc4f0c1d90685ec1e2e296e2cad2643ac ]

If we change the number of the array's devices after a device has been removed
from the array and then add that device back, we can see that the device is
added in an active role instead of as the spare we expect.

Please see the link below for details:
    https://marc.info/?l=linux-raid&m=153736982015076&w=2

This happens because we prefer to use the device's previous role, which is
recorded in saved_raid_disk, but we should respect the new value of
conf->raid_disks, since it can change after the device is removed.
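
For illustration only (not part of the patch itself), here is a minimal,
self-contained C sketch of the check being added; struct disk_state and
validate_stored_role() are hypothetical stand-ins for the relevant md_rdev
fields and the logic in super_1_validate():

#include <stdio.h>

/* Hypothetical stand-in for the md_rdev fields involved in the fix. */
struct disk_state {
	int raid_disk;		/* active slot, or -1 when the device is a spare */
	int saved_raid_disk;	/* previously held slot, or -1 if unknown */
};

/*
 * Sketch of the added check: accept the role stored in the superblock only
 * if it still fits within the array's current number of raid disks;
 * otherwise forget it and let the device come back as a spare.
 */
static void validate_stored_role(struct disk_state *rdev, int role,
				 int raid_disks)
{
	rdev->raid_disk = role;
	if (role >= raid_disks) {
		rdev->saved_raid_disk = -1;
		rdev->raid_disk = -1;
	}
}

int main(void)
{
	/* The array was shrunk from 4 to 3 disks while this device was out. */
	struct disk_state rdev = { .raid_disk = -1, .saved_raid_disk = 3 };

	validate_stored_role(&rdev, 3, 3);
	/* Prints "raid_disk=-1 saved_raid_disk=-1": the device is a spare. */
	printf("raid_disk=%d saved_raid_disk=%d\n",
	       rdev.raid_disk, rdev.saved_raid_disk);
	return 0;
}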

    Reported-by: Gioh Kim <gi-oh.kim@profitbricks.com>
    Tested-by: Gioh Kim <gi-oh.kim@profitbricks.com>
    Acked-by: Guoqing Jiang <gqjiang@suse.com>
    Signed-off-by: Shaohua Li <shli@fb.com>
    Signed-off-by: Sasha Levin <sashal@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    ---
    drivers/md/md.c | 4 ++++
    1 file changed, 4 insertions(+)

    --- a/drivers/md/md.c
    +++ b/drivers/md/md.c
    @@ -1766,6 +1766,10 @@ static int super_1_validate(struct mddev
 			} else
 				set_bit(In_sync, &rdev->flags);
 			rdev->raid_disk = role;
+			if (role >= mddev->raid_disks) {
+				rdev->saved_raid_disk = -1;
+				rdev->raid_disk = -1;
+			}
 			break;
 		}
 		if (sb->devflags & WriteMostly1)
