Subject: Re: [Regression] Guest fs corruption with 'block: loop: improve performance via blk-mq'
On 5/18/2015 4:14 PM, Ming Lei wrote:
> On Tue, May 19, 2015 at 2:07 AM, santosh shilimkar
> <santosh.shilimkar@oracle.com> wrote:
>> On 5/17/2015 6:26 PM, Ming Lei wrote:
>>>
>>> Hi Santosh,
>>>
>>> Thanks for your report!
>>>
>>> On Sun, May 17, 2015 at 4:13 AM, santosh shilimkar
>>> <santosh.shilimkar@oracle.com> wrote:
>>>>
>>>> Hi Ming Lei, Jens,
>>>>
>>>> While doing a few tests with recent kernels on Xen Server,
>>>> we saw guest (DOMU) disk images getting corrupted while booting.
>>>> Strangely, the issue is seen so far only with a disk image kept
>>>> on an ocfs2 volume. If the same image is kept on an EXT3/4
>>>> drive, no corruption is observed. The issue is easily
>>>> reproducible. You see a flurry of errors while the guest is
>>>> mounting its file systems.
>>>>
>>>> After some debugging and bisecting, we narrowed the issue down
>>>> to commit "b5dd2f6 block: loop: improve performance via blk-mq".
>>>> With that commit reverted, the corruption goes away.
>>>>
>>>> Some more details on the test setup:
>>>> 1. Upgrade the OVM (Xen) Server kernel (DOM0) to a more recent
>>>> kernel which includes commit b5dd2f6. Boot the server.
>>>> 2. On the DOM0 file system, create an ocfs2 volume.
>>>> 3. Keep the guest (VM) disk image on the ocfs2 volume.
>>>> 4. Boot the guest image. (xm create vm.cfg)
>>>
>>>
>>> I am not familiar with Xen, so is the image accessed via a
>>> loop block device inside the guest VM? Is the loop device
>>> created in DOM0 or in the guest VM?
>>>
>> Guest. The guest disk image is a file, presented to the guest
>> as a block device via a loop device.
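
A minimal userspace sketch of that file-to-block-device binding
through the loop ioctl interface (illustrative only; the image file
name is a placeholder, and this is not the Xen toolstack's actual
code):

/*
 * Sketch: bind an image file to a free loop device via
 * /dev/loop-control and LOOP_SET_FD. Hypothetical image name.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/loop.h>
#include <unistd.h>

int main(void)
{
	int ctl = open("/dev/loop-control", O_RDWR);
	if (ctl < 0) { perror("loop-control"); return 1; }

	/* Ask the kernel for a free loop minor, e.g. /dev/loop0 */
	int devnr = ioctl(ctl, LOOP_CTL_GET_FREE);
	if (devnr < 0) { perror("LOOP_CTL_GET_FREE"); return 1; }

	char path[32];
	snprintf(path, sizeof(path), "/dev/loop%d", devnr);

	int loopfd = open(path, O_RDWR);
	int imgfd = open("guest-disk.img", O_RDWR); /* placeholder */
	if (loopfd < 0 || imgfd < 0) { perror("open"); return 1; }

	/* Bind the image file to the loop device; I/O on /dev/loopN
	 * now goes through the loop driver to the backing file. */
	if (ioctl(loopfd, LOOP_SET_FD, imgfd) < 0) {
		perror("LOOP_SET_FD");
		return 1;
	}
	printf("bound guest-disk.img to %s\n", path);
	return 0;
}
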
>>
>>>> 5. Observe the VM boot console log. The VM itself uses the EXT3
>>>> fs. You will see errors like the ones below, and after this boot
>>>> the file system/disk image gets corrupted and mostly won't boot
>>>> next time.
>>>
>>>
>>> OK, that means the image is corrupted by the VM booting.
>>>
>> Right
>>
>> [...]
>>
>>>>
>>>> From comparing the actual data on the disk vs. what is read by
>>>> the guest VM, we suspect the *reads* are not actually going all
>>>> the way to disk and are possibly returning wrong data, because
>>>> the actual data on the ocfs2 volume at those locations seems to
>>>> be non-zero whereas the guest seems to read it as zero.
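
One way to make that comparison concrete is to read the suspect
block straight from the backing file with O_DIRECT, bypassing the
page cache. A debugging sketch, with the image path and offset as
placeholders:

/*
 * Sketch: read one block from the backing file with O_DIRECT and
 * check whether it is really all zeros on disk. Path and offset
 * are hypothetical.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	const off_t offset = 1048576;	/* placeholder suspect location */
	const size_t len = 4096;	/* one block, O_DIRECT-aligned */

	int fd = open("guest-disk.img", O_RDONLY | O_DIRECT);
	if (fd < 0) { perror("open"); return 1; }

	void *buf;
	if (posix_memalign(&buf, 4096, len)) return 1;

	if (pread(fd, buf, len, offset) != (ssize_t)len) {
		perror("pread");
		return 1;
	}

	/* If the guest read zeros here but this finds data, the read
	 * was short-circuited somewhere above the backing store. */
	const unsigned char *p = buf;
	for (size_t i = 0; i < len; i++) {
		if (p[i]) {
			printf("non-zero byte at offset %lld\n",
			       (long long)(offset + i));
			return 0;
		}
	}
	printf("block is all zeros on disk\n");
	return 0;
}
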
>>>
>>>
>>> Two big changes in the patchset are: 1) use blk-mq request-based
>>> I/O; 2) submit I/O concurrently (write vs. write is still
>>> serialized).
>>>
>>> Could you apply the patch at the link below to see if it fixes
>>> the issue? BTW, this patch only removes concurrent submission.
>>>
>>> http://marc.info/?t=143093223200004&r=1&w=2
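
For context, the concurrent submission being discussed follows the
pattern below: ->queue_rq() hands each request to a work item rather
than submitting it inline, so multiple requests can be in flight at
once. This is a simplified sketch against the v4.0-era blk-mq API,
not the actual drivers/block/loop.c code:

/*
 * Sketch: blk-mq ->queue_rq() deferring each request to a work
 * item, submitted from process context. Illustrative only.
 */
#include <linux/blk-mq.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

struct loop_cmd {			/* per-request driver data */
	struct work_struct work;
	struct request *rq;
};

static void loop_queue_work(struct work_struct *work)
{
	struct loop_cmd *cmd = container_of(work, struct loop_cmd, work);

	/* Perform the read/write against the backing file here;
	 * writes would still be serialized against each other. */
	blk_mq_end_request(cmd->rq, 0);
}

static int loop_queue_rq(struct blk_mq_hw_ctx *hctx,
			 const struct blk_mq_queue_data *bd)
{
	struct loop_cmd *cmd = blk_mq_rq_to_pdu(bd->rq);

	cmd->rq = bd->rq;
	blk_mq_start_request(bd->rq);

	/* Queue instead of submitting inline: this is the concurrent
	 * submission the thread is discussing. */
	INIT_WORK(&cmd->work, loop_queue_work);
	queue_work(system_wq, &cmd->work);

	return BLK_MQ_RQ_QUEUE_OK;
}
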
>>>
>> What kernel is this patch generated against? It doesn't apply
>> against v4.0. Does it need the AIO/DIO conversion patches as well?
>> Do you have the dependent patch set? I can't apply it against v4.0.
>
> My fault, the patch is against the -next tree, but you just need
> two more patches to apply this one:
>
> http://git.kernel.org/cgit/linux/kernel/git/axboe/linux-block.git/commit/?h=for-next&id=f4aa4c7bbac6c4afdd4adccf90898c1a3685396d
>
> http://git.kernel.org/cgit/linux/kernel/git/axboe/linux-block.git/commit/?h=for-next&id=4d4e41aef9429872ea3b105e83426941f7185ab6
>
Our emails crossed. I ported that one patch alone and can confirm
that it fixes the issue. That patch should then go to v4.0 stable
along with the above two.

Regards,
Santosh

