Subject: Q: diskstats for MD-RAID
Hello!

I have a question based on the SLES11 SP1 kernel (2.6.32.59-0.3-default):
In /proc/diskstats the last four fields seem to always be zero for MD devices.

So "%util", "await", and "svctm" from "sar" are always reported as zero.

Is this a bug or a feature? I'm tracing a fairness problem resulting from an I/O bottleneck similar to the one described in kernel bugzilla #12309...

(When the kernel has accumulated about 80 GB of dirty buffers (yes: 80 GB), reads using the same I/O channel seem to starve. The scenario is like this: an FC-SAN disk system with two different types of disks is used to copy from the faster disks to the slower disks using "cp". The files are some tens of GB in size (an Oracle database). After several minutes (while the "cp" is still running), unrelated processes accessing different disk devices through the same I/O channel suffer from bad response times. I guess the kernel does not know that the different disk devices are connected through one I/O channel: if the kernel tries to keep each device busy (specifically, trying to flush dirty buffers for one disk to free up buffers), it actually reduces the I/O rate of the other disks. On top of that, some layers combine 8-sector requests into something like 600-sector requests, which probably also needs additional buffers and hurts response times. The complete I/O stack is: FC-SAN, multipath (RR), MD-RAID1, LVM, ext3.)
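
To confirm the dirty-buffer buildup while the big "cp" runs, a simple sketch like the following can sample the standard Dirty and Writeback keys from /proc/meminfo once per second (again just an illustration, not part of the original mail):

#!/usr/bin/env python
# Sketch: sample Dirty/Writeback from /proc/meminfo once per second
# while the copy runs, to watch the dirty-buffer buildup described
# above. Stop with Ctrl-C.

import time

def meminfo(keys=("Dirty:", "Writeback:")):
    vals = {}
    with open("/proc/meminfo") as f:
        for line in f:
            parts = line.split()
            if parts[0] in keys:
                vals[parts[0].rstrip(":")] = int(parts[1])  # value in kB
    return vals

while True:
    v = meminfo()
    print("Dirty: %8d kB   Writeback: %8d kB" % (v["Dirty"], v["Writeback"]))
    time.sleep(1)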

When replying, please keep me in CC: as I'm not subscribed to the list.

Regards,
Ulrich



