From: Måns Rullgård <mru@users.sf.net>
Subject: Re: Software RAID5 with 2.6.0-test
Date: Thu, 09 Oct 2003 01:44:13 +0200
Torrey Hoffman <thoffman@arnor.net> writes:
> My experience:
>
> I'm running 2.6.0-test6 on a dual Pentium 3 with software RAID-5 across
> 5 disks on two different IDE hardware controllers (VIA and Promise).
> I've got a 224 GB reiserfs partition on that.
>
> After 8 days uptime, it doesn't seem to have blown up yet. However I
> don't stress it heavily - just a nightly rsync or two which does a lot
> of reading and writing, and I export my music collection on it via NFS,
> which is a low level of read activity.
When I tried it, I was running 2.6.0-test4. The RAID5 was four 120 GB Seagate disks on a Highpoint controller. On top of that I had LVM, with an ext3 filesystem. After just minutes, strange things started happening to files. Some had random bits changed in the inode, others were simply trashed. e2fsck complained a great deal.
I went back to 2.4.21, which is working OK. A couple of things bother me, though. In the dmesg output there are many of these:
raid5: switching cache buffer size, 8192 --> 1024
raid5: switching cache buffer size, 1024 --> 4096
raid5: switching cache buffer size, 4096 --> 1024
raid5: switching cache buffer size, 1024 --> 4096
raid5: switching cache buffer size, 4096 --> 1024
ISTR reading somewhere that this has a bad impact on performance, since each switch throws away the stripe cache.
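Out of curiosity, here is a minimal sketch (modern Python, purely illustrative) that tallies those transitions from the dmesg ring buffer, to get a feel for how often the stripe cache is actually being resized:

    import collections
    import re
    import subprocess

    # Count the raid5 "switching cache buffer size" transitions in the
    # kernel log; each one implies the stripe cache was flushed.
    pattern = re.compile(r"raid5: switching cache buffer size, (\d+) --> (\d+)")
    log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    transitions = collections.Counter(pattern.findall(log))
    for (old, new), count in transitions.most_common():
        print(f"{old:>5} -> {new:>5}: {count} switches")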
The other thing I don't like is the performance of the RAID array. The disks individually give ~40 MB/s read speed, but the array measures only 25 MB/s. I was under the impression that RAID5 would give read speeds at least equal to those of the underlying disks. Is this incorrect?
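As a back-of-the-envelope check: for large sequential reads, RAID5 can stream from all members in parallel, skipping the parity blocks, so four 40 MB/s disks should in the ideal case approach 3 x 40 = 120 MB/s. Measuring 25 MB/s, below even a single disk, suggests the bottleneck is elsewhere. A rough sketch of how one might compare the array against its members follows; the device names are made up for the example, and plain reads go through the page cache, so measure against a cold cache:

    import os
    import time

    def read_throughput(device, mbytes=256, chunk=1 << 20):
        # Sequentially read `mbytes` MB from `device` and return MB/s.
        fd = os.open(device, os.O_RDONLY)
        try:
            target = mbytes << 20
            done = 0
            start = time.time()
            while done < target:
                data = os.read(fd, chunk)
                if not data:
                    break
                done += len(data)
            elapsed = time.time() - start
        finally:
            os.close(fd)
        return (done / (1 << 20)) / elapsed

    # Hypothetical device names; substitute the md array and its member
    # disks. Reading block devices usually requires root.
    for dev in ("/dev/md0", "/dev/hde", "/dev/hdg"):
        print(f"{dev}: {read_throughput(dev):.1f} MB/s")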
--
Måns Rullgård
mru@users.sf.net