Subject: Re: [PATCH] robust ext2fs against failure
Date: Tue, 24 Aug 1999 14:59:20 +0900 (JST)
From: MIYOSHI Kazuto <>
Hello, folks.
Hirokazu Takahashi <h-takaha@sss.abk.nec.co.jp> writes:
Hirokazu> > 'Just mount ext2fs with "sync" option, ext2fs will keep
Hirokazu> > consistency of filesystem. Your filesystem would never be broken.'
Hirokazu> > in http://www03.u-page.so-net.ne.jp/da2/h-takaha/userguide.html
As his colleague, I'd like to add some explanation about the "ext2 robust" patch, based on my own experience.
Last month I tried to compile a Linux 2.2.x kernel, but the build failed. Suspecting that I had made some mistake, I checked the error log and eventually found that the NTFS Makefile had turned into a strange binary file. It is worth noting that I had never touched the NTFS Makefile (I do not use NTFS) and that my machine crashes a few times per month.
I cannot say for certain, but perhaps a new directory entry that still contained garbage (on disk) was allocated for some new file. If that garbage accidentally points at the NTFS Makefile's inode and the system crashes before the updated directory entry is written back, the new file and the NTFS Makefile end up doubly hard-linked to the same inode. When someone then writes binary data to the new file, the NTFS Makefile is altered as well...
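To make the "doubly hard-linked" situation concrete, here is a tiny userspace illustration (file names are made up, and it only shows that two directory entries for one inode see the same contents; it does not reproduce the crash itself):

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
	struct stat a, b;
	int fd;

	unlink("Makefile.alias");		/* ignore error on first run */
	fd = open("newfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0 || link("newfile", "Makefile.alias") < 0) {
		perror("setup");
		exit(1);
	}
	write(fd, "garbage\n", 8);		/* written through "newfile"... */
	close(fd);

	stat("newfile", &a);
	stat("Makefile.alias", &b);
	printf("same inode: %s (inode %lu)\n",	/* ...but visible via both names */
	       a.st_ino == b.st_ino ? "yes" : "no",
	       (unsigned long)a.st_ino);
	return 0;
}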
The patch prevents this kind of filesystem corruption by altering the write order in the ext2 filesystem (in the example above, the inode is written synchronously first, and the data block is written asynchronously later; see the URL above for more details).
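The patch itself changes the write ordering inside the kernel, but the same "metadata-safe ordering" idea can be sketched from userspace: make sure the new contents are on disk before the name that exposes them is updated (the file names below are made up for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	const char buf[] = "important data\n";
	int fd;

	fd = open("config.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		exit(1);
	}
	if (write(fd, buf, sizeof(buf) - 1) != (ssize_t)(sizeof(buf) - 1)) {
		perror("write");
		exit(1);
	}
	if (fsync(fd) < 0) {			/* data reaches the disk first... */
		perror("fsync");
		exit(1);
	}
	close(fd);

	if (rename("config.tmp", "config") < 0) {	/* ...then the name points at it */
		perror("rename");
		exit(1);
	}
	return 0;
}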
That said, the robustness still depends on the write order inside the disk unit when power fails, as Hubert Tonneau <hubert.tonneau@heliosam.fr> previously pointed out.
Hirokazu> It seems to be not good idea to configure disk-cache as
Hirokazu> write-through-cache that leads a poor performance.
Hirokazu> Is there any cool solution?
In some critical fields it is acceptable to use write-through cache mode in spite of the performance decrease (I have heard that write-through cache mode on a SCSI disk costs about 5 to 10% in performance).
Hirokazu> I think journalling filesystem like XFS has the same problem.
Hirokazu> Writting order of "journal" and "metadata" on disk-cache to
Hirokazu> real-disk might be changed in some situation, it would cause
Hirokazu> a filesystem corruption which can't be detected at journalled
Hirokazu> recovery after a crash.
As he says, the same kind of robustness can be achieved with journalling, but saving the intent log to disk is not always necessary for robustness, and writing the log to disk synchronously decreases performance.
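To show where that cost comes from, here is a minimal userspace sketch of an intent log (file names and record format are made up); the fsync() on the journal is exactly the synchronous write that hurts performance:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

static void must(int ok, const char *what)
{
	if (!ok) {
		perror(what);
		exit(1);
	}
}

int main(void)
{
	const char record[] = "intent: append 'hello' to data.db\n";
	const char update[] = "hello\n";
	int log, db;

	/* 1. Write the intent record and force it to disk first. */
	log = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
	must(log >= 0, "open journal.log");
	must(write(log, record, sizeof(record) - 1) == (ssize_t)(sizeof(record) - 1),
	     "write journal.log");
	must(fsync(log) == 0, "fsync journal.log");	/* the expensive, ordered step */

	/* 2. Only now perform the real update; after a crash here,
	 *    recovery could redo it from the journal. */
	db = open("data.db", O_WRONLY | O_CREAT | O_APPEND, 0644);
	must(db >= 0, "open data.db");
	must(write(db, update, sizeof(update) - 1) == (ssize_t)(sizeof(update) - 1),
	     "write data.db");

	close(db);
	close(log);
	return 0;
}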
Moreover, needless to say, the important thing is not to preserve data block contents (they are lost in a crash anyway) but to preserve filesystem consistency.
Thank you.