Date: Thu, 23 Nov 2000 13:51:01 +0100
From: Roy Sigurd Karlsbakk <>
Subject: Re: ext2 compression: How about using the Netware principle?
> > - A file is saved to disk
> > - If the file isn't touched (read or written to) within <n> days
> >   (default 14), the file is compressed.
> > - If the file isn't compressed by more than <n> percent (default 20),
> >   the file is flagged "can't compress".
> > - All file compression is done at low-traffic times (default between
> >   00:00 and 06:00 hours)
> > - The first time a file is read or written to within the <n> days
> >   interval mentioned above, the file is addressed using realtime
> >   compression. The second time, the file is decompressed and committed
> >   to disk (uncompressed).
>
> Oops, that means that merely reading a file followed by a powerfail can
> lead to you losing the file. Oops.
Eh... I don't think so:

  READ
  DECOMPRESS
  WRITE
  SYNC
  DELETE OLD COMPRESSED FILE

or something like that; see the sketch below.
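Spelled out, that sequence could look something like the userspace sketch below. decompress_blob() is just a stub standing in for whatever codec is actually used; the interesting part is the ordering: the uncompressed copy is fully written and fsync()ed before rename() atomically replaces the compressed original, so a powerfail at any point leaves at least one complete copy of the file on disk.

/*
 * Sketch of READ / DECOMPRESS / WRITE / SYNC / DELETE, done
 * crash-safely in userspace.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

/* Stub codec: a real version would call into zlib or whatever
 * algorithm the filesystem uses. Here it just copies the input. */
static void *decompress_blob(const void *in, size_t in_len, size_t *out_len)
{
    void *out = malloc(in_len);
    if (out) {
        memcpy(out, in, in_len);
        *out_len = in_len;
    }
    return out;
}

static int decompress_file(const char *path)
{
    char tmp[4096];
    FILE *f;
    long clen;
    void *cbuf, *ubuf;
    size_t ulen;
    int fd;

    /* READ: load the compressed file into memory */
    f = fopen(path, "rb");
    if (!f)
        return -1;
    fseek(f, 0, SEEK_END);
    clen = ftell(f);
    rewind(f);
    cbuf = malloc(clen);
    if (!cbuf || fread(cbuf, 1, clen, f) != (size_t)clen) {
        fclose(f);
        free(cbuf);
        return -1;
    }
    fclose(f);

    /* DECOMPRESS */
    ubuf = decompress_blob(cbuf, clen, &ulen);
    free(cbuf);
    if (!ubuf)
        return -1;

    /* WRITE: to a temp file in the same directory */
    snprintf(tmp, sizeof(tmp), "%s.tmp", path);
    fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0 || write(fd, ubuf, ulen) != (ssize_t)ulen) {
        if (fd >= 0)
            close(fd);
        free(ubuf);
        return -1;
    }
    free(ubuf);

    /* SYNC: make sure the data hits the platter before the switch */
    if (fsync(fd) < 0) {
        close(fd);
        return -1;
    }
    close(fd);

    /* DELETE OLD COMPRESSED FILE: rename() replaces it atomically,
     * so there is never a window with no valid copy on disk. */
    return rename(tmp, path);
}

A real version would also carry over ownership, permissions, and timestamps, and fsync() the directory so the rename itself survives the crash, but the ordering above is what answers the powerfail objection.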
> Besides: you can do this in userspace with existing e2compr. Should take
> less than 2 days to implement.
OK, I've never seen that...
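For what it's worth, the userspace half could be little more than a nightly cron job in the 00:00-06:00 window. A minimal sketch follows, assuming e2compr's per-file compression is enabled via the 'c' attribute (chattr +c); if that assumption is wrong, swap mark_for_compression() for the real mechanism. The "can't compress more than <n> percent" check is left to the patch itself.

/*
 * Aging scan sketch: walk a tree and flag files untouched for
 * AGE_DAYS days for compression. Run it from cron at night.
 */
#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define AGE_DAYS 14

static time_t cutoff;

static void mark_for_compression(const char *path)
{
    /* chattr +c sets the compression flag under e2compr (assumption);
     * the quoting here is naive -- fine for a sketch, not production. */
    char cmd[4200];
    snprintf(cmd, sizeof(cmd), "chattr +c '%s'", path);
    if (system(cmd) != 0)
        fprintf(stderr, "failed to flag %s\n", path);
}

static int visit(const char *path, const struct stat *sb,
                 int type, struct FTW *ftwbuf)
{
    (void)ftwbuf;
    /* Regular files only; check both atime and mtime so a file that
     * was read OR written recently is left alone. */
    if (type == FTW_F && sb->st_atime < cutoff && sb->st_mtime < cutoff)
        mark_for_compression(path);
    return 0; /* keep walking */
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <directory>\n", argv[0]);
        return 1;
    }
    cutoff = time(NULL) - (time_t)AGE_DAYS * 24 * 60 * 60;
    /* FTW_PHYS: don't follow symlinks out of the tree */
    return nftw(argv[1], visit, 16, FTW_PHYS) ? 1 : 0;
}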
> > Results:
> > A minimum of CPU time is wasted compressing/decompressing files.
> > The average server I've been out working with has an effective
> > compression of somewhere between 30 and 100 per cent.
>
> Results: a NOP on machines that are never on at that time, and random
> corruption after a powerfail between 00:00 and 06:00, ...
>
> Pavel
I'm talking about file servers, not merely a bloody PC. On a PC, hard disk space doesn't really cost anything, and you can manually compress whatever you're not using.
roy