Subject: Re: ext3-0.0.2e released

On Thu, 06 Jul 2000, Steve Whitehouse wrote:
> can you explain "phase tree" and/or give a reference ?

I can describe it and give a reference. It's the algorithm used in my Tux2
filesystem that I've been working on since around Christmas, or longer than that
if you count 10 years of thinking about doing it :-)

Phase tree is my name for an algorithm similar to one that has been used in WAFL
and in an OS named Auragen that you can ask Victor Yodaiken about. I
developed the algorithm independently, and it was only on reading a posting from
Victor on linux-kernel from June 1997 that I realized I wasn't the only
one to have thought of it.

My phase tree algorithm is different enough from the other two that I think it's
fair to call it a new algorithm, or at least a close cousin. I'm writing a
white paper on it for presentation at the ALS this fall. An abstract is
available now. I have a working prototype of Tux2 "with some issues" that I'm
now busily porting from 2.2.12 to 2.4.0.test.

I have attached Victor's original email, which makes very good reading. You can
get the Tux2 abstract by emailing me... (I'm very interested in finding out
exactly who is interested.)

Here is a brief description:

Tux2 is based on Ext2. It is not a journalling filesystem, but it does what a
journalling filesystem does: keep your files safe in the event of a processing
interruption. It does that for both data and metadata and, according to my
early benchmarks, should do it at about the same speed as a journalling
filesystem doing metadata-only journalling. We shall see.

Tux2 uses my "phase tree" algorithm (so christened by Alan Cox - I called it
tree/phase but I like his name more). Phase tree imposes a partial ordering on
disk writes to ensure that the filesystem on disk is always updated atomically,
with a single write of the filesystem metaroot. To work properly, the entire
filesystem, including all metadata, must be structured as a tree. Ext2 is not
structured as a tree, so the major difference between Ext2 and Tux2 is that in
Tux2 all the metadata has been rearranged into a tree.

Once you have the filesystem in the form of a tree, you can make a copy of
the metaroot and then, for all updates, apply a "branching" algorithm that works
from the updated block towards the metaroot, doing a copy-on-write at each node
that needs updating. After some number of updates (the exact number is a
performance-tuning parameter) you store the new metaroot on disk, which gives
you an atomic update. So far this is similar to Auragen and WAFL.
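
To make that concrete, here is a rough sketch in C of the branching step. The
structure and names (tnode, branch, the shadow link) are invented for this
description and are not the actual Tux2 code; the point is just that each clean
node on the path from the changed block up to the metaroot gets copied, while
the old tree is left untouched:

#include <stdlib.h>
#include <string.h>

#define FANOUT 16    /* made-up branching factor */

/* Hypothetical in-cache node of the metadata tree. */
struct tnode {
    struct tnode *parent;           /* NULL for the metaroot */
    int slot;                       /* which child pointer of the parent we occupy */
    struct tnode *child[FANOUT];    /* shared with the old tree until branched */
    struct tnode *shadow;           /* this node's copy in the branching phase */
    int dirty;                      /* set once the node belongs to the new phase */
    /* ... block payload ... */
};

/*
 * Copy-on-write "branching": copy the block and every clean ancestor up to
 * the metaroot, relinking each copy into its (also copied) parent.  The old
 * tree is never modified, so the image it describes stays consistent.
 */
static struct tnode *branch(struct tnode *node)
{
    struct tnode *copy, *parent;

    if (node->dirty)                /* already part of the new phase */
        return node;
    if (node->shadow)               /* already copied; reuse the copy */
        return node->shadow;

    copy = malloc(sizeof(*copy));
    memcpy(copy, node, sizeof(*copy));
    copy->dirty = 1;
    copy->shadow = NULL;
    node->shadow = copy;            /* a fuller version would clear this when the phase retires */

    if (node->parent) {
        parent = branch(node->parent);      /* recurse toward the metaroot */
        parent->child[node->slot] = copy;   /* copied parent now points at the copy */
        copy->parent = parent;
    }
    return copy;                    /* callers modify the copy, never the original */
}

The copies share every un-branched subtree with the old tree, so the work per
update is proportional to the depth of the tree, not its size.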

Tux2's phase tree algorithm works almost entirely in cache and is intimately
coupled to the buffer cache system. A third metatree is added to allow
filesystem updating to continue without pause while the second tree is committed
to disk, eventually replacing the first metatree via the above-mentioned
atomic write.

In phase tree terminology, the three trees are called "phases". The three
phases are:
- recorded phase (the consistent filesystem image currently on disk)
- recording phase (diffs for a new consistent image currently being written)
- branching phase (the changing filesystem as applications see it)

Tux2 has its own update daemon that handles its "phase transitions". A phase
transition is the act of committing a new metaroot, wherein the second phase tree
becomes the first, the third becomes the second, and a new metaroot is created.
(This function is analogous to kflushd, though kflushd in its current form
can't possibly know what it would need to know to cause phase transitions at
appropriate times, and in any event, it has no way to initiate one.)
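
For illustration only, the bookkeeping at a phase transition could be sketched
like this. The names are invented, and the synchronous flush is a
simplification; the real design streams the recording phase out to disk while
the branching phase keeps accepting updates:

/* Hypothetical phase bookkeeping, not the actual Tux2 daemon. */
struct metaroot;                        /* root block of one filesystem tree */

struct phase_state {
    struct metaroot *recorded;          /* consistent image already on disk */
    struct metaroot *recording;         /* image in the process of being committed */
    struct metaroot *branching;         /* image that applications are modifying */
};

/* Stand-ins for the real cache and I/O operations. */
void flush_phase_blocks(struct metaroot *root);   /* write the phase's dirty blocks */
void write_metaroot(struct metaroot *root);       /* the single atomic commit write */
struct metaroot *copy_metaroot(struct metaroot *root);

/*
 * Phase transition: once the recording phase is fully on disk, one atomic
 * metaroot write makes it the recorded phase; the branching phase is frozen
 * and becomes the new recording phase; a fresh metaroot copy starts the next
 * branching phase.
 */
void phase_transition(struct phase_state *fs)
{
    flush_phase_blocks(fs->recording);
    write_metaroot(fs->recording);      /* the atomic commit point */

    fs->recorded  = fs->recording;
    fs->recording = fs->branching;
    fs->branching = copy_metaroot(fs->recording);
}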

That's basically it. There are some other wrinkles in Tux2 that serve to
flatten the filesystem tree, reduce the number of block writes required and
keep cpu usage to a reasonable level.

As Alan mentioned, there are many interesting things you can do when you have a
filesystem's metadata in the form of a tree. Tux2 doesn't do most of those
things at this point, since its main purpose in life is to demonstrate the
efficacy of the phase tree algorithm and to allow me to do kernel development
without putting my precious files at risk every time I need to reset the system.

Please forgive me for my delay in responding - I've been busy settling into a
new job (which should be easily guessable from my email address).

> >
> > > You can do the same today on Linux with LVM snapshots. They are only
> > > useful for read-only consistency checking because they are read-only.
> > > (so in case of a problem you'll need to umount and rerun fsck on the
> > > normal device)
> >
> > Also there is ext2 based work going on using phase tree rather than journalling
> > which gives you similar journal properties, in future snapshots and also
> > very nice handling of multipath error recovery.
> >
> > Alan

--
Daniel

From: Victor Yodaiken <yodaiken@chelm.cs.nmt.edu>
Date: Mon, 16 Jun 1997 09:36:42 -0600
Subject: Re: journaling filesystem

On Jun 16, 4:02am, stephen farrell wrote:
Subject: Re: journaling filesystem
> But you cannot guarantee that you'll make it through committing these
> changes, so I don't understand what this gains you... at best it
> sounds like you'd just be minimizing the window in which a crash will
> hurt you.

I haven't thought about this issue for a long time, and perhaps
I've become even more confused, but I don't think you are correct.

You can have a safe commit. The algorithm, as is standard
for DB logging, is to keep two instances of the FS -- one that
is the primary, consistent version, and one that is the current
version. Suppose we have two superblocks, one primary and one current,
each pointing to its own free list and inode table.
All data is written to free blocks, and all freed blocks
are placed on a "zombie list" until commit. The inode table is
used to allow new inodes to be allocated on changes, so that
inode numbers are not associated with fixed disk locations.
The commit can be done in several ways, but to start, assume
that we simply flush all dirty buffers and then write the
current SB and current inode table; then we mark the current
SB as the primary SB -- this write is atomic and flips us from
one consistent FS to another. On recovery or reboot, the
primary is used, the zombie list is copied to the free list, and
the primary is written over current.
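
Sketching the commit in C, with all of the structure and function names made
up (this is from memory, not the Auros source):

/* Two on-disk superblocks, one primary (consistent) and one current. */
struct superblock {
    long free_list_block;       /* this version's free list */
    long inode_table_block;     /* this version's inode table */
};

struct fs_state {
    struct superblock sb[2];
    int primary;                /* index of the consistent version */
    int current;                /* index of the version being built */
};

/* Stand-ins for the real block I/O. */
void flush_dirty_buffers(void);
void write_superblock(int which, const struct superblock *sb);
void write_primary_marker(int which);   /* one-sector, hence atomic, write */

/*
 * Commit: push every block of the current version to disk, then flip the
 * primary marker in a single atomic write.  A crash before the flip leaves
 * the old primary intact; a crash after it leaves the new one.  Freed blocks
 * sit on the zombie list until here, so the old image is never overwritten.
 */
void commit(struct fs_state *fs)
{
    flush_dirty_buffers();
    write_superblock(fs->current, &fs->sb[fs->current]);
    write_primary_marker(fs->current);  /* atomic flip between consistent FS images */

    fs->primary = fs->current;
    fs->current = !fs->current;
    /* the zombie list would be merged into the new current free list here */
}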

The FS I'm remembering was designed as a relatively simple, straightforward
Unix FS, but there are numerous obvious optimizations -- e.g. to
use the scheme per cgroup, or to set aside pairs of cyl-groups, adding
a 3rd level for the in-memory current while current is being flushed ...
One thing this algorithm does not do is work well when the disk is
nearly full, but fault-tolerance must cost something, and disks are
cheap. The algorithm can work well with a simple buffer cache
flushing algorithm, while full journaling seems to require that
buffers get flushed in a particular order.
I think that is a compelling advantage, because the interaction of
VM paging and FS buffer cache use is too complicated as it is.

I'm a little skeptical of the Sprite LFS/Zebra and similar
journaling FS schemes for several reasons.
1. The recovery time seems like it could be quite long.
2. The paradigm of dumping everything in one massive write is suspect
in an HA environment: a 128M buffer flushed at 60ns
per word is about 2 seconds. That's a very long time.

Here's a reference which describes the FS briefly and also talks about
the other virtues of the Auros OS, which died along with my stock
options so many years ago.

@Article{BorgBlauGraetschHerrmannOberle89,
  key     = "Borg et al.",
  author  = "Anita Borg and Wolfgang Blau and Wolfgang Graetsch and
             Ferdinand Herrmann and Wolfgang Oberle",
  title   = "Fault Tolerance Under {UNIX}",
  journal = "ACM Transactions on Computer Systems",
  volume  = "7",
  number  = "1",
  pages   = "1--24",
  month   = feb,
  year    = "1989",
}