Subject: Re: [PATCH] mm: fix page_mkclean_one (was: 2.6.19 file content corruption on ext3)


    On Thu, 21 Dec 2006, Gordon Farquharson wrote:
    >
    > I tested 2.6.19 with a version of Linus's patch that applies cleanly
    > to 2.6.19 (patch appended to the end of this email) on ARM and apt-get
    > failed. It did not segfault this time, but instead got stuck for about
    > 20 to 30 minutes and was accessing the hard drive frequently.

    Ok, there's definitely something screwy going on.

Andrew located at least one bug: we run cancel_dirty_page() too late in
"truncate_complete_page()", which means that do_invalidatepage() ends up
not freeing the page's buffers.

    His patch is appended.
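
In short, the fix just moves the cancel_dirty_page() call ahead of
do_invalidatepage() in truncate_complete_page(). Simplified (reconstructed
from the diff below, not the complete function):

	/* Old order: the dirty bit gets cleared too late */
	if (PagePrivate(page))
		do_invalidatepage(page, 0);	/* try_to_free_buffers() bails out, page still dirty */
	cancel_dirty_page(page, PAGE_CACHE_SIZE);

	/* New order: clear the dirtiness first, so the buffers can actually be freed */
	cancel_dirty_page(page, PAGE_CACHE_SIZE);
	if (PagePrivate(page))
		do_invalidatepage(page, 0);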

    But it sounds like I probably misunderstood something, because I thought
    that Martin had acknowledged that this patch actually worked for him.
    Which sounded very similar to your setup (he has a 32M ARM box too, no?)

    And your failure sounds a lot like one that David Miller is reporting. At
    the same time, my own shared file mmap tests on my own machines obviously
work fine (I lower the dirty-writeback thresholds to force writeback more
    easily, and then mmap a file and write and rewrite to it in memory, and
    truncate it).
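
A minimal sketch of that kind of test, for reference only (the file name,
size and pass count here are made up, this is not the exact program):

/*
 * Sketch of a shared-mmap write/rewrite/truncate test.
 * Illustrative only: "testfile", 1 MiB and 16 passes are arbitrary.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define FILESIZE (1024 * 1024)

int main(void)
{
	char *map;
	int fd, pass;

	fd = open("testfile", O_RDWR | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ftruncate(fd, FILESIZE) < 0) {
		perror("ftruncate");
		return 1;
	}

	map = mmap(NULL, FILESIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Dirty the shared mapping over and over so writeback has to clean it. */
	for (pass = 0; pass < 16; pass++) {
		memset(map, 'a' + pass, FILESIZE);
		sleep(1);	/* let writeback run against the dirty pages */
	}

	/* Truncate while the (possibly still dirty) mapping is in place. */
	if (ftruncate(fd, 0) < 0)
		perror("ftruncate");

	munmap(map, FILESIZE);
	close(fd);
	return 0;
}

Lowering /proc/sys/vm/dirty_ratio, /proc/sys/vm/dirty_background_ratio
and/or /proc/sys/vm/dirty_writeback_centisecs beforehand makes writeback
kick in much sooner while the mapping is being dirtied.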

Maybe it's a mount option issue? I've got data=ordered on my machine, are
you perhaps running with something else?

    Linus

    ---
    commit 3e67c0987d7567ad666641164a153dca9a43b11d
    Author: Andrew Morton <akpm@osdl.org>
    Date: Thu Dec 21 11:00:33 2006 -0800

    [PATCH] truncate: clear page dirtiness before running try_to_free_buffers()

    truncate presently invalidates the dirty page's buffer_heads then shoots down
    the page. But try_to_free_buffers() will now bale out because the page is
    dirty.

    Net effect: the LRU gets filled with dirty pages which have invalidated
    buffer_heads attached. They have no ->mapping and hence cannot be cleaned.
    The machine leaks memory at an enormous rate.

    Fix this by cleaning the page before running try_to_free_buffers(), so
    try_to_free_buffers() can do its work.

Also, remember to do dirty-page-accounting in cancel_dirty_page() so the
machine won't wedge up trying to write non-existent dirty pages.

    Probably still wrong, but now less so.

    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>

    diff --git a/mm/truncate.c b/mm/truncate.c
    index bf9e296..89a5c35 100644
    --- a/mm/truncate.c
    +++ b/mm/truncate.c
@@ -60,11 +60,12 @@ void cancel_dirty_page(struct page *page, unsigned int account_size)
 		WARN_ON(++warncount < 5);
 	}
 
-	if (TestClearPageDirty(page) && account_size)
+	if (TestClearPageDirty(page) && account_size) {
+		dec_zone_page_state(page, NR_FILE_DIRTY);
 		task_io_account_cancelled_write(account_size);
+	}
 }
 
-
 /*
  * If truncate cannot remove the fs-private metadata from the page, the page
  * becomes anonymous.  It will be left on the LRU and may even be mapped into
@@ -81,11 +82,11 @@ truncate_complete_page(struct address_space *mapping, struct page *page)
 	if (page->mapping != mapping)
 		return;
 
+	cancel_dirty_page(page, PAGE_CACHE_SIZE);
+
 	if (PagePrivate(page))
 		do_invalidatepage(page, 0);
 
-	cancel_dirty_page(page, PAGE_CACHE_SIZE);
-
 	ClearPageUptodate(page);
 	ClearPageMappedToDisk(page);
 	remove_from_page_cache(page);