 
    Subject: Re: inode highmem imbalance fix [Re: Bug with shared memory.]
    On Fri, May 24, 2002 at 09:33:41AM +0200, Andrea Arcangeli wrote:
    > Here it is, you should apply it together with vm-35 that you need too
    > for the bh/highmem balance (or on top of 2.4.19pre8aa3). I tested it
    > slightly on uml and it hasn't broken so far, so be careful because it's not
    > very well tested yet. On the lines of what Alexey suggested originally,
    > if goal isn't reached, in a second pass we shrink the cache too, but
    > only if the cache is the only reason for the "pinning" behaviour of the
    > inode. If for example there are dirty blocks of metadata or of data
    > belonging to the inode we call wakeup_bdflush instead and we never shrink the
    > cache in such case. If the inode itself is dirty as well we let the two
    > passes fail so we will schedule the work for keventd. This logic should
    > ensure we never fall into shrinking the cache for no good reason and
    > that we free the cache only for the inodes that we actually go ahead and
    > free. (basically only dirty pages set with SetPageDirty aren't trapped
    > by the logic before calling the invalidate, like ramfs, but that's
    > expected of course, those pages cannot be damaged by the non-destructive
    > invalidate anyway)
    > Comments?
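
    To make the second-pass decision logic above concrete, here is a tiny
    throwaway userspace sketch of that behaviour; the struct, enum, and
    function names are all invented for illustration, and this models the
    description only, it is not the patch itself:

    /*
     * Toy model of the second pass described above: shrink the page
     * cache only when clean cache is the sole thing pinning the inode;
     * dirty buffers go to bdflush; a dirty inode is deferred entirely.
     */
    #include <stdbool.h>
    #include <stdio.h>

    struct inode_model {
            bool inode_dirty;       /* the inode itself is dirty       */
            bool data_dirty;        /* dirty data or metadata buffers  */
            bool has_pagecache;     /* clean page cache pins the inode */
    };

    enum prune_action {
            PRUNE_FREE,             /* nothing pins it: free it now           */
            PRUNE_SHRINK_CACHE,     /* cache is the only pin: invalidate it   */
            PRUNE_WAKE_BDFLUSH,     /* dirty buffers: let bdflush write back  */
            PRUNE_DEFER,            /* dirty inode: leave it for keventd      */
    };

    static enum prune_action second_pass(const struct inode_model *i)
    {
            if (i->inode_dirty)
                    return PRUNE_DEFER;             /* both passes fail   */
            if (i->data_dirty)
                    return PRUNE_WAKE_BDFLUSH;      /* never shrink here  */
            if (i->has_pagecache)
                    return PRUNE_SHRINK_CACHE;      /* cache is only pin  */
            return PRUNE_FREE;
    }

    int main(void)
    {
            struct inode_model cached = { .has_pagecache = true };
            struct inode_model dirty  = { .data_dirty = true };

            printf("clean but cached -> %d (shrink)\n", second_pass(&cached));
            printf("dirty data       -> %d (bdflush)\n", second_pass(&dirty));
            return 0;
    }

    The point of the ordering is the one stated above: dirty state is
    checked before any shrinking, so the cache is only dropped for inodes
    that will actually go ahead and be freed.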

    I haven't had the chance to give this a test run yet, but it looks very
    promising. I have a slight concern about the hold time of the inode_lock
    because prune_icache() already generates some amount of contention,
    but what you've presented appears to be necessary to prevent lethal
    cache bloat, and so that concern is secondary at most. I'll give it a
    test run tomorrow if no one else on-site gets to it first, with the
    proviso that I haven't worked with the workloads that trigger this
    specific KVA exhaustion, so the testing I can do is limited.


    Thanks,
    Bill
