Date: 13 Oct 2003
From: William Lee Irwin III <wli@holomorphy.com>
Subject: Re: [PATCH] Invalidate_inodes can be very slow
William Lee Irwin III <wli@holomorphy.com> wrote:
>> Untested brute-force forward port to 2.6.0-test7-bk4. No idea if the
>> locking is correct or if list movement is done in all needed places.

On Mon, Oct 13, 2003 at 04:08:21AM -0700, Andrew Morton wrote:
> My preferred approach to this would be to move all the global inode lists
> into the superblock so both they and inode_lock become per-sb.
> It is a big change though. And, amazingly, nobody has yet hit sufficient
> inode_lock contention to warrant it.
> Yes, I bet that this search will hurt like hell on a really big box which
> has thousands of auto-expiring NFS mounts. Please test your patch and I'll
> queue it up while we think about it some more.
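A minimal sketch of the per-sb layout being described, assuming the
superblock grows its own inode list and lock and each inode gains a linkage
for it; the names s_inode_lock, s_inodes and i_sb_list below are purely
illustrative, not fields from any particular tree:

#include <linux/fs.h>
#include <linux/list.h>
#include <linux/spinlock.h>

/* Hypothetical per-sb fields (illustrative names only):
 *   struct super_block:  spinlock_t s_inode_lock;
 *                        struct list_head s_inodes;
 *   struct inode:        struct list_head i_sb_list;
 * Every inode would be linked onto its sb's s_inodes list when allocated
 * and unlinked when destroyed. */
static void invalidate_sb_inodes_sketch(struct super_block *sb,
					struct list_head *dispose)
{
	struct inode *inode, *next;

	spin_lock(&sb->s_inode_lock);
	/* Only this filesystem's inodes are on the list, so the walk scales
	 * with the size of the sb being torn down, and no other superblock
	 * ever contends on this lock. */
	list_for_each_entry_safe(inode, next, &sb->s_inodes, i_sb_list) {
		if (atomic_read(&inode->i_count))
			continue;		/* still referenced elsewhere */
		list_move(&inode->i_sb_list, dispose);
	}
	spin_unlock(&sb->s_inode_lock);
}

The price is an extra list_head per inode plus keeping that list consistent
everywhere inodes are created and destroyed, which is presumably the "big
change" part.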

Generally dcache_lock stands in front of inode_lock, even with the
current hashtable RCU code. inode_lock contention has shown up before in
unusual situations I don't remember offhand, though it's generally not
the #1 hotspot. The workloads used for, say, benchmark testing don't
adequately model scenarios like the one you just mentioned (or a number
of other real-life use cases), so a per-sb inode_lock may be worth
considering on a priori grounds, though it would probably be better to
actually set something up to test that scenario.

-- wli
