Subject: Re: slow open() calls and o_nonblock
Sorry for the unthreaded responses; I wasn't cc'd here, so I'm
replying to these based on the mailing-list archives....

Al Viro wrote:

> BTW, why close these suckers all the time? It's not that kernel would
> be unable to hold thousands of open descriptors for your process...
> Hash descriptors by pathname and be done with that; don't bother with
> close unless you decide that you've got too many of them (e.g. when you
> get a hash conflict).

A valid point - I currently keep a pool of 4000 descriptors open and
cycle them out based on inactivity. I hadn't seriously considered
just keeping them all open, because I simply wasn't sure how well
things would go with 100,000 files open: would my backend storage
keep up, and would the kernel mind holding 100,000 descriptors open
over NFS?
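
For concreteness, I read Al's suggestion as something like the sketch
below - simplified and single-threaded, with cached_open() and the
djb2 hash invented here for illustration, and the inactivity eviction
I do today left out:

#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CACHE_SLOTS 131072             /* one slot per expected file */

struct fd_slot {
        char *path;                    /* NULL means the slot is empty */
        int fd;
};

static struct fd_slot cache[CACHE_SLOTS];

static unsigned hash_path(const char *p)
{
        unsigned h = 5381;             /* djb2 string hash */

        while (*p)
                h = h * 33 + (unsigned char)*p++;
        return h % CACHE_SLOTS;
}

/*
 * Return a long-lived descriptor for @path, opening it on a miss.
 * On a hash conflict the old entry is closed and replaced, per Al's
 * suggestion, so the table never holds more than CACHE_SLOTS
 * descriptors.
 */
int cached_open(const char *path)
{
        struct fd_slot *s = &cache[hash_path(path)];
        int fd;

        if (s->path && strcmp(s->path, path) == 0)
                return s->fd;          /* hit: reuse the open descriptor */

        fd = open(path, O_RDONLY);
        if (fd < 0)
                return -1;

        if (s->path) {                 /* conflict: evict the old entry */
                close(s->fd);
                free(s->path);
        }
        s->path = strdup(path);
        s->fd = fd;
        return fd;
}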

The majority of the files would simply be idle - I would be keeping
file handles open for no reason. Pooling lets me substantially cut
the number of opens I need, but I am hesitant to grow the pool to a
substantially larger size. Can anyone shed light on issues that may
come up with a massive pool size, such as 128k?
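
One mechanical detail I can add: a 128k pool won't even open unless
RLIMIT_NOFILE is raised first (the usual default is 1024), so the
pool setup would need something like the sketch below before
populating it; the system-wide ceiling lives in
/proc/sys/fs/file-max:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
        /* rlim_cur and rlim_max both set to 128k descriptors */
        struct rlimit rl = { 131072, 131072 };

        /* Raising the hard limit needs root or CAP_SYS_RESOURCE. */
        if (setrlimit(RLIMIT_NOFILE, &rl) < 0)
                perror("setrlimit(RLIMIT_NOFILE)");
        return 0;
}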

-Aaron