Date: Tue, 31 Oct 2006 18:22:58 +0100
From: Olivier Galibert <>
Subject: Reading a bunch of files as fast as possible
After searching for kinda-keywords in a locked-in-memory index, I get a list of 50-100 files, out of several hundred thousand, that I want to read as fast as possible. I can ensure that the directory structure is hot in the dcache by re-reading it from time to time, but there isn't enough memory to lock the documents themselves in cache. So I'd like to read those 50-100 files, for which I already have the sizes (they're stored in the index) and memory space, as fast as possible (less than 0.1s would be great) from cold-ish cache.
The best way, I think, is to hand all the requests to the system at once and let it sort them optimally at the elevator level. But how can I do that? Can aio do it, or something else?
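(Not from the original mail, just one possible sketch of the idea: issue readahead hints for all the files up front with posix_fadvise(POSIX_FADV_WILLNEED), so the kernel can queue the requests and the I/O scheduler can order them, then read the data in a second pass. The paths, sizes and NFILES below are placeholders for what the index already provides.)

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NFILES 3
static const char *paths[NFILES] = { "doc1", "doc2", "doc3" };   /* from the index */
static const size_t sizes[NFILES] = { 4096, 8192, 16384 };       /* from the index */

int main(void)
{
    int fds[NFILES];
    int i;

    /* Pass 1: open everything and hand all the read-ahead requests to the
     * kernel at once, so the elevator can sort them. */
    for (i = 0; i < NFILES; i++) {
        fds[i] = open(paths[i], O_RDONLY);
        if (fds[i] < 0) { perror(paths[i]); continue; }
        posix_fadvise(fds[i], 0, sizes[i], POSIX_FADV_WILLNEED);
    }

    /* Pass 2: read the contents; by now much of the data should already be
     * in the page cache. */
    for (i = 0; i < NFILES; i++) {
        char *buf;
        ssize_t n;

        if (fds[i] < 0) continue;
        buf = malloc(sizes[i]);
        n = read(fds[i], buf, sizes[i]);
        if (n < 0) perror(paths[i]);
        /* ... hand buf/n to the caller ... */
        free(buf);
        close(fds[i]);
    }
    return 0;
}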
OG.