Date: Thu, 10 Apr 2003 22:01:43 -0700 (PDT)
Subject: Re: proc_misc.c bug
From: "Randy.Dunlap" <>
> On 10 Apr 2003 22:44:17 +0100 Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:
>
> | On Thu, 2003-04-10 at 23:02, David Mosberger wrote:
> | > The workaround below is to allocate 4KB per 8 CPUs. Not really a
> | > solution, but the fundamental problem is that /proc/interrupts
> | > shouldn't use a fixed buffer size in the first place. I suppose
> | > another solution would be to use vmalloc() instead. It all feels
> | > like bandaids though.
> |
> | How about switching to Al's seqfile interface ?
>
> It's already using it, but it uses the simple/single version of it,
> which doesn't automagically extend the output buffer area when
> it's full, so a max size buffer has to be allocated for it
> up front.
>
> I'll look at changing it unless somebody beats me (to it :).
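For reference, the "simple/single" variant being discussed looks roughly like this with David's 4KB-per-8-CPUs sizing applied. This is a sketch from memory, not verbatim proc_misc.c code:

#include <linux/seq_file.h>
#include <linux/slab.h>

/* Sketch: single_open() with a pre-sized buffer, since the single
 * variant never grows the buffer on its own. */
static int interrupts_open(struct inode *inode, struct file *file)
{
	unsigned size = 4096 * (1 + NR_CPUS / 8);	/* David's sizing */
	char *buf = kmalloc(size, GFP_KERNEL);
	int res;

	if (!buf)
		return -ENOMEM;
	res = single_open(file, show_interrupts, NULL);
	if (!res) {
		struct seq_file *m = file->private_data;
		m->buf = buf;	/* hand seq_file the big buffer up front */
		m->size = size;
	} else
		kfree(buf);
	return res;
}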
OK, I've looked at it and concluded that it's not bad the way it is (after David's patch is applied). However, that really depends on whether the static NR_CPUS is well-tuned or not. If it's not tuned, then modifying the output to use the iterative seq_file methods would make sense. Then again, if it's not tuned, someone is (usually) wasting lots of memory elsewhere anyway, so the oversized /proc buffer is the least of it.
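If we did convert it, the iterative form would look something like this. Purely a hypothetical sketch: show_one_irq() is a made-up stand-in for a routine that seq_printf()s one IRQ's per-CPU counts and handler name:

#include <linux/seq_file.h>

/* Hypothetical iterative version: one IRQ per ->show() call, so the
 * seq_file buffer only ever has to hold one line of output. */
static void *int_start(struct seq_file *m, loff_t *pos)
{
	return (*pos < NR_IRQS) ? pos : NULL;
}

static void *int_next(struct seq_file *m, void *v, loff_t *pos)
{
	(*pos)++;
	return (*pos < NR_IRQS) ? pos : NULL;
}

static void int_stop(struct seq_file *m, void *v)
{
	/* nothing to release */
}

static int int_show(struct seq_file *m, void *v)
{
	/* show_one_irq() is hypothetical: print one IRQ's line */
	return show_one_irq(m, *(loff_t *)v);
}

static struct seq_operations int_seq_ops = {
	.start	= int_start,
	.next	= int_next,
	.stop	= int_stop,
	.show	= int_show,
};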
[Adding this:] Or, instead of NR_CPUS, a better approximation might be to add and use something a bit more intelligent about how many interrupts are actually in use. That does weaken my argument below, though.
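For instance, something like this -- purely a sketch, count_active_irqs() doesn't exist anywhere:

#include <linux/irq.h>

/* Hypothetical sizing hint: count IRQs that actually have a handler
 * registered, instead of trusting the static NR_IRQS/NR_CPUS worst case. */
static unsigned int count_active_irqs(void)
{
	unsigned int i, count = 0;

	for (i = 0; i < NR_IRQS; i++)
		if (irq_desc[i].action)		/* handler registered? */
			count++;
	return count;
}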
The reason I say it's not bad the way it is: it does one kmalloc() for the output buffer and can then run quickly and do its job [though it's limited to that original kmalloc buffer size]. If it's modified to use seq_file iteration, it will start out with a kmalloc(PAGE_SIZE) in seq_read(), and when that fills up, a buffer twice that size is kmalloc-ed, and so on, with each earlier buffer being freed (and its partial output thrown away) first.
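Paraphrasing the relevant logic from fs/seq_file.c -- a sketch of the behavior, not the exact code:

#include <linux/seq_file.h>
#include <linux/slab.h>

/* Sketch of seq_read()'s grow-on-overflow behavior. */
static int grow_buffer_sketch(struct seq_file *m)
{
	if (!m->buf) {
		m->size = PAGE_SIZE;		/* first read: one page */
		m->buf = kmalloc(m->size, GFP_KERNEL);
		if (!m->buf)
			return -ENOMEM;
	}
	/* ... ->show() fills m->buf via seq_printf() ... */
	if (m->count == m->size) {		/* output didn't fit */
		kfree(m->buf);			/* old contents discarded */
		m->count = 0;
		m->size <<= 1;			/* double the size */
		m->buf = kmalloc(m->size, GFP_KERNEL);
		if (!m->buf)
			return -ENOMEM;
		/* ... and the ->show() pass is re-run from scratch ... */
	}
	return 0;
}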
Does someone want to disagree now? Go ahead... I'm listening. Maybe the reason to modify it is that NR_CPUS is not a good approximation/hint/clue.
~Randy