Date: Wed, 20 Sep 2006 12:28:10 +0100
From: "Chris Jefferson" <>
Subject: Allocated large blocks of memory on 64 bit linux.
I apologise for this slightly off-topic message, but I believe it can best be answered here, and hope the question may be interesting.
Many libraries have some kind of dynamically sized container (for example, C++'s std::vector). When the container is full, a new block of memory, typically double the original size, is allocated and the old data is copied across.
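For reference, the usual doubling scheme looks roughly like this (a minimal sketch in C, not taken from any particular library; the struct and function names are made up):

	#include <stdlib.h>

	struct vec {
		char   *data;
		size_t  len;	/* bytes in use */
		size_t  cap;	/* bytes allocated */
	};

	/* Make room for 'extra' more bytes, doubling the capacity as needed.
	 * realloc() may have to copy the old contents to a new block. */
	static int vec_reserve(struct vec *v, size_t extra)
	{
		size_t need = v->len + extra;
		size_t cap = v->cap ? v->cap : 16;
		char *p;

		while (cap < need)
			cap *= 2;
		if (cap == v->cap)
			return 0;

		p = realloc(v->data, cap);
		if (!p)
			return -1;
		v->data = p;
		v->cap = cap;
		return 0;
	}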
On a 64-bit architecture, where the address space is massive, a sensible thing to do at first glance might be to start with a buffer of size 4k, and then when this fills up, jump straight to something huge, like 1MB or even 1GB, as the address space is effectively infinite compared to the physical memory. Obviously, most of this buffer may never be written to, since the object may never grow large enough to fill it.
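Concretely, I mean something along these lines (just a sketch; mmap() with MAP_NORESERVE is one way to grab a large chunk of address space up front, though there may be subtleties I am missing):

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t huge = 1UL << 30;	/* 1 GiB of address space */
		char *buf;

		buf = mmap(NULL, huge, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
		if (buf == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		/* Only the pages actually written to should end up backed
		 * by physical memory; the rest is reserved address space. */
		memset(buf, 0, 4096);

		printf("mapped %zu bytes at %p, touched 4096\n", huge, (void *)buf);
		munmap(buf, huge);
		return 0;
	}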
What is the overhead of allocating memory which is never used? Is this a sensible course of action on 64-bit architectures?
Thank you