 
    Subject: Re: [PATCH 1/1] network memory allocator.
    On Wed, Aug 16, 2006 at 09:35:46AM +0400, Evgeniy Polyakov wrote:
    > On Tue, Aug 15, 2006 at 10:21:22PM +0200, Arnd Bergmann (arnd@arndb.de) wrote:
    > > On Monday 14 August 2006 at 13:04, Evgeniy Polyakov wrote:
    > > >  * full per CPU allocation and freeing (objects are never freed on
    > > >    different CPU)
    > >
    > > Many of your data structures are per cpu, but your underlying allocations
    > > are all using regular kzalloc/__get_free_page/__get_free_pages functions.
    > > Shouldn't these be converted to calls to kmalloc_node and alloc_pages_node
    > > in order to get better locality on NUMA systems?
    > >
    > > OTOH, we have recently experimented with doing the dev_alloc_skb calls
    > > with affinity to the NUMA node that holds the actual network adapter, and
    > > got significant improvements on the Cell blade server. That of course
    > > may be a conflicting goal since it would mean having per-cpu per-node
    > > page pools if any CPU is supposed to be able to allocate pages for use
    > > as DMA buffers on any node.
    >
    > Doesn't alloc_pages() automatically switch to alloc_pages_node() or
    > alloc_pages_current()?

    That's not what's wanted: alloc_pages() allocates relative to the CPU
    that happens to call it, not to the device. If you have a slow
    interconnect you always want to allocate memory on the node the network
    device is attached to.
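
    For concreteness, a minimal sketch of the conversion Arnd suggests,
    assuming the driver already knows the NUMA node its NIC is attached to
    (nid below; the rx_ring_* helpers are made-up names for illustration,
    not from Evgeniy's patch):

        #include <linux/gfp.h>
        #include <linux/slab.h>
        #include <linux/string.h>

        /* Before: kzalloc() places the memory on whatever node the
         * calling CPU happens to run on. */
        static void *rx_ring_alloc(size_t size)
        {
                return kzalloc(size, GFP_KERNEL);
        }

        /* After: pin the allocation to the device's home node. */
        static void *rx_ring_alloc_node(size_t size, int nid)
        {
                void *p = kmalloc_node(size, GFP_KERNEL, nid);

                if (p)
                        memset(p, 0, size);
                return p;
        }

        /* Whole pages for DMA buffers get the same treatment. */
        static struct page *rx_pages_alloc_node(int nid, unsigned int order)
        {
                return alloc_pages_node(nid, GFP_KERNEL, order);
        }

    The catch is the one Arnd already points out: if any CPU may have to
    refill buffers for any device, per-CPU pools effectively become
    per-CPU per-node pools.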

