Date: Mon, 17 Dec 2012 11:10:44 +0100
From: Ingo Molnar <>
Subject: Re: [GIT PULL] Automatic NUMA Balancing V11
* Linus Torvalds <torvalds@linux-foundation.org> wrote:
> On Wed, Dec 12, 2012 at 2:03 AM, Mel Gorman <mgorman@suse.de> wrote:
> > This is a pull request for "Automatic NUMA Balancing V11". The list
>
> Ok, guys, I've pulled this and pushed out. There were some
> conflicts with both the VM changes and with the scheduler
> tree, but they were pretty small and looked simple, so I fixed
> them up and hope they all work.
Cool, thanks Linus!
> Has anybody tested the impact on single-node systems? If
> distros enable this by default (and it does have 'default y',
> which is a big no-no for new features - I undid that part)
Yes, that was there for easy testing; leaving it in was an oversight.
> then there will be tons of people running this without
> actually having multiple sockets. Does it gracefully avoid
> pointless overheads for this case?
Yes. We have:
+	bool numabalancing_default = false;
+
+	if (IS_ENABLED(CONFIG_NUMA_BALANCING_DEFAULT_ENABLED))
+		numabalancing_default = true;
+
+	if (nr_node_ids > 1 && !numabalancing_override) {
+		printk(KERN_INFO "Enabling automatic NUMA balancing. "
+			"Configure with numa_balancing= or sysctl");
+		set_numabalancing_state(numabalancing_default);
+	}
The nr_node_ids check makes sure that on single-node systems we don't enable the feature.
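As a self-contained illustration of that decision logic, here is a
compilable userspace mock-up; the stub variables stand in for kernel
state and are assumptions for illustration, not the kernel code itself:

	#include <stdbool.h>
	#include <stdio.h>

	/* Illustrative stand-ins for kernel state. */
	static int nr_node_ids = 1;         /* number of online NUMA nodes */
	static bool numabalancing_override; /* set via numa_balancing= */
	static bool numabalancing_enabled;  /* the feature gate */

	static void set_numabalancing_state(bool enabled)
	{
		numabalancing_enabled = enabled;
	}

	static void check_numabalancing_enable(void)
	{
		/* true iff CONFIG_NUMA_BALANCING_DEFAULT_ENABLED is set */
		bool numabalancing_default = true;

		/* Single-node systems never turn the feature on. */
		if (nr_node_ids > 1 && !numabalancing_override)
			set_numabalancing_state(numabalancing_default);
	}

	int main(void)
	{
		check_numabalancing_enable();
		printf("numabalancing_enabled=%d\n", numabalancing_enabled);
		return 0;
	}

With nr_node_ids == 1 this prints 0; bump it to 2 and the configured
default kicks in.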
At that point it will be some extra passive code in the kernel - last I measured it was around +20K to the kernel image plus a couple of extra branches in a couple of generic paths - but no measurable runtime overhead.
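In the disabled case, the "couple of extra branches" reduce to
something of this shape (a sketch with illustrative names, not the
exact upstream guard):

	#include <stdbool.h>

	static bool numabalancing_enabled;  /* set once at boot, as above */

	static void task_numa_work(void)
	{
		/* NUMA fault sampling / placement work, elided */
	}

	/* With the feature off, each scheduler tick pays only this
	 * one well-predicted branch. */
	static void task_tick_numa_sketch(void)
	{
		if (!numabalancing_enabled)
			return;
		task_numa_work();
	}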
Any other negative impact would come from the preparatory or scalability patches attached to the NUMA balancing feature, and any such impact would be a regression we want to fix.
> Anyway, hopefully we'll have a more real numa balancing for
> 3.9, and this is still considered a reasonable base for that
> work.
We are working on it ;-)
Thanks,
Ingo