From: Andrea Arcangeli <aarcange@redhat.com>
Subject: [PATCH 11/36] autonuma: add page structure fields
Date: 2012-08-22
On 64-bit archs, 20 bytes per page are used for async memory migration
(specific to the knuma_migrated per-node threads), and 4 bytes per page
are used for the thread NUMA false sharing detection logic.

This is the basic implementation improved by later patches.

Later patches move the new fields to a dynamically allocated
page_autonuma of 32 bytes per page (only allocated when booting on NUMA
hardware, unless "noautonuma" is passed on the kernel command line at
boot). Yet another later patch introduces the autonuma_list and
reduces the size of page_autonuma from 32 to 12 bytes.
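
One plausible 32-byte layout for such a page_autonuma on 64-bit (purely
illustrative, not lifted from the later patches) is the three fields
added here plus a back-pointer to the page:

	/* hypothetical sketch, assuming the kernel's list_head and page types */
	struct page_autonuma {
		int autonuma_migrate_nid;		/*  4 bytes */
		int autonuma_last_nid;			/*  4 bytes */
		struct list_head autonuma_migrate_node;	/* 16 bytes */
		struct page *page;			/*  8 byte back-pointer */
	};						/* total: 32 bytes */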

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 include/linux/mm_types.h |   26 ++++++++++++++++++++++++++
 mm/page_alloc.c          |    4 ++++
 2 files changed, 30 insertions(+), 0 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index c80101c..3f10fef 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -152,6 +152,32 @@ struct page {
 		struct page *first_page;	/* Compound tail pages */
 	};
 
+#ifdef CONFIG_AUTONUMA
+	/*
+	 * FIXME: move to pgdat section along with the memcg and allocate
+	 * at runtime only in presence of a numa system.
+	 */
+	/*
+	 * To modify autonuma_last_nid locklessly, the architecture
+	 * needs SMP atomic granularity < sizeof(long); not all archs
+	 * have that, notably some ancient alpha (but none of those
+	 * should run in NUMA systems). Archs without it require
+	 * autonuma_last_nid to be a long.
+	 */
+#ifdef CONFIG_64BIT
+	int autonuma_migrate_nid;
+	int autonuma_last_nid;
+#else
+#if MAX_NUMNODES > 32767
+#error "too many nodes"
+#endif
+	/* FIXME: remember to check the updates are atomic */
+	short autonuma_migrate_nid;
+	short autonuma_last_nid;
+#endif
+	struct list_head autonuma_migrate_node;
+#endif
+
 	/*
 	 * On machines where all RAM is mapped into kernel address space,
 	 * we can simply calculate the virtual address. On machines with
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ff61443..a6337b3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3787,6 +3787,10 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
 
 		INIT_LIST_HEAD(&page->lru);
+#ifdef CONFIG_AUTONUMA
+		page->autonuma_last_nid = -1;
+		page->autonuma_migrate_nid = -1;
+#endif
 #ifdef WANT_PAGE_VIRTUAL
 		/* The shift won't overflow because ZONE_NORMAL is below 4G. */
 		if (!is_highmem_idx(zone))
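
Not part of this patch, but for context: later patches in the series
consult autonuma_last_nid from the NUMA hinting page fault path. A
hedged sketch of that check (only the field names come from this patch;
the helper and its exact policy are hypothetical):

	/*
	 * Only treat the access as non-shared, and hence worth migrating
	 * for, if the same node faults the page twice in a row. -1 (the
	 * memmap_init_zone() default above) means no node recorded yet.
	 */
	static bool last_nid_set(struct page *page, int this_nid)
	{
		bool ret = true;
		int last_nid = ACCESS_ONCE(page->autonuma_last_nid);
		if (last_nid >= 0 && last_nid != this_nid)
			ret = false;	/* node changed: looks falsely shared */
		if (last_nid != this_nid)
			ACCESS_ONCE(page->autonuma_last_nid) = this_nid;
		return ret;
	}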
