Date: Mon, 27 Oct 2008
Subject: [PATCH] x86: remove wrong -1 in calling init_memory_mapping
From: Shaohua Li <shaohua.li@intel.com>

Impact: make memory hotplug get the last page mapped.

Shaohua Li found:
Round the address up to a page, otherwise the last page isn't mapped.

No, I just did some experiments with memory hotplug on a desktop, and this bug
triggered a crash in my test.
Yinghai's suggestion also fixed the bug; I just wanted a safer method. Anyway, either approach is OK with me.

So actually we don't need to round it up;
just remove that extra -1.
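
To see why the extra -1 drops a page: init_memory_mapping() treats its second
argument as an exclusive end address, so passing start + size - 1 shrinks the
range by one byte and the final page falls out of the mapping. Below is a
minimal userspace sketch of that arithmetic (not kernel code; the helper and
the example addresses are made up for illustration only):

#include <stdio.h>

#define PAGE_SHIFT	12

/* hypothetical stand-in for the exclusive-end page arithmetic */
static unsigned long pages_mapped(unsigned long start, unsigned long end)
{
	return (end - start) >> PAGE_SHIFT;
}

int main(void)
{
	unsigned long start = 0x40000000UL;	/* 1 GiB: example hotplug base */
	unsigned long size  = 128UL << 20;	/* 128 MiB region being added  */

	/* buggy call: end = start + size - 1 loses the last page */
	printf("with -1 : %lu pages\n", pages_mapped(start, start + size - 1));
	/* fixed call: end = start + size covers the whole region */
	printf("without : %lu pages\n", pages_mapped(start, start + size));
	return 0;
}

This prints 32767 pages with the -1 and 32768 without it, i.e. the last page
of the hot-added region is exactly what goes missing.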

Signed-off-by: Yinghai <yinghai@kernel.org>

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index d59e4c9..2884b17 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -837,7 +837,7 @@ int arch_add_memory(int nid, u64 start, u64 size)
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 	int ret;
 
-	last_mapped_pfn = init_memory_mapping(start, start + size-1);
+	last_mapped_pfn = init_memory_mapping(start, start + size);
 	if (last_mapped_pfn > max_pfn_mapped)
 		max_pfn_mapped = last_mapped_pfn;

