Subject: [patch v2] hugetlb: correct page offset index for sharing pmd
From: Hillf Danton <dhillf@gmail.com>
The page offset index used to scan the prio tree is computed incorrectly:
the scan needs the huge page offset, so compute it with the well-defined
routine linear_hugepage_index() instead.
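
For context, the index the function computes by hand today is a base-page
linear index, while the lookup here wants the huge-page offset. A minimal
sketch of the two computations (the expansion of linear_hugepage_index()
via hstate_vma()/huge_page_shift()/huge_page_order() is paraphrased from
mm/hugetlb.c, not quoted verbatim):

	/* base-page index, as huge_pmd_share() computes it today */
	pgoff_t idx = ((addr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;

	/* huge-page index, effectively what linear_hugepage_index(vma, addr) returns */
	struct hstate *h = hstate_vma(vma);
	pgoff_t hp_idx = ((addr - vma->vm_start) >> huge_page_shift(h)) +
			 (vma->vm_pgoff >> huge_page_order(h));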

Changes from v1:
o s/linear_page_index/linear_hugepage_index/ for clearer code
o add a hp_idx variable to keep the diff small


Signed-off-by: Hillf Danton <dhillf@gmail.com>
---

--- a/arch/x86/mm/hugetlbpage.c Fri Aug 3 20:34:58 2012
+++ b/arch/x86/mm/hugetlbpage.c Fri Aug 3 20:40:16 2012
@@ -62,6 +62,7 @@ static void huge_pmd_share(struct mm_str
 {
 	struct vm_area_struct *vma = find_vma(mm, addr);
 	struct address_space *mapping = vma->vm_file->f_mapping;
+	pgoff_t hp_idx;
 	pgoff_t idx = ((addr - vma->vm_start) >> PAGE_SHIFT) +
 			vma->vm_pgoff;
 	struct prio_tree_iter iter;
@@ -72,8 +73,10 @@ static void huge_pmd_share(struct mm_str
 	if (!vma_shareable(vma, addr))
 		return;
 
+	hp_idx = linear_hugepage_index(vma, addr);
+
 	mutex_lock(&mapping->i_mmap_mutex);
-	vma_prio_tree_foreach(svma, &iter, &mapping->i_mmap, idx, idx) {
+	vma_prio_tree_foreach(svma, &iter, &mapping->i_mmap, hp_idx, hp_idx) {
 		if (svma == vma)
 			continue;
 
--
