Date: 15 Nov 2012
From: Kirill A. Shutemov
Subject: Re: [PATCH v5 05/11] thp: change_huge_pmd(): keep huge zero page write-protected
On Wed, Nov 14, 2012 at 03:12:54PM -0800, David Rientjes wrote:
> On Wed, 7 Nov 2012, Kirill A. Shutemov wrote:
>
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index d767a7c..05490b3 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -1259,6 +1259,8 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> >  		pmd_t entry;
> >  		entry = pmdp_get_and_clear(mm, addr, pmd);
> >  		entry = pmd_modify(entry, newprot);
> > +		if (is_huge_zero_pmd(entry))
> > +			entry = pmd_wrprotect(entry);
> >  		set_pmd_at(mm, addr, pmd, entry);
> >  		spin_unlock(&vma->vm_mm->page_table_lock);
> >  		ret = 1;
>
> Nack, this should be handled in pmd_modify().

I disagree. It means we will have to enable the huge zero page (hzp) per arch. Bad idea.

What's wrong with the check?

--
Kirill A. Shutemov
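
For readers weighing the two placements: pmd_modify() is an arch-provided helper (each architecture defines its own copy in its pgtable headers), so moving the zero-page handling there would mean every architecture that enables the huge zero page carries an equivalent hunk. The sketch below is purely illustrative; it assumes x86's pmd_modify() of roughly that era, and the placement of the is_huge_zero_pmd() check is this note's assumption, not part of either patch.

/*
 * Illustrative only: the Nack'd alternative, with huge-zero-page
 * write protection folded into x86's pmd_modify()
 * (arch/x86/include/asm/pgtable.h, ~v3.7).  Every other architecture
 * enabling THP would need a matching change.
 */
static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
{
	pmdval_t val = pmd_val(pmd);

	val &= _HPAGE_CHG_MASK;
	val |= massage_pgprot(newprot) & ~(pmdval_t)_HPAGE_CHG_MASK;

	/*
	 * Hypothetical arch-level check: keep the huge zero page
	 * read-only regardless of newprot.  is_huge_zero_pmd() is
	 * introduced in mm/ code earlier in the series, so it would
	 * additionally have to be made visible to arch headers.
	 */
	if (is_huge_zero_pmd(__pmd(val)))
		return pmd_wrprotect(__pmd(val));

	return __pmd(val);
}

Keeping the two-line check in change_huge_pmd(), as in the hunk quoted above, leaves the huge zero page an mm/ implementation detail and the arch helpers untouched.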