Subject: Re: [PATCH 10/26] x86, pkeys: notify userspace about protection key faults
On 09/25/2015 12:11 AM, Ingo Molnar wrote:
>>> Btw., how does pkey support interact with hugepages?
>>
>> Surprisingly little. I've made sure that everything works with huge pages and
>> that the (huge) PTEs and VMAs get set up correctly, but I'm not sure I had to
>> touch the huge page code at all. I have test code to ensure that it works the
>> same as with small pages, but everything worked pretty naturally.
> Yeah, so the reason I'm asking about expectations is that this code:
>
> +	follow_ret = follow_pte(tsk->mm, address, &ptep, &ptl);
> +	if (!follow_ret) {
> +		/*
> +		 * On a successful follow, make sure to
> +		 * drop the lock.
> +		 */
> +		pte = *ptep;
> +		pte_unmap_unlock(ptep, ptl);
> +		ret = pte_pkey(pte);
>
> is visibly hugepage-unsafe: if a vma is hugepage mapped, there are no ptes, only
> pmds - and the protection key index lives in the pmd. We don't seem to recover
> that information properly.

You got me on this one. I assumed that follow_pte() handled huge pages.
It does not.

But, the code still worked. Since follow_pte() fails for all huge
pages, it just falls back to pulling the protection key out of the VMA,
which _does_ work for huge pages.
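For reference, the pte-based lookup was never more than a bit extraction.  A
rough sketch of what a pte_pkey()-style helper boils down to, assuming the x86
protection-key bits sit at PTE bits 59-62; the macro names here are
illustrative rather than necessarily the ones in the series:

	/* sketch only: pkey occupies PTE bits 59-62 on x86 */
	#define _PAGE_BIT_PKEY_BIT0	59
	#define _PAGE_PKEY_MASK		(0xfUL << _PAGE_BIT_PKEY_BIT0)

	static inline u16 pte_pkey(pte_t pte)
	{
		return (pte_val(pte) & _PAGE_PKEY_MASK) >> _PAGE_BIT_PKEY_BIT0;
	}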

I've actually removed the PTE walking and now just use the VMA directly. I
don't see a ton of additional value in walking the page tables when we can
get what we need from the VMA.
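For comparison, the VMA-based lookup is equally trivial.  A sketch of what a
vma_pkey() helper can look like, assuming VM_PKEY_BIT0..3 are contiguous
vm_flags bits and VM_PKEY_SHIFT is the position of the low one (treat the
names as illustrative):

	static inline int vma_pkey(struct vm_area_struct *vma)
	{
		unsigned long pkey_mask = VM_PKEY_BIT0 | VM_PKEY_BIT1 |
					  VM_PKEY_BIT2 | VM_PKEY_BIT3;

		return (vma->vm_flags & pkey_mask) >> VM_PKEY_SHIFT;
	}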

