Subject: Re: [PATCH v3 2/2] mm, drm/ttm: Fix vm page protection handling
On Fri 06-12-19 09:24:26, Thomas Hellström (VMware) wrote:
[...]
> @@ -283,11 +282,26 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
> pfn = page_to_pfn(page);
> }
>
> + /*
> + * Note that the value of @prot at this point may differ from
> + * the value of @vma->vm_page_prot in the caching- and
> + * encryption bits. This is because the exact location of the
> + * data may not be known at mmap() time and may also change
> + * at arbitrary times while the data is mmap'ed.
> + * This is ok as long as @vma->vm_page_prot is not used by
> + * the core vm to set caching- and encryption bits.
> + * This is ensured by core vm using pte_modify() to modify
> + * page table entry protection bits (that function preserves
> + * old caching- and encryption bits), and the @fault
> + * callback being the only function that creates new
> + * page table entries.
> + */

While this is a very valuable piece of information, I believe we need to
document it in the generic code where everybody will find it.
vmf_insert_mixed_prot() sounds like a good place to me, and that way we
are also explicit about VM_MIXEDMAP. A reference from vm_page_prot to
this function would be really helpful as well.
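
Something along the lines of the rough sketch below would work for me.
This is only a draft against the vmf_insert_mixed_prot() prototype added
in the first patch of this series (vma, addr, pfn, pgprot), so please
adjust the wording and the exact prototype as needed:

/**
 * vmf_insert_mixed_prot - insert a single pfn into a VM_MIXEDMAP vma
 *                         with an explicit pgprot
 * @vma: user vma to map to
 * @addr: target user address of this page
 * @pfn: source kernel pfn
 * @pgprot: pgprot flags for the inserted page
 *
 * This is exactly like vmf_insert_mixed(), except that it allows drivers
 * to override the pgprot on a per-page basis.
 *
 * Typically this is used by drivers that need caching- and encryption
 * bits different from those of @vma->vm_page_prot, because the exact
 * location of the data may not be known at mmap() time and may also
 * change while the data is mmap'ed. This is ok as long as
 * @vma->vm_page_prot is only consulted when new page table entries are
 * created from the ->fault() callback: the core vm modifies existing
 * entries with pte_modify(), which preserves the old caching- and
 * encryption bits.
 *
 * Return: vm_fault_t value.
 */
vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma,
				 unsigned long addr, pfn_t pfn,
				 pgprot_t pgprot);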

Thanks!

--
Michal Hocko
SUSE Labs
