Subject: [PATCH v1 2/2] mm/hugetlb: support write-faults in shared mappings
Let's add a safety net if we ever get (again) a write-fault on an R/O-mapped
page in a shared mapping, in which case we simply have to map the
page writable.
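
For illustration only (this is not the test used for this patch), a minimal
reproducer sketch of how user space can write-fault on an R/O-mapped page in
a shared hugetlb mapping via uffd-wp: write-protect and then unprotect a
populated range, then write to it. It assumes a 2 MiB default hugetlb page
size, preallocated hugetlb pages, uffd-wp support for hugetlb, and
permission to use userfaultfd:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

#define SIZE (2 * 1024 * 1024UL)	/* assumes 2 MiB default hugetlb size */

int main(void)
{
	struct uffdio_api api = { .api = UFFD_API };
	struct uffdio_register reg;
	struct uffdio_writeprotect wp;
	char *map;
	int uffd;

	uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
	if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api)) {
		perror("userfaultfd");
		return 1;
	}

	/* Shared hugetlb mapping; needs preallocated hugetlb pages. */
	map = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
		   MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(map, 0, SIZE);	/* populate the hugetlb page */

	reg.range.start = (unsigned long)map;
	reg.range.len = SIZE;
	reg.mode = UFFDIO_REGISTER_MODE_WP;
	if (ioctl(uffd, UFFDIO_REGISTER, &reg)) {
		perror("UFFDIO_REGISTER");
		return 1;
	}

	/* Write-protect the range ... */
	wp.range.start = (unsigned long)map;
	wp.range.len = SIZE;
	wp.mode = UFFDIO_WRITEPROTECT_MODE_WP;
	if (ioctl(uffd, UFFDIO_WRITEPROTECT, &wp)) {
		perror("UFFDIO_WRITEPROTECT");
		return 1;
	}

	/* ... and unprotect it again: the PTE may stay R/O. */
	wp.mode = 0;
	if (ioctl(uffd, UFFDIO_WRITEPROTECT, &wp)) {
		perror("UFFDIO_WRITEPROTECT");
		return 1;
	}

	/* Write-fault on the R/O-mapped page in the shared mapping. */
	map[0] = 1;
	printf("write went through\n");
	return 0;
}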

The VM_MAYSHARE handling in hugetlb_fault() for FAULT_FLAG_WRITE
indicates that this case was at least envisioned, but it could never have
worked as expected. This theoretically paves the way for softdirty tracking
support in hugetlb.

Tested without the fix for softdirty tracking applied.
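
As background (nothing in this patch adds softdirty support): user space
drives softdirty tracking through the documented /proc interface, clearing
the bits via /proc/self/clear_refs and reading them back from
/proc/self/pagemap (bit 55). A minimal sketch with made-up helper names:

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Clear all soft-dirty bits of the process; writing "4" per proc(5). */
static void clear_soft_dirty(void)
{
	int fd = open("/proc/self/clear_refs", O_WRONLY);

	if (fd >= 0) {
		write(fd, "4", 1);
		close(fd);
	}
}

/* Soft-dirty state (pagemap bit 55) of the page at addr, -1 on error. */
static int soft_dirty(void *addr)
{
	const long pagesize = sysconf(_SC_PAGESIZE);
	uint64_t entry = 0;
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0)
		return -1;
	/* pagemap holds one 8-byte entry per virtual page. */
	pread(fd, &entry, sizeof(entry),
	      (uintptr_t)addr / pagesize * sizeof(entry));
	close(fd);
	return !!(entry & (1ULL << 55));
}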

Note that there is no need to do any kind of reservation in hugetlb_fault()
in this case ... because we already have a hugetlb page mapped R/O
that we will simply map writable, and we are not dealing with COW/unsharing.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/hugetlb.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a18c071c294e..bbab7aa9d8f8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5233,6 +5233,16 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 	VM_BUG_ON(unshare && (flags & FOLL_WRITE));
 	VM_BUG_ON(!unshare && !(flags & FOLL_WRITE));
 
+	/* Let's take out shared mappings first, this should be a rare event. */
+	if (unlikely(vma->vm_flags & VM_MAYSHARE)) {
+		if (unshare)
+			return 0;
+		if (WARN_ON_ONCE(!(vma->vm_flags & VM_WRITE)))
+			return VM_FAULT_SIGSEGV;
+		set_huge_ptep_writable(vma, haddr, ptep);
+		return 0;
+	}
+
 	pte = huge_ptep_get(ptep);
 	old_page = pte_page(pte);
 
@@ -5767,12 +5777,11 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * If we are going to COW/unshare the mapping later, we examine the
 	 * pending reservations for this page now. This will ensure that any
 	 * allocations necessary to record that reservation occur outside the
-	 * spinlock. For private mappings, we also lookup the pagecache
-	 * page now as it is used to determine if a reservation has been
-	 * consumed.
+	 * spinlock. Also lookup the pagecache page now as it is used to
+	 * determine if a reservation has been consumed.
 	 */
 	if ((flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) &&
-	    !huge_pte_write(entry)) {
+	    !(vma->vm_flags & VM_MAYSHARE) && !huge_pte_write(entry)) {
 		if (vma_needs_reservation(h, vma, haddr) < 0) {
 			ret = VM_FAULT_OOM;
 			goto out_mutex;
@@ -5780,9 +5789,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		/* Just decrements count, does not deallocate */
 		vma_end_reservation(h, vma, haddr);
 
-		if (!(vma->vm_flags & VM_MAYSHARE))
-			pagecache_page = hugetlbfs_pagecache_page(h,
-							vma, haddr);
+		pagecache_page = hugetlbfs_pagecache_page(h, vma, haddr);
 	}
 
 	ptl = huge_pte_lock(h, mm, ptep);
--
2.35.3