Subject: Re: [PATCH v6 1/9] mm: x86, arm64: add arch_has_hw_pte_young()
On Fri, Jan 07, 2022 at 12:25:07AM -0700, Yu Zhao wrote:
> On Thu, Jan 06, 2022 at 10:30:09AM +0000, Will Deacon wrote:
> > On Wed, Jan 05, 2022 at 01:47:08PM -0700, Yu Zhao wrote:
> > > On Wed, Jan 05, 2022 at 10:45:26AM +0000, Will Deacon wrote:
> > > > On Tue, Jan 04, 2022 at 01:22:20PM -0700, Yu Zhao wrote:
> > > > > diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
> > > > > index 870c39537dd0..56e4ef5d95fa 100644
> > > > > --- a/arch/arm64/tools/cpucaps
> > > > > +++ b/arch/arm64/tools/cpucaps
> > > > > @@ -36,6 +36,7 @@ HAS_STAGE2_FWB
> > > > >  HAS_SYSREG_GIC_CPUIF
> > > > >  HAS_TLB_RANGE
> > > > >  HAS_VIRT_HOST_EXTN
> > > > > +HW_AF
> > > > >  HW_DBM
> > > > >  KVM_PROTECTED_MODE
> > > > >  MISMATCHED_CACHE_TYPE
> > > >
> > > > As discussed in the previous threads, we really don't need the complexity
> > > > of the additional cap for the arm64 part. Please can you just use the
> > > > existing code instead? It's both simpler and, as you say, it's equivalent
> > > > for existing hardware.
> > > >
> > > > That way, this patch just ends up being a renaming exercise and we're all
> > > > good.
> > >
> > > No, renaming alone isn't enough. A caller needs to disable preemption
> > > before calling system_has_hw_af(), and I don't think it's reasonable
> > > to ask this caller to do it on x86 as well.
> > >
> > > It seems you really prefer not to have HW_AF. So the best I can
> > > accommodate, considering other potential archs, e.g., risc-v (I do
> > > plan to provide benchmark results on risc-v, btw), is:
> > >
> > > static inline bool arch_has_hw_pte_young(bool local)
> > > {
> > > 	bool hw_af;
> > >
> > > 	if (local) {
> > > 		WARN_ON(preemptible());
> > > 		return cpu_has_hw_af();
> > > 	}
> > >
> > > 	preempt_disable();
> > > 	hw_af = system_has_hw_af();
> > > 	preempt_enable();
> > >
> > > 	return hw_af;
> > > }
> > >
> > > Or please give me something else I can call without disabling
> > > preemption, sounds good?
> >
> > Sure thing, let me take a look. Do you have your series on a public git
> > tree someplace?
>
> Thanks!
>
> This patch (updated) on Gerrit:
> https://linux-mm-review.googlesource.com/c/page-reclaim/+/1500/1

How about folding in something like the diff below? I've basically removed
that 'bool local' argument and dropped the preemptible() check from the
arm64 code.

Will

--->8

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 280123916fc2..990358eca359 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -998,27 +998,14 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
  * the pte is old and cannot be marked young. So we always end up with zeroed
  * page after fork() + CoW for pfn mappings. We don't always have a
  * hardware-managed access flag on arm64.
- *
- * The system-wide support isn't used when involving correctness and therefore
- * is allowed to be flaky.
  */
-static inline bool arch_has_hw_pte_young(bool local)
-{
-	WARN_ON(local && preemptible());
-
-	return cpu_has_hw_af();
-}
-#define arch_has_hw_pte_young arch_has_hw_pte_young
+#define arch_has_hw_pte_young cpu_has_hw_af
 
 /*
  * Experimentally, it's cheap to set the access flag in hardware and we
  * benefit from prefaulting mappings as 'old' to start with.
  */
-static inline bool arch_wants_old_prefaulted_pte(void)
-{
-	return arch_has_hw_pte_young(true);
-}
-#define arch_wants_old_prefaulted_pte arch_wants_old_prefaulted_pte
+#define arch_wants_old_prefaulted_pte cpu_has_hw_af
 
 static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
 {
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index c60b16f8b741..3908780fc408 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1398,7 +1398,7 @@ static inline bool arch_has_pfn_modify_check(void)
 }
 
 #define arch_has_hw_pte_young arch_has_hw_pte_young
-static inline bool arch_has_hw_pte_young(bool local)
+static inline bool arch_has_hw_pte_young(void)
 {
 	return true;
 }
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 599cc232d5c4..0bd1beadb545 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -260,15 +260,12 @@ static inline int pmdp_clear_flush_young(struct vm_area_struct *vma,

 #ifndef arch_has_hw_pte_young
 /*
- * Return whether the accessed bit is supported by the local CPU or system-wide.
+ * Return whether the accessed bit is supported by the local CPU.
  *
- * This stub assumes accessing thru an old PTE triggers a page fault.
+ * This stub assumes accessing through an old PTE triggers a page fault.
  * Architectures that automatically set the access bit should overwrite it.
- *
- * Note that the system-wide support can be flaky and therefore shouldn't be
- * used when involving correctness.
  */
-static inline bool arch_has_hw_pte_young(bool local)
+static inline bool arch_has_hw_pte_young(void)
 {
 	return false;
 }
diff --git a/mm/memory.c b/mm/memory.c
index ead6c7d4b9a1..1f02de6d51e4 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2743,7 +2743,7 @@ static inline bool cow_user_page(struct page *dst, struct page *src,
 	 * On architectures with software "accessed" bits, we would
 	 * take a double page fault, so mark it accessed here.
 	 */
-	if (!arch_has_hw_pte_young(true) && !pte_young(vmf->orig_pte)) {
+	if (!arch_has_hw_pte_young() && !pte_young(vmf->orig_pte)) {
 		pte_t entry;
 
 		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
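
For context: cpu_has_hw_af(), which both arm64 macros above now resolve to, reports whether the CPU currently executing the check manages the Access Flag in hardware. It does so by reading that CPU's ID_AA64MMFR1_EL1 register, which is why the system-wide variant discussed earlier needed preemption disabled around it. A rough sketch of the helper as it looks in arch/arm64/include/asm/cpufeature.h around this time (simplified here, not part of the diff above):

static inline bool cpu_has_hw_af(void)
{
	u64 mmfr1;

	/* Hardware AF/DBM support can be compiled out entirely. */
	if (!IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
		return false;

	/* Read the local CPU's ID register and extract the hardware AF field. */
	mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
	return cpuid_feature_extract_unsigned_field(mmfr1,
						    ID_AA64MMFR1_HADBS_SHIFT);
}

Since the result only steers heuristics (prefaulting mappings as 'old', or skipping the extra "mark accessed" step in cow_user_page()), a per-CPU answer is acceptable even on systems that mix CPUs with and without hardware AF support.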