Date:    Tue, 28 Feb 2023 07:50:36 -0800 (PST)
Subject: Re: [PATCH mm-unstable v1 19/26] riscv/mm: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE
From:    Palmer Dabbelt <>
On Fri, 13 Jan 2023 09:10:19 PST (-0800), david@redhat.com wrote:
> Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by stealing one bit
> from the offset. This reduces the maximum swap space per file: on 32bit
> to 16 GiB (was 32 GiB).
Seems fine to me; I doubt anyone wants a huge pile of swap on rv32.
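
FWIW, the trade-off is visible from the shifts alone. A quick userspace
sketch (not kernel code; XLEN and the shift values are taken from the
patch below):

#include <stdio.h>

/* Swap-PTE field sizes on rv32 (XLEN = 32), per the layout below. */
#define XLEN			32
#define SWP_TYPE_BITS		5
#define OLD_SWP_TYPE_SHIFT	6	/* before: type started at bit 6 */
#define NEW_SWP_TYPE_SHIFT	7	/* after: bit 6 is the exclusive marker */

int main(void)
{
	/*
	 * The offset takes every bit above the type field, so moving the
	 * type up by one bit costs the offset one bit and halves the
	 * maximum encodable swap offset.
	 */
	int old_bits = XLEN - (OLD_SWP_TYPE_SHIFT + SWP_TYPE_BITS);
	int new_bits = XLEN - (NEW_SWP_TYPE_SHIFT + SWP_TYPE_BITS);

	printf("offset bits: %d -> %d (max swap offset halved)\n",
	       old_bits, new_bits);
	return 0;
}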
>
> Note that this bit does not conflict with swap PMDs and could also be used
> in swap PMD context later.
>
> While at it, mask the type in __swp_entry().
>
> Cc: Paul Walmsley <paul.walmsley@sifive.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: Albert Ou <aou@eecs.berkeley.edu>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  arch/riscv/include/asm/pgtable-bits.h |  3 +++
>  arch/riscv/include/asm/pgtable.h      | 29 ++++++++++++++++++++++-----
>  2 files changed, 27 insertions(+), 5 deletions(-)
>
> diff --git a/arch/riscv/include/asm/pgtable-bits.h b/arch/riscv/include/asm/pgtable-bits.h
> index b9e13a8fe2b7..f896708e8331 100644
> --- a/arch/riscv/include/asm/pgtable-bits.h
> +++ b/arch/riscv/include/asm/pgtable-bits.h
> @@ -27,6 +27,9 @@
>   */
>  #define _PAGE_PROT_NONE	_PAGE_GLOBAL
>
> +/* Used for swap PTEs only. */
> +#define _PAGE_SWP_EXCLUSIVE	_PAGE_ACCESSED
> +
>  #define _PAGE_PFN_SHIFT	10
>
>  /*
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 4eba9a98d0e3..03a4728db039 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -724,16 +724,18 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
>  /*
> - * Encode and decode a swap entry
> + * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
> + * are !pte_none() && !pte_present().
>   *
>   * Format of swap PTE:
>   *	bit            0:	_PAGE_PRESENT (zero)
>   *	bit       1 to 3:	_PAGE_LEAF (zero)
>   *	bit            5:	_PAGE_PROT_NONE (zero)
> - *	bits      6 to 10:	swap type
> - *	bits 10 to XLEN-1:	swap offset
> + *	bit            6:	exclusive marker
> + *	bits      7 to 11:	swap type
> + *	bits 11 to XLEN-1:	swap offset
>   */
> -#define __SWP_TYPE_SHIFT	6
> +#define __SWP_TYPE_SHIFT	7
>  #define __SWP_TYPE_BITS	5
>  #define __SWP_TYPE_MASK	((1UL << __SWP_TYPE_BITS) - 1)
>  #define __SWP_OFFSET_SHIFT	(__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)
> @@ -744,11 +746,28 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>  #define __swp_type(x)	(((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)
>  #define __swp_offset(x)	((x).val >> __SWP_OFFSET_SHIFT)
>  #define __swp_entry(type, offset) ((swp_entry_t) \
> -	{ ((type) << __SWP_TYPE_SHIFT) | ((offset) << __SWP_OFFSET_SHIFT) })
> +	{ (((type) & __SWP_TYPE_MASK) << __SWP_TYPE_SHIFT) | \
> +	  ((offset) << __SWP_OFFSET_SHIFT) })
>
>  #define __pte_to_swp_entry(pte)	((swp_entry_t) { pte_val(pte) })
>  #define __swp_entry_to_pte(x)	((pte_t) { (x).val })
>
> +#define __HAVE_ARCH_PTE_SWP_EXCLUSIVE
> +static inline int pte_swp_exclusive(pte_t pte)
> +{
> +	return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
> +}
> +
> +static inline pte_t pte_swp_mkexclusive(pte_t pte)
> +{
> +	return __pte(pte_val(pte) | _PAGE_SWP_EXCLUSIVE);
> +}
> +
> +static inline pte_t pte_swp_clear_exclusive(pte_t pte)
> +{
> +	return __pte(pte_val(pte) & ~_PAGE_SWP_EXCLUSIVE);
> +}
> +
>  #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
>  #define __pmd_to_swp_entry(pmd)	((swp_entry_t) { pmd_val(pmd) })
>  #define __swp_entry_to_pmd(swp)	__pmd((swp).val)
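
I also ran the new encoding through a standalone mock to convince myself
it round-trips. The constants and macros below are copied from the patch;
the pte_t/swp_entry_t wrappers are just userspace scaffolding I made up,
so treat this as a sketch rather than kernel code:

#include <assert.h>
#include <stdio.h>

/* Userspace scaffolding standing in for the kernel's pte_t/swp_entry_t. */
typedef struct { unsigned long pte; } pte_t;
typedef struct { unsigned long val; } swp_entry_t;
#define pte_val(x)	((x).pte)
#define __pte(x)	((pte_t) { (x) })

/* Constants and macros copied from the patch. */
#define _PAGE_ACCESSED		(1UL << 6)
#define _PAGE_SWP_EXCLUSIVE	_PAGE_ACCESSED
#define __SWP_TYPE_SHIFT	7
#define __SWP_TYPE_BITS		5
#define __SWP_TYPE_MASK		((1UL << __SWP_TYPE_BITS) - 1)
#define __SWP_OFFSET_SHIFT	(__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)

#define __swp_type(x)	(((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)
#define __swp_offset(x)	((x).val >> __SWP_OFFSET_SHIFT)
#define __swp_entry(type, offset) ((swp_entry_t) \
	{ (((type) & __SWP_TYPE_MASK) << __SWP_TYPE_SHIFT) | \
	  ((offset) << __SWP_OFFSET_SHIFT) })
#define __pte_to_swp_entry(pte)	((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x)	((pte_t) { (x).val })

static inline pte_t pte_swp_mkexclusive(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_SWP_EXCLUSIVE);
}

static inline pte_t pte_swp_clear_exclusive(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_SWP_EXCLUSIVE);
}

int main(void)
{
	pte_t pte = __swp_entry_to_pte(__swp_entry(3, 0x1234));

	/* Marking a swap PTE exclusive must not disturb type or offset. */
	pte = pte_swp_mkexclusive(pte);
	assert(pte_val(pte) & _PAGE_SWP_EXCLUSIVE);
	assert(__swp_type(__pte_to_swp_entry(pte)) == 3);
	assert(__swp_offset(__pte_to_swp_entry(pte)) == 0x1234);

	/* ...and clearing it must restore the original encoding. */
	pte = pte_swp_clear_exclusive(pte);
	assert(!(pte_val(pte) & _PAGE_SWP_EXCLUSIVE));
	printf("round-trip OK\n");
	return 0;
}

Type and offset survive setting and clearing the exclusive bit, and
masking the type in __swp_entry() keeps an oversized type from spilling
into the offset bits.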
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>