Subject: Re: [PATCH V8 33/44] memremap_pages: Introduce pgmap_protection_available()
On Thu, Jan 27, 2022 at 9:55 AM <ira.weiny@intel.com> wrote:
>
> From: Ira Weiny <ira.weiny@intel.com>
>
> Users will need to request that their dev_pagemap pages be protected by
> setting a flag in (struct dev_pagemap)->flags. However, it is more
> efficient to know up front whether that protection is available than to
> request it and have the mapping fail.
>
> Define pgmap_protection_available() for users to check if protection is
> available to be used. The name of pgmap_protection_available() was
> specifically chosen to isolate the implementation of the protection from
> higher level users. However, the current implementation simply calls
> pks_available() to determine if it can support protection.
>
> An alternative that was considered was to have users simply specify the
> flag and then check whether the dev_pagemap object returned was protected.
> But that was judged less efficient than a direct check beforehand.
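
(For illustration only: a minimal sketch of the calling pattern described
above. The PGMAP_PROTECTION flag name is assumed from later in this series
and is not part of this patch.)

	#include <linux/device.h>
	#include <linux/memremap.h>
	#include <linux/mm.h>

	static void *example_devmap_setup(struct device *dev,
					  struct dev_pagemap *pgmap)
	{
		/* Check availability up front instead of requesting
		 * protection and having the mapping fail.
		 */
		if (pgmap_protection_available())
			pgmap->flags |= PGMAP_PROTECTION;	/* assumed flag name */

		return devm_memremap_pages(dev, pgmap);
	}
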
>
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>
>
> ---
> Changes for V8
> Split this out into its own patch.
> s/pgmap_protection_enabled/pgmap_protection_available
> ---
>  include/linux/mm.h | 13 +++++++++++++
>  mm/memremap.c      | 11 +++++++++++
>  2 files changed, 24 insertions(+)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index e1a84b1e6787..2ae99bee6e82 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1143,6 +1143,19 @@ static inline bool is_pci_p2pdma_page(const struct page *page)
>  		page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
>  }
>
> +#ifdef CONFIG_DEVMAP_ACCESS_PROTECTION
> +
> +bool pgmap_protection_available(void);
> +
> +#else
> +
> +static inline bool pgmap_protection_available(void)
> +{
> +	return false;
> +}
> +
> +#endif /* CONFIG_DEVMAP_ACCESS_PROTECTION */
> +
>  /* 127: arbitrary random number, small enough to assemble well */
>  #define folio_ref_zero_or_close_to_overflow(folio) \
>  	((unsigned int) folio_ref_count(folio) + 127u <= 127u)
> diff --git a/mm/memremap.c b/mm/memremap.c
> index 6aa5f0c2d11f..c13b3b8a0048 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -6,6 +6,7 @@
>  #include <linux/memory_hotplug.h>
>  #include <linux/mm.h>
>  #include <linux/pfn_t.h>
> +#include <linux/pkeys.h>
>  #include <linux/swap.h>
>  #include <linux/mmzone.h>
>  #include <linux/swapops.h>
> @@ -63,6 +64,16 @@ static void devmap_managed_enable_put(struct dev_pagemap *pgmap)
>  }
>  #endif /* CONFIG_DEV_PAGEMAP_OPS */
>
> +#ifdef CONFIG_DEVMAP_ACCESS_PROTECTION
> +
> +bool pgmap_protection_available(void)
> +{
> +	return pks_available();
> +}
> +EXPORT_SYMBOL_GPL(pgmap_protection_available);

Any reason this was chosen to be an out-of-line function? Doesn't this
defeat the performance advantages of static_cpu_has()?
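
For illustration, a rough sketch of the inline alternative being alluded to
(assuming pks_available() itself boils down to a cheap cpu-feature check such
as cpu_feature_enabled()/static_cpu_has(); this is only a sketch, not
something proposed by the patch):

	/* Hypothetical inline variant in include/linux/mm.h, so the
	 * feature check stays visible at the call site instead of being
	 * hidden behind an out-of-line call (mm.h would then need the
	 * pks_available() declaration from <linux/pkeys.h>).
	 */
	#ifdef CONFIG_DEVMAP_ACCESS_PROTECTION
	static inline bool pgmap_protection_available(void)
	{
		return pks_available();
	}
	#else
	static inline bool pgmap_protection_available(void)
	{
		return false;
	}
	#endif /* CONFIG_DEVMAP_ACCESS_PROTECTION */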

> +
> +#endif /* CONFIG_DEVMAP_ACCESS_PROTECTION */
> +
>  static void pgmap_array_delete(struct range *range)
>  {
>  	xa_store_range(&pgmap_array, PHYS_PFN(range->start), PHYS_PFN(range->end),
> --
> 2.31.1
>
