    Subject: [PATCH V8 34/44] memremap_pages: Introduce a PGMAP_PROTECTION flag
    From: Ira Weiny <ira.weiny@intel.com>

    The persistent memory (PMEM) driver uses the memremap_pages facility to
    provide 'struct page' metadata (vmemmap) for PMEM. Given that PMEM
    capacity may be orders of magnitude larger than System RAM, it presents
    a large vulnerability surface to stray writes. Unlike stray writes to
    System RAM, which may result in a crash or other undesirable behavior,
    stray writes to PMEM are additionally more likely to result in
    permanent data loss. Reboot is not a remediation for PMEM corruption
    as it is for System RAM.

    Given that PMEM access from the kernel is limited to a constrained set
    of locations (PMEM driver, Filesystem-DAX, and direct-I/O to a DAX
    page), it is amenable to supervisor pkey protection.

    Some systems which have enabled CONFIG_DEVMAP_ACCESS_PROTECTION may not
    have PMEM installed, or the PMEM may not be mapped into the direct map.

    In addition, users of memremap_pages() other than PMEM will not want
    these pages protected.

    Define a new PGMAP flag, PGMAP_PROTECTION. This can be passed in
    (struct dev_pagemap)->flags when calling memremap_pages() to request
    that the pages be protected. The flag is then used to enable a static
    key, which optimizes the protection checks away when no callers are
    currently requesting protection.

    Specifying this flag on a system which cannot support protections will
    cause memremap_pages() to fail. Users are expected to check whether
    protections are supported via pgmap_protection_available() prior to
    requesting them.
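
    For example, a hypothetical pgmap user (the snippet below is
    illustrative only, not taken from this series) would follow the
    check-then-request pattern:

        /* Request protection only when the platform supports it. */
        if (pgmap_protection_available())
                pgmap->flags |= PGMAP_PROTECTION;

        addr = memremap_pages(pgmap, numa_node_id());
        if (IS_ERR(addr))
                return PTR_ERR(addr);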

    Signed-off-by: Ira Weiny <ira.weiny@intel.com>

    ---
    Changes for V8
        Split this out into its own patch
    ---
     include/linux/memremap.h |  1 +
     mm/memremap.c            | 36 ++++++++++++++++++++++++++++++++++++
     2 files changed, 37 insertions(+)

    diff --git a/include/linux/memremap.h b/include/linux/memremap.h
    index 1fafcc38acba..84402f73712c 100644
    --- a/include/linux/memremap.h
    +++ b/include/linux/memremap.h
    @@ -80,6 +80,7 @@ struct dev_pagemap_ops {
     };
     
     #define PGMAP_ALTMAP_VALID	(1 << 0)
    +#define PGMAP_PROTECTION	(1 << 1)
     
     /**
      * struct dev_pagemap - metadata for ZONE_DEVICE mappings
    diff --git a/mm/memremap.c b/mm/memremap.c
    index c13b3b8a0048..a74d985a1908 100644
    --- a/mm/memremap.c
    +++ b/mm/memremap.c
    @@ -66,12 +66,39 @@ static void devmap_managed_enable_put(struct dev_pagemap *pgmap)
     
     #ifdef CONFIG_DEVMAP_ACCESS_PROTECTION
     
    +/*
    + * Note: all devices which have asked for protections share the same key. The
    + * key may, or may not, have been provided by the core. If not, protection
    + * will be disabled. The key acquisition is attempted when the first ZONE
    + * DEVICE requests it and freed when all zones have been unmapped.
    + *
    + * Also this must be EXPORT_SYMBOL rather than EXPORT_SYMBOL_GPL because it is
    + * intended to be used in the kmap API.
    + */
    +DEFINE_STATIC_KEY_FALSE(dev_pgmap_protection_static_key);
    +EXPORT_SYMBOL(dev_pgmap_protection_static_key);
    +
    +static void devmap_protection_enable(void)
    +{
    +	static_branch_inc(&dev_pgmap_protection_static_key);
    +}
    +
    +static void devmap_protection_disable(void)
    +{
    +	static_branch_dec(&dev_pgmap_protection_static_key);
    +}
    +
     bool pgmap_protection_available(void)
     {
     	return pks_available();
     }
     EXPORT_SYMBOL_GPL(pgmap_protection_available);
     
    +#else /* !CONFIG_DEVMAP_ACCESS_PROTECTION */
    +
    +static void devmap_protection_enable(void) { }
    +static void devmap_protection_disable(void) { }
    +
     #endif /* CONFIG_DEVMAP_ACCESS_PROTECTION */
     
     static void pgmap_array_delete(struct range *range)
    @@ -173,6 +200,9 @@ void memunmap_pages(struct dev_pagemap *pgmap)
     
     	WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
     	devmap_managed_enable_put(pgmap);
    +
    +	if (pgmap->flags & PGMAP_PROTECTION)
    +		devmap_protection_disable();
     }
     EXPORT_SYMBOL_GPL(memunmap_pages);
     
    @@ -319,6 +349,12 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
     	if (WARN_ONCE(!nr_range, "nr_range must be specified\n"))
     		return ERR_PTR(-EINVAL);
     
    +	if (pgmap->flags & PGMAP_PROTECTION) {
    +		if (!pgmap_protection_available())
    +			return ERR_PTR(-EINVAL);
    +		devmap_protection_enable();
    +	}
    +
     	switch (pgmap->type) {
     	case MEMORY_DEVICE_PRIVATE:
     		if (!IS_ENABLED(CONFIG_DEVICE_PRIVATE)) {
    --
    2.31.1