From: Brijesh Singh <brijesh.singh@amd.com>
Date: 20 Jun 2022
Subject: [PATCH Part2 v6 07/49] x86/sev: Invalidate pages from the direct map when adding them to the RMP table

The integrity guarantee of SEV-SNP is enforced through the RMP table.
The RMP is used with the standard x86 and IOMMU page tables to enforce
memory restrictions and page access rights. The RMP checks are enforced
as soon as SEV-SNP is enabled globally in the system. When hardware
encounters an RMP check failure, it raises a page-fault exception.
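
As an aside for readers (not part of this change): on the host, an RMP
check failure is reported through the page-fault error code. A minimal
standalone sketch, assuming the RMP violation bit is bit 31 of the error
code as documented in the AMD APM; the PF_RMP_BIT name below is a local
stand-in, not an identifier from this series:

/*
 * Standalone illustration: decode a #PF error code to see whether the
 * fault came from an RMP check failure. Bit 31 is the RMP violation
 * bit per the AMD APM; PF_RMP_BIT is a local name for this sketch.
 */
#include <stdbool.h>
#include <stdint.h>

#define PF_RMP_BIT (1ULL << 31)

static bool pf_is_rmp_violation(uint64_t error_code)
{
        return !!(error_code & PF_RMP_BIT);
}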

The rmp_make_private() and rmp_make_shared() helpers are used to add
pages to and remove them from the RMP table. Improve rmp_make_private()
to invalidate the pages in the direct map, so that they cannot be used
through it after they are added to the RMP table, and restore their
default valid permissions after the pages are removed from the RMP
table. A sketch of the resulting caller flow follows below.
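
For context, a minimal sketch of how a caller is expected to pair these
helpers after this change. The rmp_make_private()/rmp_make_shared()
signatures follow earlier patches in this series and should be treated
as assumptions here, since this patch does not show them:

/*
 * Illustrative flow only. Assumed signatures (from earlier in this
 * series, not shown in this patch):
 *   int rmp_make_private(u64 pfn, u64 gpa, enum pg_level level,
 *                        int asid, bool immutable);
 *   int rmp_make_shared(u64 pfn, enum pg_level level);
 */
static int example_assign_and_reclaim(u64 pfn, u64 gpa, int asid)
{
        int ret;

        /* Assign the page to the guest: with this patch, rmpupdate()
         * first invalidates the pfn in the direct map, then writes the
         * RMP entry. */
        ret = rmp_make_private(pfn, gpa, PG_LEVEL_4K, asid, false);
        if (ret)
                return ret;

        /* While assigned, a host access through the direct map would
         * raise an RMP #PF; removing the mapping prevents that. */

        /* Reclaim the page: rmpupdate() clears the assignment and then
         * restores the default direct-map permissions. */
        return rmp_make_shared(pfn, PG_LEVEL_4K);
}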

    Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
    ---
    arch/x86/kernel/sev.c | 61 ++++++++++++++++++++++++++++++++++++++++++-
    1 file changed, 60 insertions(+), 1 deletion(-)

    diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
    index f6c64a722e94..734cddd837f5 100644
    --- a/arch/x86/kernel/sev.c
    +++ b/arch/x86/kernel/sev.c
@@ -2451,10 +2451,42 @@ int psmash(u64 pfn)
 }
 EXPORT_SYMBOL_GPL(psmash);

+static int restore_direct_map(u64 pfn, int npages)
+{
+        int i, ret = 0;
+
+        for (i = 0; i < npages; i++) {
+                ret = set_direct_map_default_noflush(pfn_to_page(pfn + i));
+                if (ret)
+                        goto cleanup;
+        }
+
+cleanup:
+        /* set_direct_map_default_noflush() returns 0 or -errno. */
+        WARN(ret, "Failed to restore direct map for pfn 0x%llx\n", pfn + i);
+        return ret;
+}
    +
+static int invalid_direct_map(unsigned long pfn, int npages)
+{
+        int i, ret = 0;
+
+        for (i = 0; i < npages; i++) {
+                ret = set_direct_map_invalid_noflush(pfn_to_page(pfn + i));
+                if (ret)
+                        goto cleanup;
+        }
+
+        return 0;
+
+cleanup:
+        /* Restore the pages that were invalidated before the failure. */
+        restore_direct_map(pfn, i);
+        return ret;
+}
    +
 static int rmpupdate(u64 pfn, struct rmpupdate *val)
 {
         unsigned long paddr = pfn << PAGE_SHIFT;
-        int ret;
+        int ret, level, npages;

         if (!pfn_valid(pfn))
                 return -EINVAL;
    @@ -2462,11 +2494,38 @@ static int rmpupdate(u64 pfn, struct rmpupdate *val)
         if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
                 return -ENXIO;

+        level = RMP_TO_X86_PG_LEVEL(val->pagesize);
+        npages = page_level_size(level) / PAGE_SIZE;
+
+        /*
+         * If the page is getting assigned in the RMP table, then unmap
+         * it from the direct map.
+         */
+        if (val->assigned) {
+                if (invalid_direct_map(pfn, npages)) {
+                        pr_err("Failed to unmap pfn 0x%llx pages %d from direct_map\n",
+                               pfn, npages);
+                        return -EFAULT;
+                }
+        }
+
         /* Binutils version 2.36 supports the RMPUPDATE mnemonic. */
         asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFE"
                      : "=a"(ret)
                      : "a"(paddr), "c"((unsigned long)val)
                      : "memory", "cc");
+
+        /*
+         * Restore the direct map after the page is removed from the RMP table.
+         */
+        if (!ret && !val->assigned) {
+                if (restore_direct_map(pfn, npages)) {
+                        pr_err("Failed to map pfn 0x%llx pages %d in direct_map\n",
+                               pfn, npages);
+                        return -EFAULT;
+                }
+        }
+
         return ret;
 }
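
For reference, the level/npages computation above scales the direct-map
update with the RMP entry size: a 4K entry touches a single page, while
a 2M entry covers 512 4K pages. A standalone worked example of the same
arithmetic (local constants, not the kernel macros):

#include <stdio.h>

#define EX_PAGE_SIZE 0x1000UL   /* 4K base page */
#define EX_2M_SIZE   0x200000UL /* 2M RMP entry */

int main(void)
{
        /* Mirrors npages = page_level_size(level) / PAGE_SIZE. */
        unsigned long npages = EX_2M_SIZE / EX_PAGE_SIZE;

        printf("npages = %lu\n", npages); /* prints 512 */
        return 0;
}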

    --
    2.25.1