    Subject: [RFC v2 26/32] x86/mm: Move force_dma_unencrypted() to common code
    From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>

    Intel TDX doesn't allow the VMM to access guest memory. Any memory
    that is required for communication with the VMM must be shared
    explicitly by setting the shared bit in the page table entry. After
    setting the shared bit, the conversion must be completed with the
    MapGPA TDVMCALL. The call informs the VMM about the conversion and
    makes it remove the GPA from the S-EPT mapping. Shared memory is
    similar to unencrypted memory in AMD SME/SEV terminology, but the
    underlying process of sharing/un-sharing memory is different for the
    Intel TDX guest platform.
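
    For illustration only (not part of this patch), the guest-side
    conversion flow looks roughly like the sketch below. Both
    set_pte_shared_bit() and tdx_hcall_map_gpa() are hypothetical helper
    names standing in for the page table update and the MapGPA TDVMCALL
    wrapper:

    	/*
    	 * Sketch of converting one private page to shared in a TDX
    	 * guest. Helper names are hypothetical, not from this series.
    	 */
    	static int tdx_share_page(unsigned long vaddr)
    	{
    		phys_addr_t gpa = __pa(vaddr);
    		int ret;

    		/* Step 1: set the shared bit in the page table entry. */
    		ret = set_pte_shared_bit(vaddr);
    		if (ret)
    			return ret;

    		/*
    		 * Step 2: complete the conversion with MapGPA so the
    		 * VMM removes the GPA from the S-EPT mapping.
    		 */
    		return tdx_hcall_map_gpa(gpa, PAGE_SIZE, true /* shared */);
    	}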

    SEV assumes that I/O devices can only do DMA to "decrypted"
    physical addresses, i.e. addresses without the C-bit set. In order
    for the CPU to interact with this memory, the CPU needs a decrypted
    mapping. To support this, the AMD SME code makes
    force_dma_unencrypted() return true on platforms with the AMD SEV
    feature, which in turn makes the DMA memory allocation API trigger
    set_memory_decrypted() on those platforms.
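
    Schematically, the DMA direct allocation path uses the hook along
    these lines (a simplified sketch of the kernel/dma/direct.c logic;
    error handling and the coherent-pool path are omitted, and the
    function name is invented for illustration):

    	/* Simplified sketch, not the actual dma-direct implementation. */
    	void *dma_direct_alloc_sketch(struct device *dev, size_t size,
    				      dma_addr_t *dma_handle, gfp_t gfp)
    	{
    		struct page *page = alloc_pages(gfp, get_order(size));
    		void *ret = page_address(page);

    		/*
    		 * Clear the encryption bit on the buffer so the device
    		 * can access it.
    		 */
    		if (force_dma_unencrypted(dev))
    			set_memory_decrypted((unsigned long)ret,
    					     1 << get_order(size));

    		*dma_handle = phys_to_dma(dev, page_to_phys(page));
    		return ret;
    	}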

    TDX is similar.  TDX architecturally prevents access to private
    guest memory by anything other than the guest itself. This means that
    any DMA buffers must be shared.

    So move force_dma_unencrypted() out of the AMD-specific code.

    It will be modified to return true for the Intel TDX guest platform,
    similar to the AMD SEV feature.
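
    An illustrative follow-up (not part of this patch) could look like
    the following; is_tdx_guest() is a hypothetical predicate standing
    in for whatever check the later TDX patches add:

    	bool force_dma_unencrypted(struct device *dev)
    	{
    		/* A TDX guest must share all DMA buffers with the VMM. */
    		if (is_tdx_guest())
    			return true;

    		/* For SEV, all DMA must be to unencrypted addresses. */
    		if (sev_active())
    			return true;

    		/*
    		 * SME device-mask handling is unchanged from the code
    		 * moved in the diff below.
    		 */
    		if (sme_active()) {
    			u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
    			u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
    							dev->bus_dma_limit);

    			if (dma_dev_mask <= dma_enc_mask)
    				return true;
    		}

    		return false;
    	}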

    Introduce a new config option X86_MEM_ENCRYPT_COMMON that has to be
    selected by all x86 memory encryption features. It will be selected
    by both the AMD SEV and Intel TDX guest config options.

    This is preparation for TDX changes in the DMA code; it makes no
    functional change.

    Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Reviewed-by: Andi Kleen <ak@linux.intel.com>
    Reviewed-by: Tony Luck <tony.luck@intel.com>
    Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
    ---
    arch/x86/Kconfig                 |  8 +++++--
    arch/x86/mm/Makefile             |  2 ++
    arch/x86/mm/mem_encrypt.c        | 30 -------------------------
    arch/x86/mm/mem_encrypt_common.c | 38 ++++++++++++++++++++++++++++++++
    4 files changed, 46 insertions(+), 32 deletions(-)
    create mode 100644 arch/x86/mm/mem_encrypt_common.c

    diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
    index 932e6d759ba7..67f99bf27729 100644
    --- a/arch/x86/Kconfig
    +++ b/arch/x86/Kconfig
    @@ -1529,14 +1529,18 @@ config X86_CPA_STATISTICS
     	  helps to determine the effectiveness of preserving large and huge
     	  page mappings when mapping protections are changed.
     
    +config X86_MEM_ENCRYPT_COMMON
    +	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
    +	select DYNAMIC_PHYSICAL_MASK
    +	def_bool n
    +
     config AMD_MEM_ENCRYPT
     	bool "AMD Secure Memory Encryption (SME) support"
     	depends on X86_64 && CPU_SUP_AMD
     	select DMA_COHERENT_POOL
    -	select DYNAMIC_PHYSICAL_MASK
     	select ARCH_USE_MEMREMAP_PROT
    -	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
     	select INSTRUCTION_DECODER
    +	select X86_MEM_ENCRYPT_COMMON
     	help
     	  Say yes to enable support for the encryption of system memory.
     	  This requires an AMD processor that supports Secure Memory
    diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
    index 5864219221ca..b31cb52bf1bd 100644
    --- a/arch/x86/mm/Makefile
    +++ b/arch/x86/mm/Makefile
    @@ -52,6 +52,8 @@ obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS)	+= pkeys.o
     obj-$(CONFIG_RANDOMIZE_MEMORY)			+= kaslr.o
     obj-$(CONFIG_PAGE_TABLE_ISOLATION)		+= pti.o
     
    +obj-$(CONFIG_X86_MEM_ENCRYPT_COMMON)	+= mem_encrypt_common.o
    +
     obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt.o
     obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_identity.o
     obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_boot.o
    diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
    index ae78cef79980..6f713c6a32b2 100644
    --- a/arch/x86/mm/mem_encrypt.c
    +++ b/arch/x86/mm/mem_encrypt.c
    @@ -15,10 +15,6 @@
     #include <linux/dma-direct.h>
     #include <linux/swiotlb.h>
     #include <linux/mem_encrypt.h>
    -#include <linux/device.h>
    -#include <linux/kernel.h>
    -#include <linux/bitops.h>
    -#include <linux/dma-mapping.h>
     
     #include <asm/tlbflush.h>
     #include <asm/fixmap.h>
    @@ -390,32 +386,6 @@ bool noinstr sev_es_active(void)
     	return sev_status & MSR_AMD64_SEV_ES_ENABLED;
     }
     
    -/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
    -bool force_dma_unencrypted(struct device *dev)
    -{
    -	/*
    -	 * For SEV, all DMA must be to unencrypted addresses.
    -	 */
    -	if (sev_active())
    -		return true;
    -
    -	/*
    -	 * For SME, all DMA must be to unencrypted addresses if the
    -	 * device does not support DMA to addresses that include the
    -	 * encryption mask.
    -	 */
    -	if (sme_active()) {
    -		u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
    -		u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
    -						dev->bus_dma_limit);
    -
    -		if (dma_dev_mask <= dma_enc_mask)
    -			return true;
    -	}
    -
    -	return false;
    -}
    -
     void __init mem_encrypt_free_decrypted_mem(void)
     {
     	unsigned long vaddr, vaddr_end, npages;
    diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
    new file mode 100644
    index 000000000000..964e04152417
    --- /dev/null
    +++ b/arch/x86/mm/mem_encrypt_common.c
    @@ -0,0 +1,38 @@
    +// SPDX-License-Identifier: GPL-2.0-only
    +/*
    + * AMD Memory Encryption Support
    + *
    + * Copyright (C) 2016 Advanced Micro Devices, Inc.
    + *
    + * Author: Tom Lendacky <thomas.lendacky@amd.com>
    + */
    +
    +#include <linux/mm.h>
    +#include <linux/mem_encrypt.h>
    +#include <linux/dma-mapping.h>
    +
    +/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
    +bool force_dma_unencrypted(struct device *dev)
    +{
    +	/*
    +	 * For SEV, all DMA must be to unencrypted/shared addresses.
    +	 */
    +	if (sev_active())
    +		return true;
    +
    +	/*
    +	 * For SME, all DMA must be to unencrypted addresses if the
    +	 * device does not support DMA to addresses that include the
    +	 * encryption mask.
    +	 */
    +	if (sme_active()) {
    +		u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
    +		u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
    +						dev->bus_dma_limit);
    +
    +		if (dma_dev_mask <= dma_enc_mask)
    +			return true;
    +	}
    +
    +	return false;
    +}
    --
    2.25.1