Subject: Re: [PATCH v6 4/8] KVM: arm64: Add guard pages for pKVM (protected nVHE) hypervisor stack
Hi Kalesh,

On Mon, Mar 14, 2022 at 8:04 PM Kalesh Singh <kaleshsingh@google.com> wrote:
>
> Map the stack pages in the flexible private VA range and allocate
> guard pages below the stack as unbacked VA space. The stack is aligned
> so that any valid stack address has the PAGE_SHIFT bit set to 1; this
> is used for overflow detection (implemented in a subsequent patch in
> the series).
>
> Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
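
With the two-page VA range aligned to its size, the per-CPU layout is:
the unbacked guard page at hyp_addr, the stack page at
hyp_addr + PAGE_SIZE, and stack_hyp_va at hyp_addr + 2 * PAGE_SIZE,
the high end the stack grows down from. That reduces overflow
detection to a single bit test on the stack pointer. A minimal C
sketch of that test (illustrative only; the helper name is made up
here, and the real check added later in this series runs in assembly
on the EL2 exception path):

	/*
	 * Illustrative sketch, not code from this series: a valid hyp
	 * stack address has the PAGE_SHIFT bit set, while an SP that
	 * has strayed into the unbacked guard page below has it clear.
	 */
	static inline bool hyp_stack_overflowed(unsigned long sp)
	{
		return !(sp & (1UL << PAGE_SHIFT));
	}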

Tested-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>

Thanks,
/fuad


> ---
>
> Changes in v6:
> - Update call to pkvm_alloc_private_va_range() (return val and params)
>
> Changes in v5:
> - Use a single allocation for stack and guard pages to ensure they
> are contiguous, per Marc
>
> Changes in v4:
> - Replace IS_ERR_OR_NULL check with IS_ERR check now that
> pkvm_alloc_private_va_range() returns an error for null
> pointer, per Fuad
>
> Changes in v3:
> - Handle null ptr in IS_ERR_OR_NULL checks, per Mark
>
>
> arch/arm64/kvm/hyp/nvhe/setup.c | 31 ++++++++++++++++++++++++++++---
> 1 file changed, 28 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
> index 27af337f9fea..e8d4ea2fcfa0 100644
> --- a/arch/arm64/kvm/hyp/nvhe/setup.c
> +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
> @@ -99,17 +99,42 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
>  		return ret;
> 
>  	for (i = 0; i < hyp_nr_cpus; i++) {
> +		struct kvm_nvhe_init_params *params = per_cpu_ptr(&kvm_init_params, i);
> +		unsigned long hyp_addr;
> +
>  		start = (void *)kern_hyp_va(per_cpu_base[i]);
>  		end = start + PAGE_ALIGN(hyp_percpu_size);
>  		ret = pkvm_create_mappings(start, end, PAGE_HYP);
>  		if (ret)
>  			return ret;
> 
> -		end = (void *)per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va;
> -		start = end - PAGE_SIZE;
> -		ret = pkvm_create_mappings(start, end, PAGE_HYP);
> +		/*
> +		 * Allocate a contiguous HYP private VA range for the stack
> +		 * and guard page. The allocation is also aligned based on
> +		 * the order of its size.
> +		 */
> +		ret = pkvm_alloc_private_va_range(PAGE_SIZE * 2, &hyp_addr);
> +		if (ret)
> +			return ret;
> +
> +		/*
> +		 * Since the stack grows downwards, map the stack to the page
> +		 * at the higher address and leave the lower guard page
> +		 * unbacked.
> +		 *
> +		 * Any valid stack address now has the PAGE_SHIFT bit as 1
> +		 * and addresses corresponding to the guard page have the
> +		 * PAGE_SHIFT bit as 0 - this is used for overflow detection.
> +		 */
> +		hyp_spin_lock(&pkvm_pgd_lock);
> +		ret = kvm_pgtable_hyp_map(&pkvm_pgtable, hyp_addr + PAGE_SIZE,
> +					  PAGE_SIZE, params->stack_pa, PAGE_HYP);
> +		hyp_spin_unlock(&pkvm_pgd_lock);
>  		if (ret)
>  			return ret;
> +
> +		/* Update stack_hyp_va to end of the stack's private VA range */
> +		params->stack_hyp_va = hyp_addr + (2 * PAGE_SIZE);
>  	}
> 
>  	/*
> --
> 2.35.1.723.g4982287a31-goog
>
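
One more note for the archive on why the order-aligned allocation
matters: because pkvm_alloc_private_va_range() aligns the range to the
order of its size, hyp_addr is always a multiple of 2 * PAGE_SIZE, so
the guard page starts with the PAGE_SHIFT bit clear and the stack page
with it set. A merely page-aligned base could invert the two bit
values and break the overflow test. A self-contained sketch of the
invariant (plain userspace C with an arbitrary example base address,
not kernel code):

	#include <assert.h>

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)

	int main(void)
	{
		/* Any base aligned to 2 * PAGE_SIZE, as the allocator
		 * guarantees for a two-page range; the value itself is
		 * arbitrary. */
		unsigned long hyp_addr = 0x40000000UL;
		unsigned long guard_page = hyp_addr;
		unsigned long stack_page = hyp_addr + PAGE_SIZE;

		/* Guard page addresses have the PAGE_SHIFT bit clear... */
		assert(((guard_page >> PAGE_SHIFT) & 1) == 0);
		/* ...and stack page addresses have it set. */
		assert(((stack_page >> PAGE_SHIFT) & 1) == 1);
		return 0;
	}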
