Subject: Re: [RFC Part2 PATCH 02/30] x86/sev-snp: add RMP entry lookup helpers
From: Brijesh Singh
Date: 2021-04-15
Hi Boris,


On 4/15/21 11:57 AM, Borislav Petkov wrote:
> On Wed, Mar 24, 2021 at 12:04:08PM -0500, Brijesh Singh wrote:
>> The lookup_page_in_rmptable() can be used by the host to read the RMP
>> entry for a given page. The RMP entry format is documented in PPR
>> section 2.1.5.2.
> I see
>
> Table 15-36. Fields of an RMP Entry
>
> in the APM.
>
> Which PPR do you mean? Also, you know where to put those documents,
> right?

This is from the Family 19h Model 01h Rev B01 PPR, the processor which
introduces the SNP feature. Yes, I have already uploaded the PPR to the BZ.

The PPR is also available at AMD: https://www.amd.com/en/support/tech-docs


>> +/* RMP table entry format (PPR section 2.1.5.2) */
>> +struct __packed rmpentry {
>> +	union {
>> +		struct {
>> +			uint64_t assigned:1;
>> +			uint64_t pagesize:1;
>> +			uint64_t immutable:1;
>> +			uint64_t rsvd1:9;
>> +			uint64_t gpa:39;
>> +			uint64_t asid:10;
>> +			uint64_t vmsa:1;
>> +			uint64_t validated:1;
>> +			uint64_t rsvd2:1;
>> +		} info;
>> +		uint64_t low;
>> +	};
>> +	uint64_t high;
>> +};
>> +
>> +typedef struct rmpentry rmpentry_t;
> Eww, a typedef. Why?
>
> struct rmpentry is just fine.


I guess I was trying to shorten the name. I am good with struct rmpentry.
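i.e., something like this, with the struct tag spelled out at each use
(bitfields elided here, this is just about the naming, not the real layout):

#include <stdint.h>

/* no typedef: spell out the struct tag at each use */
struct rmpentry {
	uint64_t low;
	uint64_t high;
};

static struct rmpentry *entry;	/* instead of rmpentry_t *entry */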


>> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
>> index 39461b9cb34e..06394b6d56b2 100644
>> --- a/arch/x86/mm/mem_encrypt.c
>> +++ b/arch/x86/mm/mem_encrypt.c
>> @@ -34,6 +34,8 @@
>>
>> #include "mm_internal.h"
>>
> <--- Needs a comment here to explain the magic 0x4000 and the magic
> shift by 8.


All those magic numbers are documented in the PPR. The APM does not
provide the offset of the entry inside the RMP table; this is where we
need to refer to the PPR. The arithmetic is sketched below.
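For what it's worth, here is a compile-able sketch of that arithmetic,
assuming (per the PPR) a 16 KB (0x4000) reserved region at the start of
the RMP table and one 16-byte entry per 4 KB page, so
(x >> 12) * 16 == x >> 8:

#include <stdint.h>
#include <stdio.h>

/*
 * Byte offset of the RMP entry for physical address x:
 * 0x4000 bytes of reserved header, then 16 bytes per 4 KB page:
 * 0x4000 + (x >> 12) * 16 == 0x4000 + (x >> 8)
 */
static unsigned long rmptable_page_offset(unsigned long x)
{
	return 0x4000 + (x >> 8);
}

int main(void)
{
	/* page at 8 KB -> entry index 2 -> offset 0x4000 + 2 * 16 = 0x4020 */
	printf("0x%lx\n", rmptable_page_offset(0x2000));
	return 0;
}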

>> +#define rmptable_page_offset(x) (0x4000 + (((unsigned long) x) >> 8))
>> +
>> /*
>> * Since SME related variables are set early in the boot process they must
>> * reside in the .data section so as not to be zeroed out when the .bss
>> @@ -612,3 +614,33 @@ static int __init mem_encrypt_snp_init(void)
>> * SEV-SNP must be enabled across all CPUs, so make the initialization as a late initcall.
>> */
>> late_initcall(mem_encrypt_snp_init);
>> +
>> +rmpentry_t *lookup_page_in_rmptable(struct page *page, int *level)
> snp_lookup_page_in_rmptable()

Noted.


>> +{
>> +	unsigned long phys = page_to_pfn(page) << PAGE_SHIFT;
>> +	rmpentry_t *entry, *large_entry;
>> +	unsigned long vaddr;
>> +
>> +	if (!static_branch_unlikely(&snp_enable_key))
>> +		return NULL;
>> +
>> +	vaddr = rmptable_start + rmptable_page_offset(phys);
>> +	if (WARN_ON(vaddr > rmptable_end))
> Do you really want to spew a warn on splat for each wrong vaddr? What
> for?
I guess I was using it during development and there is no need for it
anymore. I will remove it.
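i.e. keep the bounds check but fail silently:

	vaddr = rmptable_start + rmptable_page_offset(phys);
	if (vaddr > rmptable_end)
		return NULL;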
>
>> +		return NULL;
>> +
>> +	entry = (rmpentry_t *)vaddr;
>> +
>> +	/*
>> +	 * Check if this page is covered by the large RMP entry. This is needed to get
>> +	 * the page level used in the RMP entry.
>> +	 *
> No need for a new line in the comment and no need for the "e.g." thing
> either.
>
> Also, s/the large RMP entry/a large RMP entry/g.
Noted.
>
>> +	 * e.g. if the page is covered by the large RMP entry then page size is set in the
>> +	 * base RMP entry.
>> +	 */
>> +	vaddr = rmptable_start + rmptable_page_offset(phys & PMD_MASK);
>> +	large_entry = (rmpentry_t *)vaddr;
>> +	*level = rmpentry_pagesize(large_entry);
>> +
>> +	return entry;
>> +}
>> +EXPORT_SYMBOL_GPL(lookup_page_in_rmptable);
> Exported for kvm?

The current users for this are: KVM, CCP, and the page fault handler.
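For context, roughly how a caller like KVM would use it (with the rename
applied; rmpentry_assigned() is one of the accessors added elsewhere in
this series, and the error codes here are only illustrative):

	int level;
	struct rmpentry *e;

	e = snp_lookup_page_in_rmptable(page, &level);
	if (!e)
		return -ENODEV;	/* SNP not enabled or page outside the RMP */

	/* e.g. refuse to reuse a page that is still assigned to a guest */
	if (rmpentry_assigned(e))
		return -EBUSY;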

-Brijesh
