Subject: Re: [PATCH v13 10/13] x86/sgx: Add sgx_einit() for initializing enclaves
On Fri, Aug 31, 2018 at 11:15:09AM -0700, Sean Christopherson wrote:
> On Fri, Aug 31, 2018 at 03:17:03PM +0300, Jarkko Sakkinen wrote:
> > On Wed, Aug 29, 2018 at 07:33:54AM +0000, Huang, Kai wrote:
> > > [snip..]
> > >
> > > > > >
> > > > > > @@ -38,6 +39,18 @@ static LIST_HEAD(sgx_active_page_list);
> > > > > >  static DEFINE_SPINLOCK(sgx_active_page_list_lock);
> > > > > >  static struct task_struct *ksgxswapd_tsk;
> > > > > >  static DECLARE_WAIT_QUEUE_HEAD(ksgxswapd_waitq);
> > > > > > +static struct notifier_block sgx_pm_notifier;
> > > > > > +static u64 sgx_pm_cnt;
> > > > > > +
> > > > > > +/* The cache for the last known values of IA32_SGXLEPUBKEYHASHx MSRs for each
> > > > > > + * CPU. The entries are initialized when they are first used by sgx_einit().
> > > > > > + */
> > > > > > +struct sgx_lepubkeyhash {
> > > > > > +	u64 msrs[4];
> > > > > > +	u64 pm_cnt;
> > > > >
> > > > > May I ask why we need pm_cnt here? In fact, why do we need the
> > > > > suspend stuff (namely sgx_pm_cnt above, and the related code in this
> > > > > patch) in this patch at all? The commit message doesn't explain why
> > > > > the PM code is needed. Please add a comment explaining why it is
> > > > > needed, or consider splitting the PM code into a separate patch.
> > > >
> > > > Refining the commit message probably makes more sense, because without
> > > > the PM code sgx_einit() would be broken: the MSRs are reset after waking
> > > > up from sleep.
> > > >
> > > > Some kind of counter is required to keep track of the power cycle. When
> > > > going to sleep, the global sgx_pm_cnt is increased. sgx_einit() compares
> > > > the current value of the global count to the value in the cache entry to
> > > > see whether we are in a new power cycle.
> > >
> > > You mean reset to the Intel default? I think we could also just reset the
> > > cached MSR values on each power cycle, which would be simpler, IMHO.
> >
> > I don't really see that much difference in the complexity.
>
> Tracking the validity of the cache means we're hosed if we miss any
> condition that causes the MSRs to be reset. I think we're better off
> assuming the cache can be stale at any time, i.e. don't track power
> cycles and instead handle EINIT failure due to INVALID_TOKEN by writing
> the cache+MSRs with the desired hash and retrying EINIT. EINIT is
> interruptible and its latency is extremely variable in any case, e.g.
> tens of thousands of cycles, so this rarely-hit "slow path" probably
> wouldn't affect the worst case latency of EINIT.
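
For illustration, a minimal sketch of the retry flow Sean describes might look
like the following. The helper and constant names used here (sgx_lepubkeyhash_cache,
sgx_update_lepubkeyhash_msrs(), __einit(), sgx_epc_addr(), SGX_INVALID_EINITTOKEN)
are assumptions for the sketch, not necessarily what the series ends up using:

/* Sketch only: assume the per-CPU MSR cache can be stale at any time. */
static DEFINE_PER_CPU(struct sgx_lepubkeyhash, sgx_lepubkeyhash_cache);

static void sgx_update_lepubkeyhash_msrs(u64 *lepubkeyhash, bool enforce)
{
	struct sgx_lepubkeyhash *cache = this_cpu_ptr(&sgx_lepubkeyhash_cache);
	int i;

	/* Skip the WRMSRs when the cache already matches, unless forced. */
	for (i = 0; i < 4; i++) {
		if (enforce || cache->msrs[i] != lepubkeyhash[i]) {
			wrmsrl(MSR_IA32_SGXLEPUBKEYHASH0 + i, lepubkeyhash[i]);
			cache->msrs[i] = lepubkeyhash[i];
		}
	}
}

static int sgx_einit(struct sgx_sigstruct *sigstruct,
		     struct sgx_einittoken *token,
		     struct sgx_epc_page *secs, u64 *lepubkeyhash)
{
	int ret;

	preempt_disable();

	/* Fast path: trust the cache and only write MSRs that differ. */
	sgx_update_lepubkeyhash_msrs(lepubkeyhash, false);

	ret = __einit(sigstruct, token, sgx_epc_addr(secs));
	if (ret == SGX_INVALID_EINITTOKEN) {
		/*
		 * The MSRs may have been reset behind our back (e.g. by
		 * suspend/resume): force a rewrite and retry EINIT once.
		 */
		sgx_update_lepubkeyhash_msrs(lepubkeyhash, true);
		ret = __einit(sigstruct, token, sgx_epc_addr(secs));
	}
	preempt_enable();

	return ret;
}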

Sounds like a good refinement. It is also a pretty good way to recover from
host sleep in a guest VM, and then there is no need for driver changes.

/Jarkko
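
For comparison, the counter-based validity check implied by the quoted hunk
(the scheme Sean argues against above) would look roughly like this; the
surrounding names are again assumptions made only for the sketch:

/*
 * Sketch of the original pm_cnt scheme: the cached MSR values are trusted
 * only if they were written during the current power cycle, tracked by the
 * global sgx_pm_cnt that is bumped on every suspend.
 */
static int sgx_einit(struct sgx_sigstruct *sigstruct,
		     struct sgx_einittoken *token,
		     struct sgx_epc_page *secs, u64 *lepubkeyhash)
{
	struct sgx_lepubkeyhash *cache;
	int i, ret;

	preempt_disable();
	cache = this_cpu_ptr(&sgx_lepubkeyhash_cache);

	/* Rewrite the MSRs if the cache is from an old power cycle or stale. */
	if (cache->pm_cnt != sgx_pm_cnt ||
	    memcmp(cache->msrs, lepubkeyhash, sizeof(cache->msrs))) {
		for (i = 0; i < 4; i++) {
			wrmsrl(MSR_IA32_SGXLEPUBKEYHASH0 + i, lepubkeyhash[i]);
			cache->msrs[i] = lepubkeyhash[i];
		}
		cache->pm_cnt = sgx_pm_cnt;
	}

	ret = __einit(sigstruct, token, sgx_epc_addr(secs));
	preempt_enable();

	return ret;
}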
