    Date: 2019-04-12
    Subject: Re: [PATCH 24/27] x86/fpu: Add a fastpath to __fpu__restore_sig()

    On Wed, Apr 03, 2019 at 06:41:53PM +0200, Sebastian Andrzej Siewior wrote:
    > The previous commits refactor the restoration of the FPU registers so
    > that they can be loaded from in-kernel memory. Going through the
    > kernel buffer adds overhead that can be avoided if the load can be
    > performed directly from user memory without a page fault.
    >
    > Attempt to restore the FPU registers by invoking
    > copy_user_to_fpregs_zeroing(). If that fails, fall back to the slow
    > path, which can handle page faults.
    >
    > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    > ---
    > arch/x86/kernel/fpu/signal.c | 16 ++++++++++++++--
    > 1 file changed, 14 insertions(+), 2 deletions(-)
    >
    > diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
    > index a5b086ec426a5..f20e1d1fffa29 100644
    > --- a/arch/x86/kernel/fpu/signal.c
    > +++ b/arch/x86/kernel/fpu/signal.c
    > @@ -242,10 +242,10 @@ sanitize_restored_xstate(union fpregs_state *state,
    >  /*
    >   * Restore the extended state if present. Otherwise, restore the FP/SSE state.
    >   */
    > -static inline int copy_user_to_fpregs_zeroing(void __user *buf, u64 xbv, int fx_only)
    > +static int copy_user_to_fpregs_zeroing(void __user *buf, u64 xbv, int fx_only)
    >  {
    >  	if (use_xsave()) {
    > -		if ((unsigned long)buf % 64 || fx_only) {
    > +		if (fx_only) {
    >  			u64 init_bv = xfeatures_mask & ~XFEATURE_MASK_FPSSE;
    >  			copy_kernel_to_xregs(&init_fpstate.xsave, init_bv);
    >  			return copy_user_to_fxregs(buf);
    > @@ -327,7 +327,19 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
    >  		if (ret)
    >  			goto err_out;
    >  		envp = &env;
    > +	} else {

    I've added here:

    +		/*
    +		 * Attempt to restore the FPU registers directly from user
    +		 * memory. For that to succeed, the user accesses cannot cause
    +		 * page faults. If they do, fall back to the slow path below,
    +		 * going through the kernel buffer.
    +		 */

    so that it is clear what's happening.
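    For context, the body of that else branch is trimmed above. A sketch of
    how the fastpath plausibly reads with the comment in place; the
    fpregs_lock()/fpregs_mark_activate() helpers and the pagefault_disable()
    section are assumed from the rest of the series, not quoted from this
    hunk:

    	} else {
    		/* (the comment proposed above goes here) */
    		fpregs_lock();
    		/* A faulting user access now fails instead of sleeping. */
    		pagefault_disable();
    		ret = copy_user_to_fpregs_zeroing(buf_fx, xfeatures, fx_only);
    		pagefault_enable();
    		if (!ret) {
    			/* Fastpath hit: registers are loaded, we are done. */
    			fpregs_mark_activate();
    			fpregs_unlock();
    			return 0;
    		}
    		/* Fastpath failed: fall through to the slow path below. */
    		fpregs_unlock();
    	}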

    This function is doing a gazillion things again ;-\

    --
    Regards/Gruss,
    Boris.

    Good mailing practices for 400: avoid top-posting and trim the reply.
