 
    From: Joerg Roedel <jroedel@suse.de>
    Subject: [PATCH 2/3] x86/entry/32: Check for VM86 mode in slow-path check
    Date: 2018-07-20

    The SWITCH_TO_KERNEL_STACK macro only checks for CPL == 0 to
    decide whether to take the slow and paranoid entry path. The
    problem is that this check also returns true when coming from
    VM86 mode, because the CS value saved on the stack is then a
    real-mode style segment whose low two bits (the RPL field) can
    be zero. This is not a problem by itself, as the paranoid path
    handles VM86 stack frames just fine, but it is unnecessary
    because the normal code path handles VM86 mode as well, and
    faster.

    Extend the check to include VM86 mode. This also makes an
    optimization of the paranoid path possible.
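
    In C terms, the extended check boils down to the following
    sketch (illustration only, not part of the patch; the constant
    values mirror the kernel's X86_EFLAGS_VM, SEGMENT_RPL_MASK and
    USER_RPL definitions):

    #include <stdbool.h>
    #include <stdint.h>

    #define X86_EFLAGS_VM    0x00020000u  /* EFLAGS.VM: virtual-8086 mode   */
    #define SEGMENT_RPL_MASK 0x00000003u  /* RPL bits of a segment selector */
    #define USER_RPL         0x00000003u  /* RPL of user-space selectors    */

    /*
     * A frame came from kernel mode only if neither the VM flag nor
     * a user RPL is present in the saved EFLAGS/CS.
     */
    static bool entry_from_kernel(uint32_t eflags, uint32_t cs)
    {
        return ((eflags & X86_EFLAGS_VM) | (cs & SEGMENT_RPL_MASK)) < USER_RPL;
    }

    In the assembly below, the movb into %cl merges the low byte of
    the saved CS over the low byte of EFLAGS already in %ecx; the
    following andl keeps only the VM bit and the RPL bits, so the
    clobbered EFLAGS byte does not matter.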

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
    ---
    arch/x86/entry/entry_32.S | 12 ++++++++++--
    1 file changed, 10 insertions(+), 2 deletions(-)

    diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
    index 010cdb4..2767c62 100644
    --- a/arch/x86/entry/entry_32.S
    +++ b/arch/x86/entry/entry_32.S
    @@ -414,8 +414,16 @@
    andl $(0x0000ffff), PT_CS(%esp)

    /* Special case - entry from kernel mode via entry stack */
    - testl $SEGMENT_RPL_MASK, PT_CS(%esp)
    - jz .Lentry_from_kernel_\@
    +#ifdef CONFIG_VM86
    + movl PT_EFLAGS(%esp), %ecx # mix EFLAGS and CS
    + movb PT_CS(%esp), %cl
    + andl $(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %ecx
    +#else
    + movl PT_CS(%esp), %ecx
    + andl $SEGMENT_RPL_MASK, %ecx
    +#endif
    + cmpl $USER_RPL, %ecx
    + jb .Lentry_from_kernel_\@

    /* Bytes to copy */
    movl $PTREGS_SIZE, %ecx
    --
    2.7.4