    Subject: [PATCH 5.15 35/89] x86/kvm: Fix SETcc emulation for return thunks
    From: Peter Zijlstra <peterz@infradead.org>

    commit af2e140f34208a5dfb6b7a8ad2d56bda88f0524d upstream.

    Prepare the SETcc fastop stuff for when RET can be larger still.

    The tricky bit here is that the expressions should not only be
    constant C expressions, but also absolute GAS expressions. This means
    no ?: and 'true' is ~0.
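
    As a standalone illustration of that constraint (a sketch, not part
    of the patch): the "& 1" masking in the SETCC_ALIGN expression added
    below is what lets one and the same expression evaluate identically
    for both the C compiler and GAS, since GAS evaluates a true
    comparison to ~0 rather than 1. The gas_true() helper here is
    hypothetical and only models the GAS convention.

    #include <assert.h>

    /* same shape as the SETCC_ALIGN expression in the patch */
    #define ALIGN_C(len)   (4 << (((len) > 4) & 1) << (((len) > 8) & 1))

    /* model the GAS convention: a true comparison is ~0, not 1 */
    #define gas_true(x)    ((x) ? ~0 : 0)
    #define ALIGN_GAS(len) (4 << (gas_true((len) > 4) & 1) << (gas_true((len) > 8) & 1))

    int main(void)
    {
            /* masking with & 1 makes both truth conventions agree */
            for (int len = 4; len <= 16; len++)
                    assert(ALIGN_C(len) == ALIGN_GAS(len));
            /* rounds up to the next power of two: 4, 8 or 16 */
            assert(ALIGN_C(4) == 4 && ALIGN_C(5) == 8 && ALIGN_C(9) == 16);
            return 0;
    }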

    Also give em_setcc() the same alignment as the actual FOP_SETCC()
    ops; this ensures there cannot be an alignment hole between
    em_setcc() and the first op.
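
    For context on why that matters (paraphrased from emulate.c rather
    than quoted; test_cc() is the consumer): a stub's address is found
    by adding SETCC_ALIGN-sized offsets to em_setcc, so any padding
    between em_setcc and the first stub would shift every lookup. A toy
    model of that indexing, with hypothetical names rather than kernel
    code:

    #include <assert.h>
    #include <stddef.h>

    #define SETCC_ALIGN 16  /* e.g. RETPOLINE=y, SLS=y: 3 + 5 + 1 = 9 bytes, rounded to 16 */

    /* stand-in for the em_setcc stub table: 16 stubs, SETCC_ALIGN apart */
    static unsigned char em_setcc_model[16 * SETCC_ALIGN];

    static unsigned char *setcc_stub(unsigned int opcode)
    {
            return em_setcc_model + SETCC_ALIGN * (opcode & 0xf);
    }

    int main(void)
    {
            /* 0x93 is setae: low nibble 3 selects the fourth slot */
            assert(setcc_stub(0x93) - em_setcc_model == (ptrdiff_t)(3 * SETCC_ALIGN));
            return 0;
    }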

    Additionally, add a .skip directive to the FOP_SETCC() macro to fill
    any remaining space with INT3 traps; however, the primary purpose of
    this directive is to generate AS warnings when the remaining space
    goes negative, which is a very good indication the alignment magic
    went sideways.
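
    A rough sketch of that padding trick (simplified: a hardcoded
    16-byte slot instead of the real SETCC_ALIGN/FOP_SETCC() macros,
    and a plain RET with no return thunk):

    /* pad one stub to a 16-byte slot with 0xcc (INT3) bytes; if the
     * stub ever outgrows the slot, the .skip size goes negative and
     * the assembler complains, flagging the broken alignment math */
    asm(".pushsection .text, \"ax\"\n\t"
        ".align 16\n\t"
        "example_setc_stub:\n\t"
        "setc %al\n\t"          /* 3 bytes */
        "ret\n\t"               /* 1 byte */
        ".skip 16 - (. - example_setc_stub), 0xcc\n\t"
        ".popsection");

    int main(void) { return 0; }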

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
    Signed-off-by: Borislav Petkov <bp@suse.de>
    [cascardo: ignore ENDBR when computing SETCC_LENGTH]
    [cascardo: conflict fixup]
    Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/kvm/emulate.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -321,13 +321,15 @@ static int fastop(struct x86_emulate_ctx
 #define FOP_RET(name) \
         __FOP_RET(#name)
 
-#define FOP_START(op) \
+#define __FOP_START(op, align) \
         extern void em_##op(struct fastop *fake); \
         asm(".pushsection .text, \"ax\" \n\t" \
             ".global em_" #op " \n\t" \
-            ".align " __stringify(FASTOP_SIZE) " \n\t" \
+            ".align " __stringify(align) " \n\t" \
             "em_" #op ":\n\t"
 
+#define FOP_START(op) __FOP_START(op, FASTOP_SIZE)
+
 #define FOP_END \
         ".popsection")
 
@@ -431,15 +433,14 @@ static int fastop(struct x86_emulate_ctx
 /*
  * Depending on .config the SETcc functions look like:
  *
- * SETcc %al                    [3 bytes]
- * RET                          [1 byte]
- * INT3                         [1 byte; CONFIG_SLS]
- *
- * Which gives possible sizes 4 or 5. When rounded up to the
- * next power-of-two alignment they become 4 or 8.
+ * SETcc %al                    [3 bytes]
+ * RET | JMP __x86_return_thunk [1,5 bytes; CONFIG_RETPOLINE]
+ * INT3                         [1 byte; CONFIG_SLS]
  */
-#define SETCC_LENGTH    (4 + IS_ENABLED(CONFIG_SLS))
-#define SETCC_ALIGN     (4 << IS_ENABLED(CONFIG_SLS))
+#define RET_LENGTH      (1 + (4 * IS_ENABLED(CONFIG_RETPOLINE)) + \
+                         IS_ENABLED(CONFIG_SLS))
+#define SETCC_LENGTH    (3 + RET_LENGTH)
+#define SETCC_ALIGN     (4 << ((SETCC_LENGTH > 4) & 1) << ((SETCC_LENGTH > 8) & 1))
 static_assert(SETCC_LENGTH <= SETCC_ALIGN);
 
 #define FOP_SETCC(op) \
@@ -447,13 +448,14 @@ static_assert(SETCC_LENGTH <= SETCC_ALIG
         ".type " #op ", @function \n\t" \
         #op ": \n\t" \
         #op " %al \n\t" \
-        __FOP_RET(#op)
+        __FOP_RET(#op) \
+        ".skip " __stringify(SETCC_ALIGN) " - (.-" #op "), 0xcc \n\t"
 
 asm(".pushsection .fixup, \"ax\"\n"
     "kvm_fastop_exception: xor %esi, %esi; " ASM_RET
     ".popsection");
 
-FOP_START(setcc)
+__FOP_START(setcc, SETCC_ALIGN)
 FOP_SETCC(seto)
 FOP_SETCC(setno)
 FOP_SETCC(setc)
