Subject: [tip: x86/core] crypto: x86/serpent: Remove redundant alignments
    The following commit has been merged into the x86/core branch of tip:

    Commit-ID: 8b44221671ec45d725a4558ff7aa5ea90ecfc885
    Gitweb: https://git.kernel.org/tip/8b44221671ec45d725a4558ff7aa5ea90ecfc885
    Author: Thomas Gleixner <tglx@linutronix.de>
    AuthorDate: Thu, 15 Sep 2022 13:10:55 +02:00
    Committer: Peter Zijlstra <peterz@infradead.org>
    CommitterDate: Mon, 17 Oct 2022 16:41:01 +02:00

    crypto: x86/serpent: Remove redundant alignments

    SYM_FUNC_START*() and friends already imply alignment; remove the custom
    alignment hacks to make the code consistent. This prepares for future
    function call ABI changes.

    Also, with the function alignment now pushed out to 16 bytes, this
    custom alignment is completely superfluous.
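
    For reference, a rough sketch of where that implied alignment comes
    from (paraphrased from include/linux/linkage.h and
    arch/x86/include/asm/linkage.h; the exact expansion and fill byte
    depend on the tree and configuration, so treat this as illustrative,
    not the literal in-tree definitions):

        /* Illustrative only -- paraphrased, not the exact in-tree code. */
        #define SYM_FUNC_START_LOCAL(name) \
                SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)  /* local symbol, aligned entry */

        /* SYM_A_ALIGN resolves to the arch ALIGN directive; on x86 roughly: */
        #define __ALIGN         .p2align 4, 0x90           /* 16-byte alignment, NOP fill */

    Every SYM_FUNC_START*() site therefore already emits at least 16-byte
    alignment, so an explicit ".align 8" in front of it is both redundant
    and weaker than what the macro provides.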

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20220915111144.558544791@infradead.org
    ---
     arch/x86/crypto/serpent-avx-x86_64-asm_64.S | 2 --
     arch/x86/crypto/serpent-avx2-asm_64.S       | 2 --
     2 files changed, 4 deletions(-)

    diff --git a/arch/x86/crypto/serpent-avx-x86_64-asm_64.S b/arch/x86/crypto/serpent-avx-x86_64-asm_64.S
    index 82f2313..97e2836 100644
    --- a/arch/x86/crypto/serpent-avx-x86_64-asm_64.S
    +++ b/arch/x86/crypto/serpent-avx-x86_64-asm_64.S
    @@ -550,7 +550,6 @@
     #define write_blocks(x0, x1, x2, x3, t0, t1, t2) \
     	transpose_4x4(x0, x1, x2, x3, t0, t1, t2)
     
    -.align 8
     SYM_FUNC_START_LOCAL(__serpent_enc_blk8_avx)
     	/* input:
     	 *	%rdi: ctx, CTX
    @@ -604,7 +603,6 @@ SYM_FUNC_START_LOCAL(__serpent_enc_blk8_avx)
     	RET;
     SYM_FUNC_END(__serpent_enc_blk8_avx)
     
    -.align 8
     SYM_FUNC_START_LOCAL(__serpent_dec_blk8_avx)
     	/* input:
     	 *	%rdi: ctx, CTX
    diff --git a/arch/x86/crypto/serpent-avx2-asm_64.S b/arch/x86/crypto/serpent-avx2-asm_64.S
    index 8ea34c9..6d60c50 100644
    --- a/arch/x86/crypto/serpent-avx2-asm_64.S
    +++ b/arch/x86/crypto/serpent-avx2-asm_64.S
    @@ -550,7 +550,6 @@
     #define write_blocks(x0, x1, x2, x3, t0, t1, t2) \
     	transpose_4x4(x0, x1, x2, x3, t0, t1, t2)
     
    -.align 8
     SYM_FUNC_START_LOCAL(__serpent_enc_blk16)
     	/* input:
     	 *	%rdi: ctx, CTX
    @@ -604,7 +603,6 @@ SYM_FUNC_START_LOCAL(__serpent_enc_blk16)
     	RET;
     SYM_FUNC_END(__serpent_enc_blk16)
     
    -.align 8
     SYM_FUNC_START_LOCAL(__serpent_dec_blk16)
     	/* input:
     	 *	%rdi: ctx, CTX