Date: Fri, 12 May 2023 14:49:11 +0200
From: Peter Zijlstra <>
Subject: Re: [PATCH] x86/alternative: Optimize returns patching
On Fri, May 12, 2023 at 02:09:52PM +0200, Borislav Petkov wrote:
> From: "Borislav Petkov (AMD)" <bp@alien8.de>
>
> Instead of decoding each instruction in the return sites range only to
> realize that that return site is a jump to the default return thunk
> which is needed - X86_FEATURE_RETHUNK is enabled - lift that check
> before the loop and get rid of that loop overhead.
>
> Add comments about what gets patched, while at it.
>
> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  arch/x86/kernel/alternative.c | 13 ++++++++++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
> index b78d55f0dfad..3bb0a5f61e8c 100644
> --- a/arch/x86/kernel/alternative.c
> +++ b/arch/x86/kernel/alternative.c
> @@ -693,13 +693,12 @@ static int patch_return(void *addr, struct insn *insn, u8 *bytes)
>  {
>  	int i = 0;
>
> +	/* Patch the custom return thunks... */
>  	if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) {
> -		if (x86_return_thunk == __x86_return_thunk)
> -			return -1;
> -
>  		i = JMP32_INSN_SIZE;
>  		__text_gen_insn(bytes, JMP32_INSN_OPCODE, addr, x86_return_thunk, i);
>  	} else {
> +		/* ... or patch them out if not needed. */
>  		bytes[i++] = RET_INSN_OPCODE;
>  	}
>
> @@ -712,6 +711,14 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end)
>  {
>  	s32 *s;
>
> +	/*
> +	 * Do not patch out the default return thunks if those needed are the
> +	 * ones generated by the compiler.
> +	 */
> +	if (cpu_feature_enabled(X86_FEATURE_RETHUNK) &&
> +	    (x86_return_thunk == __x86_return_thunk))
> +		return;
> +
>  	for (s = start; s < end; s++) {
>  		void *dest = NULL, *addr = (void *)s + *s;
>  		struct insn insn;
> --
> 2.35.1
>
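For reference, the JMP32 that __text_gen_insn() emits at each return site above is a plain 5-byte near jump: opcode 0xE9 followed by a little-endian 32-bit displacement measured from the end of the instruction to the target thunk. Below is a minimal user-space sketch of that encoding; it is an illustration only, not the kernel helper, though the constants mirror the values the kernel uses.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define JMP32_INSN_OPCODE 0xE9   /* near jump, rel32 */
    #define JMP32_INSN_SIZE   5
    #define RET_INSN_OPCODE   0xC3   /* single-byte near return, the "patch out" case */

    /*
     * Emit "jmp rel32" at addr targeting dest: one opcode byte followed by a
     * little-endian 32-bit displacement relative to the end of the instruction.
     */
    static void gen_jmp32(uint8_t *bytes, const void *addr, const void *dest)
    {
            int32_t rel = (int32_t)((intptr_t)dest - ((intptr_t)addr + JMP32_INSN_SIZE));

            bytes[0] = JMP32_INSN_OPCODE;
            memcpy(&bytes[1], &rel, sizeof(rel));
    }

    int main(void)
    {
            uint8_t buf[JMP32_INSN_SIZE];

            /* Pretend a return site at 0x1000 should jump to a thunk at 0x2000. */
            gen_jmp32(buf, (void *)0x1000, (void *)0x2000);

            for (int i = 0; i < JMP32_INSN_SIZE; i++)
                    printf("%02x ", buf[i]);
            printf("\n");   /* prints: e9 fb 0f 00 00 */

            return 0;
    }

When the default compiler-generated thunk is in use, no rewriting is needed at all, which is exactly the case the new early return in apply_returns() handles instead of re-discovering it while decoding every return site.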