Subject: Re: [PATCH v6 11/12] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions

On Mon, Apr 10, 2023 at 03:59:35PM -0700, Chang S. Bae wrote:
> [PATCH v6 11/12] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions

Thanks for dropping the unnecessary modes of operation (CBC, ECB, CTR). It
simplified the patchset quite a bit!

Now that only AES-XTS is included, can you please also merge this patch with the
following patch? As-is, this patch is misleading since it doesn't actually add
"support" for anything at all. It actually just adds an unfinished AES-XTS
implementation, which patch 12 then finishes. I assume that the current
patchset organization is left over from when you were trying to support multiple
modes of operation. IMO, it would be much easier to review if patches 11-12
were merged into one patch that adds the new AES-XTS implementation.

> For disk encryption, storage bandwidth may be the bottleneck
> before encryption bandwidth, but the potential performance difference is
> why AES-KL is advertised as a distinct cipher in /proc/crypto rather than
> the kernel transparently replacing AES-NI usage with AES-KL.

This does not correctly describe what is going on. Actually, this patchset
registers the AES-KL XTS algorithm with the usual name "xts(aes)". So, it can
potentially be used by any AES-XTS user. It seems that you're actually relying
on the algorithm priorities to prioritize AES-NI, as you've assigned priority
200 to AES-KL, whereas AES-NI has priority 401. Is that what you intend? If so,
can you please update the explanation to describe this properly?
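
For reference, this is roughly what that looks like on the registration side
(an abbreviated sketch; the array and driver names here are illustrative, not
copied from the patch):

	/* Abbreviated sketch of the registration; most fields omitted. */
	static struct skcipher_alg aeskl_skcipher_algs[] = {
		{
			.base.cra_name		= "xts(aes)",	/* same name AES-NI registers */
			.base.cra_driver_name	= "xts-aes-aeskl",	/* illustrative */
			.base.cra_priority	= 200,	/* AES-NI's xts(aes) is 401 */
			/* ... */
		},
	};

Since the crypto API resolves "xts(aes)" to the highest-priority implementation
available, AES-NI would still be chosen whenever both are registered.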

The alternative would be to use a unique algorithm name, such as
"keylocker-xts(aes)". I'm not sure that would be better, given that the
algorithms are compatible. However, it would match the explanation you gave
more closely, so perhaps that is what you actually intended?

> diff --git a/arch/x86/crypto/aeskl-intel_asm.S b/arch/x86/crypto/aeskl-intel_asm.S
[...]
> +#ifdef __x86_64__
> +#define AREG %rax
> +#define HANDLEP %rdi
> +#define OUTP %rsi
> +#define KLEN %r9d
> +#define INP %rdx
> +#define T1 %r10
> +#define LEN %rcx
> +#define IVP %r8
> +#else
> +#define AREG %eax
> +#define HANDLEP %edi
> +#define OUTP AREG
> +#define KLEN %ebx
> +#define INP %edx
> +#define T1 %ecx
> +#define LEN %esi
> +#define IVP %ebp
> +#endif

I strongly recommend skipping the 32-bit support, as it's unlikely to be worth
the effort.

And actually, aeskl-intel_glue.c only registers the algorithm for 64-bit
anyway... So the 32-bit code paths are untested dead code.

> +static inline int aeskl_enc(const void *ctx, u8 *out, const u8 *in)
> +{
> +	if (unlikely(keylength(ctx) == AES_KEYSIZE_192))
> +		return -EINVAL;
> +	else if (!valid_keylocker())
> +		return -ENODEV;
> +	else if (_aeskl_enc(ctx, out, in))
> +		return -EINVAL;
> +	else
> +		return 0;
> +}
> +
> +static inline int aeskl_dec(const void *ctx, u8 *out, const u8 *in)
> +{
> +	if (unlikely(keylength(ctx) == AES_KEYSIZE_192))
> +		return -EINVAL;
> +	else if (!valid_keylocker())
> +		return -ENODEV;
> +	else if (_aeskl_dec(ctx, out, in))
> +		return -EINVAL;
> +	else
> +		return 0;
> +}

I don't think the above two functions should exist. keylength() and
valid_keylocker() should be checked before calling xts_crypt_common(), and the
assembly functions should just return the correct error code (-EINVAL,
apparently) instead of an unspecified nonzero value. That would leave nothing
for a wrapper function to do.

Note: if you take this suggestion, the assembly functions would need to use
SYM_TYPED_FUNC_START instead of SYM_FUNC_START.
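
For example, the checks could move into the skcipher entry points, something
like this (a rough sketch only; the context struct, its accessor, and the exact
xts_crypt_common() argument list are assumptions, not taken from the patch):

	static int xts_encrypt(struct skcipher_request *req)
	{
		struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
		/* Struct and field names below are guesses at the patch's context layout. */
		struct aeskl_xts_ctx *ctx = crypto_skcipher_ctx(tfm);

		/* Do the sanity checks once, up front. */
		if (unlikely(keylength(&ctx->crypt_ctx) == AES_KEYSIZE_192))
			return -EINVAL;
		if (!valid_keylocker())
			return -ENODEV;

		/*
		 * With _aeskl_enc() returning -EINVAL itself on failure (and
		 * declared with SYM_TYPED_FUNC_START), it can be passed in
		 * directly and the aeskl_enc() wrapper goes away.
		 */
		return xts_crypt_common(req, aeskl_xts_encrypt, _aeskl_enc);
	}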

- Eric
