From: Kees Cook <keescook@chromium.org>
Subject: [PATCH] kunit: memcpy: Split slow memcpy tests into MEMCPY_SLOW_KUNIT_TEST
Since the long memcpy tests may stall a system for tens of seconds
in virtualized architecture environments, split those tests off under
CONFIG_MEMCPY_SLOW_KUNIT_TEST so they can be separately disabled.
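
As an illustration, once this is applied, a minimal kconfig fragment for
a test build that keeps the memcpy KUnit tests but leaves the exhaustive
cases off might look something like (option names taken from the hunks
below):

  CONFIG_KUNIT=y
  CONFIG_MEMCPY_KUNIT_TEST=y
  # CONFIG_MEMCPY_SLOW_KUNIT_TEST is not set

With CONFIG_KUNIT_ALL_TESTS=y the new option instead defaults to y, so
all-test runs still cover the slow cases.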

Reported-by: Guenter Roeck <linux@roeck-us.net>
Link: https://lore.kernel.org/lkml/20221226195206.GA2626419@roeck-us.net
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-and-tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: David Gow <davidgow@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: linux-hardening@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
v2: fix tristate to bool
v1: https://lore.kernel.org/lkml/20230107040203.never.112-kees@kernel.org
---
 lib/Kconfig.debug  |  9 +++++++++
 lib/memcpy_kunit.c | 15 ++++++++++++---
 2 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index c2c78d0e761c..f90637171453 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2621,6 +2621,15 @@ config MEMCPY_KUNIT_TEST
 
 	  If unsure, say N.
 
+config MEMCPY_SLOW_KUNIT_TEST
+	bool "Include exhaustive memcpy tests" if !KUNIT_ALL_TESTS
+	depends on MEMCPY_KUNIT_TEST
+	default KUNIT_ALL_TESTS
+	help
+	  Some memcpy tests are quite exhaustive in checking for overlaps
+	  and bit ranges. These can be very slow, so they are split out
+	  as a separate config.
+
 config IS_SIGNED_TYPE_KUNIT_TEST
 	tristate "Test is_signed_type() macro" if !KUNIT_ALL_TESTS
 	depends on KUNIT
diff --git a/lib/memcpy_kunit.c b/lib/memcpy_kunit.c
index 89128551448d..5a545e1b5dbb 100644
--- a/lib/memcpy_kunit.c
+++ b/lib/memcpy_kunit.c
@@ -307,8 +307,12 @@ static void set_random_nonzero(struct kunit *test, u8 *byte)
 	}
 }
 
-static void init_large(struct kunit *test)
+static int init_large(struct kunit *test)
 {
+	if (!IS_ENABLED(CONFIG_MEMCPY_SLOW_KUNIT_TEST)) {
+		kunit_skip(test, "Slow test skipped. Enable with CONFIG_MEMCPY_SLOW_KUNIT_TEST=y");
+		return -EBUSY;
+	}
 
 	/* Get many bit patterns. */
 	get_random_bytes(large_src, ARRAY_SIZE(large_src));
@@ -319,6 +323,8 @@ static void init_large(struct kunit *test)
 
 	/* Explicitly zero the entire destination. */
 	memset(large_dst, 0, ARRAY_SIZE(large_dst));
+
+	return 0;
 }
 
 /*
@@ -327,7 +333,9 @@ static void init_large(struct kunit *test)
  */
 static void copy_large_test(struct kunit *test, bool use_memmove)
 {
-	init_large(test);
+
+	if (init_large(test))
+		return;
 
 	/* Copy a growing number of non-overlapping bytes ... */
 	for (int bytes = 1; bytes <= ARRAY_SIZE(large_src); bytes++) {
@@ -472,7 +480,8 @@ static void memmove_overlap_test(struct kunit *test)
 	static const int bytes_start = 1;
 	static const int bytes_end = ARRAY_SIZE(large_src) + 1;
 
-	init_large(test);
+	if (init_large(test))
+		return;
 
 	/* Copy a growing number of overlapping bytes ... */
 	for (int bytes = bytes_start; bytes < bytes_end;
--
2.34.1