Subject: Re: [PATCH v4 00/10] Function Granular KASLR
Hi,

> On Fri, 17 Jul 2020, Kristen Carlson Accardi wrote:
>
>> Function Granular Kernel Address Space Layout Randomization (fgkaslr)
>> ---------------------------------------------------------------------
>>
>> This patch set is an implementation of finer grained kernel address space
>> randomization. It rearranges your kernel code at load time
>> on a per-function level granularity, with only around a second added to
>> boot time.
>
> [...]

>> Modules
>> -------
>> Modules are randomized similarly to the rest of the kernel by shuffling
>> the sections at load time prior to moving them into memory. The module must
>> also have been build with the -ffunction-sections compiler option.

It seems a couple more adjustments are needed in the module loader code.

With function granular KASLR, modules will have lots of ELF sections due
to -ffunction-sections.
On my x86_64 system with kernel 5.8-rc7 plus the FG KASLR patches, for
example, xfs.ko has 4849 ELF sections in total; 2428 of them are loaded
and shown in /sys/module/xfs/sections/.
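
For reference, the two numbers can be checked roughly as follows (the path
to xfs.ko is only an example and may differ between distros; 'ls -A' is
needed because most section names start with a dot):

# Total number of ELF sections in the module file.
readelf -h /lib/modules/$(uname -r)/kernel/fs/xfs/xfs.ko | grep 'Number of section headers'

# Sections actually loaded and shown in sysfs.
ls -A /sys/module/xfs/sections/ | wc -l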

There are at least two places where high-order memory allocations might
happen during module loading. Such allocations may fail if memory is
fragmented, although physically contiguous memory is not really needed
there. I suggest switching to kvmalloc/kvfree in those places.

1. kernel/module.c, randomize_text():

	Elf_Shdr **text_list;
	...
	int max_sections = info->hdr->e_shnum;
	...
	text_list = kmalloc_array(max_sections, sizeof(*text_list), GFP_KERNEL);

If I understand it right, the size of the allocated memory area is
8 * <total number of sections> bytes, which is 38792 bytes for xfs.ko.
kmalloc() serves such a large request straight from the page allocator
(hence kmalloc_order() in the trace below), so this becomes a 4th-order
allocation (16 pages).

2. kernel/module.c, mod_sysfs_setup() => add_sect_attrs().

This allocation can be larger than the first one.

We found this issue with livepatch modules some time ago (these modules
are already built with -ffunction-sections) [1], but, with FG KASLR, it
affects all kernel modules. Large ones like xfs.ko, btrfs.ko, etc.,
could suffer the most from it.

When a module is loaded, sysfs attributes are created for its loaded ELF
sections (visible as /sys/module/<module_name>/sections/*); they contain
the start addresses of these sections. A single memory chunk is allocated
for all of them:

	size[0] = ALIGN(struct_size(sect_attrs, attrs, nloaded),
			sizeof(sect_attrs->grp.bin_attrs[0]));
	size[1] = (nloaded + 1) * sizeof(sect_attrs->grp.bin_attrs[0]);
	sect_attrs = kzalloc(size[0] + size[1], GFP_KERNEL);

'nloaded' is the number of loaded ELF sections in the module.

For kernel 5.8-rc7 on my system, the total size is 56 + 72 * nloaded bytes,
which is 174872 bytes for xfs.ko. That is 43 pages, so it results in a
6th-order allocation (64 pages).

I enabled the 'mm_page_alloc' tracepoint with the filter 'order > 3' to
confirm the issue (the setup is sketched after the trace excerpt below)
and, indeed, got these two allocations when modprobe'ing xfs:
----------------------------
/sys/kernel/debug/tracing/trace:
modprobe-1509 <...>: mm_page_alloc: <...> order=4 migratetype=0 gfp_flags=GFP_KERNEL|__GFP_COMP
modprobe-1509 <stack trace>
=> __alloc_pages_nodemask
=> alloc_pages_current
=> kmalloc_order
=> kmalloc_order_trace
=> __kmalloc
=> load_module

modprobe-1509 <...>: mm_page_alloc: <...> order=6 migratetype=0 gfp_flags=GFP_KERNEL|__GFP_COMP|__GFP_ZERO
modprobe-1509 <stack trace>
=> __alloc_pages_nodemask
=> alloc_pages_current
=> kmalloc_order
=> kmalloc_order_trace
=> __kmalloc
=> mod_sysfs_setup
=> load_module
----------------------------
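
For reference, the tracepoint can be enabled roughly like this (assuming
tracefs is mounted at /sys/kernel/debug/tracing):

cd /sys/kernel/debug/tracing
echo 'order > 3' > events/kmem/mm_page_alloc/filter
echo 1 > options/stacktrace            # record a stack trace for each event
echo 1 > events/kmem/mm_page_alloc/enable
modprobe xfs
cat trace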

I suppose something like this could be used as a workaround:

* for randomize_text():
-----------
diff --git a/kernel/module.c b/kernel/module.c
index 0f4f4e567a42..a2473db1d0a3 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -2433,7 +2433,7 @@ static void randomize_text(struct module *mod, struct load_info *info)
 	if (sec == 0)
 		return;
 
-	text_list = kmalloc_array(max_sections, sizeof(*text_list), GFP_KERNEL);
+	text_list = kvmalloc_array(max_sections, sizeof(*text_list), GFP_KERNEL);
 	if (!text_list)
 		return;
 
@@ -2466,7 +2466,7 @@ static void randomize_text(struct module *mod, struct load_info *info)
 		shdr->sh_entsize = get_offset(mod, &size, shdr, 0);
 	}
 
-	kfree(text_list);
+	kvfree(text_list);
 }
 
 /* Lay out the SHF_ALLOC sections in a way not dissimilar to how ld
-----------
* for add_sect_attrs():
-----------
diff --git a/kernel/module.c b/kernel/module.c
index 0f4f4e567a42..a2473db1d0a3 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -1541,7 +1541,7 @@ static void free_sect_attrs(struct module_sect_attrs *sect_attrs)
 
 	for (section = 0; section < sect_attrs->nsections; section++)
 		kfree(sect_attrs->attrs[section].battr.attr.name);
-	kfree(sect_attrs);
+	kvfree(sect_attrs);
 }
 
 static void add_sect_attrs(struct module *mod, const struct load_info *info)
@@ -1558,7 +1558,7 @@ static void add_sect_attrs(struct module *mod, const struct load_info *info)
 	size[0] = ALIGN(struct_size(sect_attrs, attrs, nloaded),
 			sizeof(sect_attrs->grp.bin_attrs[0]));
 	size[1] = (nloaded + 1) * sizeof(sect_attrs->grp.bin_attrs[0]);
-	sect_attrs = kzalloc(size[0] + size[1], GFP_KERNEL);
+	sect_attrs = kvzalloc(size[0] + size[1], GFP_KERNEL);
 	if (sect_attrs == NULL)
 		return;
 
-----------
[1] https://github.com/dynup/kpatch/pull/1131

Regards,
Evgenii
