Subject: Re: WARNING: at mm/mremap.c:211 move_page_tables in i386
On Thu, Jul 9, 2020 at 9:29 PM Naresh Kamboju <naresh.kamboju@linaro.org> wrote:
>
> Your patch applied and re-tested.
> warning triggered 10 times.
>
> old: bfe00000-c0000000 new: bfa00000 (val: 7d530067)

Hmm.. It's not even the overlapping case, it's literally just "move
exactly 2MB of page tables exactly one pmd down". That should be the
nice efficient case where we can do it without modifying the lower
page tables at all: we just move the PMD entry.
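
To make that concrete, here is a toy, self-contained sketch of that
fast path (this is NOT the real mm/mremap.c code, and the names and
the second pmd value below are made up for illustration): instead of
copying the 512 individual pte entries, you detach the page-table page
from the old pmd slot and plug it into the new one, which is only
valid if the destination slot is empty. The warning that triggered is
about that destination slot unexpectedly not being empty.

#include <stdio.h>
#include <stdint.h>

#define PTRS_PER_PMD 512	/* with PAE, each pmd entry covers 2MB */

/* toy page-middle-directory: 0 means "no page table here" */
static uint64_t pmd[PTRS_PER_PMD];

static int move_whole_pmd(unsigned int old_idx, unsigned int new_idx)
{
	if (pmd[new_idx]) {
		/* analogous to what the mm/mremap.c:211 warning reports */
		fprintf(stderr, "new pmd unexpectedly populated (val: %llx)\n",
			(unsigned long long)pmd[new_idx]);
		return -1;
	}
	pmd[new_idx] = pmd[old_idx];	/* move the whole page-table page */
	pmd[old_idx] = 0;
	/* the real kernel would also flush the TLB for the old range here */
	return 0;
}

int main(void)
{
	pmd[511] = 0x7d530067;	/* the old stack's page table */
	pmd[509] = 0x11111067;	/* something already at the destination */

	return move_whole_pmd(511, 509) ? 1 : 0;
}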

There shouldn't be anything in the new address space from bfa00000-bfdfffff.

That PMD value obviously says differently, but it looks like a nice
normal PMD value, nothing bad there.
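
In fact, decoding it by hand (assuming the usual x86 low flag bits;
this little userspace helper is just for illustration) shows it's a
perfectly ordinary pmd pointing at a page-table page:

#include <stdio.h>

int main(void)
{
	unsigned long val = 0x7d530067;

	/* low 12 bits are the flag bits, the rest is the pfn of the pte page */
	printf("page-table pfn: %lx\n", val >> 12);
	printf("present=%lu rw=%lu user=%lu accessed=%lu dirty=%lu large=%lu\n",
	       val & 1, (val >> 1) & 1, (val >> 2) & 1,
	       (val >> 5) & 1, (val >> 6) & 1, (val >> 7) & 1);
	return 0;
}

i.e. pfn 0x7d530 with present/rw/user/accessed/dirty set and the
large-page bit clear, which is exactly what a populated user
page-table pmd looks like.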

I'm starting to think that the issue might be that the stack segment
is special. Not only does it have the growsdown flag, but that whole
thing has the magic guard-page logic.

So I wonder if we have installed a guard page _just_ below the old
stack, so that we have populated that pmd because of that.

We used to have an _actual_ guard page and then play nasty games with
the vm_start logic. We've gotten rid of that, though, and now we have
that "stack_guard_gap" logic that _should_ mean that vm_start is
always exact and proper (and that free_pgtables() should have emptied
it, but maybe we have some case we forgot about).

> [ 741.511684] WARNING: CPU: 1 PID: 15173 at mm/mremap.c:211 move_page_tables.cold+0x0/0x2b
> [ 741.598159] Call Trace:
> [ 741.600694] setup_arg_pages+0x22b/0x310
> [ 741.621687] load_elf_binary+0x31e/0x10f0
> [ 741.633839] __do_execve_file+0x5a8/0xbf0
> [ 741.637893] __ia32_sys_execve+0x2a/0x40
> [ 741.641875] do_syscall_32_irqs_on+0x3d/0x2c0
> [ 741.657660] do_fast_syscall_32+0x60/0xf0
> [ 741.661691] do_SYSENTER_32+0x15/0x20
> [ 741.665373] entry_SYSENTER_32+0x9f/0xf2
> [ 741.734151] old: bfe00000-c0000000 new: bfa00000 (val: 7d530067)

Nothing looks bad, and the ELF loading phase memory map should be
really quite simple.

The only half-way unusual thing is that you have basically exactly 2MB
of stack at execve time (easy enough to tune by just setting argv/env
right), and it's moved down by exactly 2MB.

And that latter thing is just due to randomization, see
arch_align_stack() in arch/x86/kernel/process.c.
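
For reference, that function is basically just a small random downward
shift of the starting stack pointer before it gets aligned, something
like the following (quoting roughly from memory, so check the actual
tree):

unsigned long arch_align_stack(unsigned long sp)
{
	if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
		sp -= get_random_int() % 8192;
	return sp & ~0xf;
}

so where exactly the argument pages end up varies from execve to
execve.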

So that would explain why it doesn't happen every time.

What happens if you apply the attached patch to *always* force the 2MB
shift (rather than moving the stack by a random amount), and then run
the other program (t.c -> compiled to "a.out")?

The comment should be obvious. But it's untested, and I might have
gotten the math wrong; I don't run in a 32-bit environment.

Linus
[attachment not shown: unhandled content-type application/octet-stream]

t.c:

#define _GNU_SOURCE
#include <unistd.h>

static char one_kb[1024] = {
	[0 ... 1022] = 'a',
	0
};

/*
* Each string is 1kB, so we would need 2048 strings to fill a 2MB stack.
*
* But we have the string pointers themselves: 4 bytes per string, so
* that would be an additional 8kB on top of the 2MB of strings. Plus
* we have the two NULL terminators (8 bytes) for argv/envp.
*
* And then we have the ELF AUX fields, which is a few hundred bytes too.
*
* And then we need the call stack frame etc, and only need to come within
* 4kB of the 2MB stack target.
*
* So instead of using 2048 strings to fill up 2MB exactly, we want to fill up
* basically 2MB-12kB, and let the AUX info etc go into the last page.
*
* So 2036 1kB strings, plus noise.
*/

static char *argv[] = {
	[0] = "/bin/echo",
	[1 ... 2036] = one_kb,
	NULL
};

static char *envp[] = {
	NULL
};

int main(int argc, char **argv_in)
{
	/*
	 * Don't do this recursively, and sleep so people can look at
	 * /proc/<pid>/maps
	 */
	if (argc > 1000) {
		sleep(100);
		return 0;
	}

	/*
	 * Note: main's own argv is unused; we re-exec as "./a.out" with
	 * the huge file-scope argv and the empty file-scope envp above.
	 */
	return execvpe("./a.out", argv, envp);
}
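
One way to actually run this, assuming a 32-bit build environment: the
binary has to end up as "./a.out" in the current directory (that's the
name it re-execs), so something like "gcc -m32 -o a.out t.c && ./a.out",
and then look at /proc/<pid>/maps while the re-executed copy sleeps.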