    From: Christian Brauner
    Date: Fri, 15 Mar 2019
    Subject: Re: [RFC] simple_lmk: Introduce Simple Low Memory Killer for Android
    On Fri, Mar 15, 2019 at 02:13:24PM -0400, Joel Fernandes wrote:
    > On Fri, Mar 15, 2019 at 07:03:07PM +0100, Christian Brauner wrote:
    > > On Thu, Mar 14, 2019 at 09:36:43PM -0700, Daniel Colascione wrote:
    > > > On Thu, Mar 14, 2019 at 8:16 PM Steven Rostedt <rostedt@goodmis.org> wrote:
    > > > >
    > > > > On Thu, 14 Mar 2019 13:49:11 -0700
    > > > > Sultan Alsawaf <sultan@kerneltoast.com> wrote:
    > > > >
    > > > > > Perhaps I'm missing something, but if you want to know when a process has died
    > > > > > after sending a SIGKILL to it, then why not just make the SIGKILL optionally
    > > > > > block until the process has died completely? It'd be rather trivial to just
    > > > > > store a pointer to an on-stack completion inside the victim process' task_struct,
    > > > > > and then complete it in free_task().
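
    (Purely to illustrate the idea described above, a rough sketch; the
    death_notify field name is made up here, and the locking needed to publish
    the pointer safely against a concurrent exit is omitted.)

        /* Hypothetical field added to struct task_struct: */
        struct completion *death_notify;   /* NULL unless a killer is waiting */

        /* Killing side: */
        DECLARE_COMPLETION_ONSTACK(done);

        victim->death_notify = &done;      /* would need locking in real code */
        send_sig(SIGKILL, victim, 0);
        wait_for_completion(&done);        /* returns once free_task() has run */

        /* In free_task(), just before the task_struct is freed: */
        if (tsk->death_notify)
                complete(tsk->death_notify);
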
    > > > >
    > > > > How would you implement such a method in userspace? kill() doesn't take
    > > > > any parameters but the pid of the process you want to send a signal to,
    > > > > and the signal to send. This would require a new system call, and be
    > > > > quite a bit of work.
    > > >
    > > > That's what the pidfd work is for. Please read the original threads
    > > > about the motivation and design of that facility.
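
    (For context, a minimal userspace sketch of the pidfd_send_signal() flow
    from that series, assuming the pidfd is an fd on the /proc/<pid> directory
    and using the raw x86-64 syscall number 424 since glibc has no wrapper yet;
    pid 1234 is only an example.)

        #include <fcntl.h>
        #include <signal.h>
        #include <stdio.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        #ifndef __NR_pidfd_send_signal
        #define __NR_pidfd_send_signal 424   /* x86-64 */
        #endif

        int main(void)
        {
                /* A pidfd is currently just an fd referring to /proc/<pid>. */
                int pidfd = open("/proc/1234", O_DIRECTORY | O_CLOEXEC);
                if (pidfd < 0) {
                        perror("open");
                        return 1;
                }

                /* Send SIGKILL through the pidfd; NULL siginfo, no flags. */
                if (syscall(__NR_pidfd_send_signal, pidfd, SIGKILL, NULL, 0) < 0) {
                        perror("pidfd_send_signal");
                        return 1;
                }

                close(pidfd);
                return 0;
        }
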
    > > >
    > > > > If you can solve this with an ebpf program, I
    > > > > strongly suggest you do that instead.
    > > >
    > > > Regarding process death notification: I will absolutely not support
    > > > putting eBPF and perf trace events on the critical path of core system
    > > > memory management functionality. Tracing and monitoring facilities are
    > > > great for learning about the system, but they were never intended to
    > > > be load-bearing. The proposed eBPF process-monitoring approach is just
    > > > a variant of the netlink proposal we discussed previously on the pidfd
    > > > threads; it has all of its drawbacks. We really need a core system
    > > > call --- really, we've needed robust process management since the
    > > > creation of unix --- and I'm glad that we're finally getting it.
    > > > Adding new system calls is not expensive; going to great lengths to
    > > > avoid adding one is like calling a helicopter to avoid crossing the
    > > > street. I don't think we should present an abuse of the debugging and
    > > > performance monitoring infrastructure as an alternative to a robust
    > > > and desperately-needed bit of core functionality that's neither hard
    > > > to add nor complex to implement nor expensive to use.
    > > >
    > > > Regarding the proposal for a new kernel-side lmkd: when possible, the
    > > > kernel should provide mechanism, not policy. Putting the low memory
    > > > killer back into the kernel, after we've spent significant effort
    > > > making it possible for userspace to do that job, would be a step
    > > > backwards. Compared to kernel code, userspace code is more easily
    > > > understood, more easily debugged, more easily updated, and much
    > > > safer. If we *can* move something out of the kernel,
    > > > we should. This patch moves us in exactly the wrong direction. Yes, we
    > > > need *something* that sits synchronously astride the page allocation
    > > > path and does *something* to stop a busy beaver allocator that eats
    > > > all the available memory before lmkd, even mlocked and realtime, can
    > > > respond. The OOM killer is adequate for this very rare case.
    > > >
    > > > With respect to kill timing: Tim is right about the need for two
    > > > levels of policy: first, a high-level process prioritization and
    > > > memory-demand balancing scheme (which is what OOM score adjustment
    > > > code in ActivityManager amounts to); and second, a low-level
    > > > process-killing methodology that maximizes sustainable memory reclaim
    > > > and minimizes unwanted side effects while killing those processes that
    > > > should be dead. Both of these policies belong in userspace --- because
    > > > they *can* be in userspace --- and userspace needs only a few tools,
    > > > most of which already exist, to do a perfectly adequate job.
    > > >
    > > > We do want killed processes to die promptly. That's why I support
    > > > boosting a process's priority somehow when lmkd is about to kill it.
    > > > The precise way in which we do that --- involving not only actual
    > > > priority, but scheduler knobs, cgroup assignment, core affinity, and
    > > > so on --- is a complex topic best left to userspace. lmkd already has
    > > > all the knobs it needs to implement whatever priority boosting policy
    > > > it wants.
    > > >
    > > > Hell, once we add a pidfd_wait --- which I plan to work on, assuming
    > > > nobody beats me to it, after pidfd_send_signal lands --- you can
    > >
    > > Daniel,
    > >
    > > I've just been talking to Joel.
    > > I actually "expected" you to work on pidfd_wait() after prior
    > > conversations we had on the pidfd_send_signal() patchsets. :) That's why
    > > I got a separate git tree on kernel.org since I expect a lot more work
    > > to come. I hope that Linus still decides to pull pidfd_send_signal()
    > > before Sunday (for those who missed the link in a prior response of mine:
    > > https://lkml.org/lkml/2019/3/12/439).
    > >
    > > This is the first merge window in which I've sent this PR.
    > >
    > > The pidfd tree has a for-next branch that Stephen has been tracking in
    > > linux-next since the 5.0 merge window. The patches for pidfd_send_signal()
    > > sit in the pidfd branch.
    > > I'd be happy to share the tree with you and Joel (we can rename it if you
    > > prefer; I don't care).
    > > I would really like to centralize this work so that we sort of have a
    > > "united front" and end up with a coherent api and can send PRs from a
    > > centralized place:
    > > https://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git/
    >
    > I am totally on board with working together / reviewing this work with you all
    > on a common tree somewhere (Christian's pidfd tree is fine). I was curious,

    Excellent.

    > why do we want to add a new syscall (pidfd_wait), though? Why not just use
    > the standard poll/epoll interface on the proc fd, as Daniel was suggesting?
    > AFAIK, once the proc file is opened, the struct pid is essentially pinned
    > even though the pid number may be reused. Then the caller can just poll.
    > We can add a waitqueue to struct pid and wake up any waiters on process
    > death (a quick look shows task_struct can be mapped to its struct pid), and
    > also possibly optimize it using Steve's TIF flag idea. No new syscall is
    > needed then; let me know if I missed something.
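
    (To make that concrete, a rough kernel-side sketch of the idea; the
    wait_pidfd field name and the poll handler are assumptions rather than
    existing code, and the wakeup would have to be wired into the exit path.)

        /* Hypothetical field added to struct pid: */
        wait_queue_head_t wait_pidfd;   /* woken when the task dies */

        /* In the exit path, once the task is dead: */
        wake_up_all(&task_pid(tsk)->wait_pidfd);

        /* poll() handler for the /proc/<pid> fd: signals readable/hangup once
         * the thread group leader can no longer be found, i.e. has exited. */
        static __poll_t proc_tgid_base_poll(struct file *file, poll_table *pts)
        {
                struct pid *pid = proc_pid(file_inode(file));

                poll_wait(file, &pid->wait_pidfd, pts);
                return pid_task(pid, PIDTYPE_TGID) ? 0 : EPOLLIN | EPOLLHUP;
        }
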

    Huh, I thought that Daniel was against the poll/epoll solution?
    I have no clear opinion on what is better at the moment, since I have
    been mostly concerned with getting pidfd_send_signal() into shape and
    was reluctant to put more ideas/work into this in case it got shut down.
    Once we have pidfd_send_signal(), the wait discussion makes sense.

    Thanks!
    Christian
