    Subject: Re: [PATCH v6 00/15] memory-hotplug: hot-remove physical memory
    On 01/10/2013 11:31 AM, Kamezawa Hiroyuki wrote:
    > (2013/01/10 16:14), Glauber Costa wrote:
    >> On 01/10/2013 06:17 AM, Tang Chen wrote:
    >>>>> Note: if the memory provided by the memory device is used by the
    >>>>> kernel, it can't be offlined. It is not a bug.
    >>>>
    >>>> Right. But how often does this happen in testing? In other words,
    >>>> please provide an overall description of how well memory hot-remove is
    >>>> presently operating. Is it reliable? What is the success rate in
    >>>> real-world situations?
    >>>
    >>> We test the hot-remove functionality mostly with movable_online in use,
    >>> and memory used by the kernel is not allowed to be removed.
    >>
    >> Can you try doing this using cpusets configured to hardwall?
    >> It is my understanding that the object allocators will try hard not to
    >> allocate anything outside the walls defined by the cpuset, which means
    >> that if you have one process per node, and they are hardwalled, your
    >> kernel memory will be spread evenly across the machine. With a big
    >> enough load, they should eventually be present in all blocks.
    >>
    >
    > I'm sorry, I couldn't catch your point.
    > Do you want to confirm whether cpuset can work well enough instead of
    > ZONE_MOVABLE?
    > Or do you want to confirm whether ZONE_MOVABLE will not work if it is
    > used with cpuset?
    >
    >
    No, I am not proposing to use cpuset to tackle the problem. I am just
    wondering if you would still have high success rates with hardwalled
    cpusets in use. This is just one example of a workload that would spread
    kernel memory around quite heavily.

    So this is just me trying to understand the limitations of the mechanism.
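
    To make the scenario concrete, here is a minimal, hypothetical sketch of
    the kind of setup meant here: one hardwalled cpuset bound to a single
    node, with the workload moved into it. The paths and CPU/node numbers are
    only examples, and it assumes the legacy cpuset filesystem is mounted at
    /dev/cpuset (with a cgroup mount the control files carry a "cpuset."
    prefix instead).

    /*
     * Sketch: confine the current task to a hardwalled cpuset on node 0,
     * so its kernel allocations cannot spill onto other nodes.  Repeat
     * with one cpuset (and one process) per node to spread kernel memory
     * across the whole machine.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Write a short string to a cpuset control file, aborting on error. */
    static void write_str(const char *path, const char *val)
    {
            FILE *f = fopen(path, "w");

            if (!f || fputs(val, f) == EOF || fclose(f) == EOF) {
                    perror(path);
                    exit(1);
            }
    }

    int main(void)
    {
            char pid[32];

            /* One cpuset per node; here node 0 with CPUs 0-3 as an example. */
            mkdir("/dev/cpuset/node0", 0755);
            write_str("/dev/cpuset/node0/cpus", "0-3");
            write_str("/dev/cpuset/node0/mems", "0");
            /* Hardwall: kernel allocations may not escape to other nodes. */
            write_str("/dev/cpuset/node0/mem_hardwall", "1");

            /* Move ourselves into the cpuset, then start the workload. */
            snprintf(pid, sizeof(pid), "%d", getpid());
            write_str("/dev/cpuset/node0/tasks", pid);

            /* ... run the memory-hungry workload from here ... */
            return 0;
    }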

    >> Another question I have for you: have you considered calling
    >> shrink_slab to try to deplete the caches and therefore free at least
    >> slab memory in the nodes that can't be offlined? Is it relevant?
    >>
    >
    > At this stage, we do not plan to call shrink_slab(). We require
    > nearly 100% success at offlining memory in order to remove a DIMM.
    > That is my understanding.
    >
    Of course, this is indisputable.

    > IMHO, I don't think shrink_slab() can kill all objects in a node even
    > if some of them are caches. We need more study before doing that.
    >

    Indeed, shrink_slab can only kill cached objects. They are, however,
    usually a very big part of kernel memory. I wonder, though, whether in
    case of failure it is worth trying at least one shrink pass before you
    give up.

    It is not very different from what is done in memory-failure.c, except
    that we could do better and do more targeted shrinking (support for that
    is being worked on).
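
    As a rough, hypothetical userspace approximation of that idea (the
    suggestion above is really about calling shrink_slab() from the kernel's
    offline path itself; the memory block number below is only an example):

    /*
     * Sketch: if offlining a memory block fails, drop the reclaimable
     * slab caches (dentries and inodes) once and retry, instead of
     * giving up on the first attempt.
     */
    #include <stdio.h>

    /* Write a short string to a sysfs/procfs file; returns 0 on success. */
    static int write_str(const char *path, const char *val)
    {
            FILE *f = fopen(path, "w");

            if (!f)
                    return -1;
            if (fputs(val, f) == EOF) {
                    fclose(f);
                    return -1;
            }
            return fclose(f);   /* sysfs often reports the error at close */
    }

    int main(void)
    {
            const char *state = "/sys/devices/system/memory/memory32/state";

            if (write_str(state, "offline") == 0)
                    return 0;       /* offlined on the first try */

            /* "2" asks the kernel to reclaim dentries and inodes (slab). */
            if (write_str("/proc/sys/vm/drop_caches", "2"))
                    perror("drop_caches");

            /* One more attempt after the shrink pass. */
            if (write_str(state, "offline")) {
                    perror(state);
                    return 1;
            }
            return 0;
    }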



