Subject: Re: [PATCH 1/2] perf/core: Share an event with multiple cgroups
Hi,

I would like to re-emphasize why this patch is important. As Namhyung
outlined in his cover message, cgroup monitoring builds on top of
per-cpu monitoring and offers maximum flexibility by allowing each
event to be attached to a single cgroup. Although this was fine when
machines were much smaller and the number of simultaneous cgroups was
also small, it does not work anymore with today's machines, and even
less with future ones. Over the last couple of years, we have tried
to make cgroup monitoring more scalable. Ian Rogers' patch series
addressed the RB-tree handling of events to avoid walking the whole
tree to find the events of the cgroup being scheduled in. This helped
reduce some of the overhead we are seeing, which is causing serious
problems for our end users, forcing them to tone down monitoring and
slice collection across cgroups over time, which is far from ideal.

Namhyung's series goes a lot further by addressing two key sources of
overhead:
1- the file descriptor consumption explosion
2- the context switch overhead

Again, this is a major cause of problems for us and needed to be
addressed in a way that maintains backward compatibility. We are
interested in the case where the same events are measured across all
cgroups, and I believe this is a common usage model.

1/ File descriptor issue

With the current interface, if you want to monitor 10 events on a
112-CPU server across 200 cgroups, you need:

num_fds = num_events x num_cpus x num_cgroups
        = 10 x 112 x 200 = 224,000 descriptors

A usual Linux distribution allows around 1024 open file descriptors
per process. Although you could raise the limit as root, doing so has
other impacts on the system: the kernel memory footprint needed to
back these file descriptors and their struct perf_event instances is
large (see our presentation at LPC2019).
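
To make the scaling concrete, here is a minimal sketch of what a tool
has to do today with the per-cgroup interface (error handling and
event attribute setup elided; the array sizes match the example
above):

    /* One event per (event, cpu, cgroup) triple: in cgroup mode,
     * perf_event_open() takes the cgroup fd in the pid argument
     * together with PERF_FLAG_PID_CGROUP.
     */
    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define NUM_EVENTS  10
    #define NUM_CPUS    112
    #define NUM_CGROUPS 200

    static int fds[NUM_EVENTS][NUM_CPUS][NUM_CGROUPS];

    static void open_all(struct perf_event_attr *attrs, int *cgroup_fds)
    {
            for (int e = 0; e < NUM_EVENTS; e++)
                    for (int c = 0; c < NUM_CPUS; c++)
                            for (int g = 0; g < NUM_CGROUPS; g++)
                                    fds[e][c][g] = syscall(__NR_perf_event_open,
                                                    &attrs[e], cgroup_fds[g],
                                                    c, -1, PERF_FLAG_PID_CGROUP);
    }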

2/ Context switch overhead

Each time you have a cgroup switch, i.e., a context switch where you
switch cgroups, then you incur a PMU event reschedule. A cgroup sched
in
is more expensive than a per-process sched in because you have to find
the events which are relevant to the next cgroup, i.e., you may have
to
walk more entries in the RB-tree. If the events are identical across
cgroups, you may end up paying that cost to reinstall the same events
(ignoring
multiplexing).
Furthermore, event scheduling is an expensive operation because of
memory access and PMU register accesses. It is always best, if it can
be avoided.
From our experience, that has caused significant overhead in our
systems to the point where we have to reduce the interval at which we
collect the data
and the number of cgroups we can monitor at once.


3/ Namhyung's solution

I personally like Namhyung's solution to the problem because it fits
within the interface, does not break existing per-cgroup mode. The
implementation is fairly
simple and non-invasive. It provides a very significant reduction of
overhead on BOTH the file descriptor pressure and context switch
overheads. It matches perfectly
with the common usage model of monitoring the same events across
multiple cgroups simultaneously. The patch does not disrupt existing
perf_event_open() or
read()/close() syscalls. Everything is handled via a pair of new ioctl().

It eliminates the file descriptor overhead as follows, using the same
example as before:

Before:
num_fds = num_events x num_cpus x num_cgroups
        = 10 x 112 x 200 = 224,000 descriptors

After:
num_fds = num_events x num_cpus = 10 x 112 = 1,120 descriptors

(a 200x reduction in fds, with the corresponding memory savings in
the kernel)

In other words, it reduces the file descriptor consumption to what is
necessary for plain system-wide monitoring.
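
With the new interface, the same tool opens only per-cpu events and
attaches the whole cgroup list to each of them. A rough sketch of the
usage follows, reusing the definitions from the previous sketch plus
a smaller fds2[NUM_EVENTS][NUM_CPUS] array; the exact argument layout
of the new ioctl is defined by the patch series, so treat the call
below as illustrative only:

    /* One event per (event, cpu); the 200 cgroup fds are attached
     * in a single call. The precise argument format of
     * PERF_EVENT_IOC_ATTACH_CGROUP is defined by the series; this
     * only shows the shape of the calls.
     */
    #include <sys/ioctl.h>

    static int fds2[NUM_EVENTS][NUM_CPUS];

    static void open_shared(struct perf_event_attr *attrs, int *cgroup_fds)
    {
            for (int e = 0; e < NUM_EVENTS; e++) {
                    for (int c = 0; c < NUM_CPUS; c++) {
                            int fd = syscall(__NR_perf_event_open, &attrs[e],
                                             -1, c, -1, 0);

                            /* attach all cgroup fds in one call */
                            ioctl(fd, PERF_EVENT_IOC_ATTACH_CGROUP, cgroup_fds);
                            fds2[e][c] = fd;
                    }
            }
    }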

On a context switch, the kernel computes the event delta and stores
it into a hash table, i.e., a single PMU register access instead of a
full PMU reschedule. The delta is propagated up the proper cgroup
hierarchy if needed.
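
Conceptually, the cgroup-switch path becomes something like the
following. This is a simplified sketch of the mechanism, not the
actual patch code; the helper names are invented for illustration:

    /* Sketch: on a cgroup switch, read the counter once, credit the
     * delta to the outgoing cgroup's node in the hash table, and
     * leave the event programmed in the PMU. Helper names invented.
     */
    static void cgroup_switch_out(struct perf_event *event, u64 old_cgrp_id)
    {
            u64 now = read_pmu_counter(event);  /* one register read */
            u64 delta = now - event->last_count;
            struct cgroup_node *node;

            node = hash_lookup(event->cgrp_hash, old_cgrp_id);
            if (node)
                    node->count += delta;  /* propagated up the
                                              hierarchy if needed */
            event->last_count = now;
    }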

The change is generic and benefits ALL processor architectures in the
same manner.

We have tested the patch on our servers with large configurations,
and it has delivered significant savings and enabled monitoring of
more cgroups simultaneously, instead of monitoring in batches, which
never yielded a consistent view of the system.

Furthermore, the patches could be extended to add, as Song Liu
suggested, the possibility of deleting cgroups attached to an event,
to allow continuous monitoring without having to restart the
monitoring tool. I believe the extension can be further improved by
also allowing a vectored read of the counts, which would eliminate a
significant number of ioctl(READ) syscalls; a hypothetical layout is
sketched below.
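
For instance, a vectored read could return the counts of every
attached cgroup in one call. The structure below is invented purely
for illustration; nothing like it exists in the series yet:

    /* Hypothetical vectored-read layout: one call returns the
     * counts of all attached cgroups, replacing one ioctl(READ)
     * per cgroup. Names and fields invented for illustration.
     */
    struct perf_cgroup_count {
            __u64 cgroup_id;
            __u64 value;
    };

    struct perf_cgroup_read {
            __u64 nr;                          /* in: capacity; out: filled */
            struct perf_cgroup_count counts[]; /* nr entries on return */
    };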

Overall, I think this patch series delivers significant value-add to
the perf_events interface and should be committed ASAP.

Thanks.




On Tue, Mar 30, 2021 at 8:11 AM Namhyung Kim <namhyung@kernel.org> wrote:
>
> On Tue, Mar 30, 2021 at 3:33 PM Song Liu <songliubraving@fb.com> wrote:
> > > On Mar 29, 2021, at 4:33 AM, Namhyung Kim <namhyung@kernel.org> wrote:
> > >
> > > On Mon, Mar 29, 2021 at 2:17 AM Song Liu <songliubraving@fb.com> wrote:
> > >>> On Mar 23, 2021, at 9:21 AM, Namhyung Kim <namhyung@kernel.org> wrote:
> > >>>
> > >>> As we can run many jobs (in container) on a big machine, we want to
> > >>> measure each job's performance during the run. To do that, the
> > >>> perf_event can be associated to a cgroup to measure it only.
> > >>>
> >
> > [...]
> >
> > >>> + return 0;
> > >>> +}
> > >>
> > >> Could you please explain why we need this logic in can_attach?
> > >
> > > IIUC the ss->attach() is called after a task's cgroup membership
> > > is changed. But we want to collect the performance numbers for
> > > the old cgroup just before the change. As the logic merely checks
> > > the current task's cgroup, it should be done in the can_attach()
> > > which is called before the cgroup change.
> >
> > Thanks for the explanations.
> >
> > Overall, I really like the core idea, especially that the overhead on
> > context switch is bounded (by the depth of cgroup tree).
>
> Thanks!
>
> >
> > Is it possible to make PERF_EVENT_IOC_ATTACH_CGROUP more flexible?
> > Specifically, if we can have
> >
> > PERF_EVENT_IOC_ADD_CGROUP add a cgroup to the list
> > PERF_EVENT_IOC_DEL_CGROUP delete a cgroup from the list
> >
> > we can probably share these events among multiple processes, and
> > these processes don't need to know others' cgroup list. I think
> > this will be useful for users to build customized monitoring in
> > their own containers.
> >
> > Does this make sense?
>
> Maybe we can add ADD/DEL interface for more flexible monitoring
> but I'm not sure which use cases it'll be used actually.
>
> For your multi-process sharing case, the original events' file
> descriptors should be shared first. Also adding and deleting
> (or just reading) arbitrary cgroups from a container can be a
> security concern IMHO.
>
> So I just focused on the single-process multi-cgroup case which is
> already used (perf stat --for-each-cgroup) and very important in my
> company's setup. In this case we have a list of interested cgroups
> from the beginning so it's more efficient to create a properly sized
> hash table and all the nodes at once.
>
> Thanks,
> Namhyung
