    Subject: Re: [PATCH v2 4/6] perf cpumap: Fix alignment for masks in event encoding
    On Tue, Jun 14, 2022 at 3:44 PM Namhyung Kim <namhyung@kernel.org> wrote:
    >
    > Hi Ian,
    >
    > On Tue, Jun 14, 2022 at 7:34 AM Ian Rogers <irogers@google.com> wrote:
    > >
    > > A mask encoding of a cpu map is laid out as:
    > > u16 nr
    > > u16 long_size
    > > unsigned long mask[];
    > > However, the mask may be 8-byte aligned meaning there is a 4-byte pad
    > > after long_size. This means 32-bit and 64-bit builds see the mask as
    > > being at different offsets. On top of this the structure is in the byte
    > > data[] encoded as:
    > > u16 type
    > > char data[]
    > > This means the mask's struct doesn't get the required 4-byte or 8-byte
    > > alignment, but is offset by 2. Consequently the long-sized reads and
    > > writes cause undefined behavior as the alignment is broken.
    > >
    > > Fix the mask struct by creating explicit 32-bit and 64-bit variants and
    > > using a union to avoid data[] and casts; the struct must be packed so
    > > the layout matches the existing perf.data layout. Taking the address of
    > > a member of a packed struct breaks alignment, so pass the packed
    > > perf_record_cpu_map_data to functions so they can access members with
    > > the right alignment.
    > >
    > > As the 64-bit version has 4 bytes of padding, writing is optimized to
    > > only write the 32-bit version.
    > >
    > > Signed-off-by: Ian Rogers <irogers@google.com>
    > > ---
    > > tools/lib/perf/include/perf/event.h | 36 +++++++++++--
    > > tools/perf/tests/cpumap.c           | 19 ++++---
    > > tools/perf/util/cpumap.c            | 80 +++++++++++++++++++++++------
    > > tools/perf/util/cpumap.h            |  4 +-
    > > tools/perf/util/session.c           | 30 +++++------
    > > tools/perf/util/synthetic-events.c  | 34 +++++++-----
    > > 6 files changed, 143 insertions(+), 60 deletions(-)
    > >
    > > diff --git a/tools/lib/perf/include/perf/event.h b/tools/lib/perf/include/perf/event.h
    > > index e7758707cadd..d2d32589758a 100644
    > > --- a/tools/lib/perf/include/perf/event.h
    > > +++ b/tools/lib/perf/include/perf/event.h
    > > @@ -6,6 +6,7 @@
    > > #include <linux/types.h>
    > > #include <linux/limits.h>
    > > #include <linux/bpf.h>
    > > +#include <linux/compiler.h>
    > > #include <sys/types.h> /* pid_t */
    > >
    > > #define event_contains(obj, mem) ((obj).header.size > offsetof(typeof(obj), mem))
    > > @@ -153,20 +154,47 @@ enum {
    > > PERF_CPU_MAP__MASK = 1,
    > > };
    > >
    > > +/*
    > > + * Array encoding of a perf_cpu_map where nr is the number of entries in cpu[]
    > > + * and each entry is a value for a CPU in the map.
    > > + */
    > > struct cpu_map_entries {
    > > __u16 nr;
    > > __u16 cpu[];
    > > };
    > >
    > > -struct perf_record_record_cpu_map {
    > > +/* Bitmap encoding of a perf_cpu_map where bitmap entries are 32-bit. */
    > > +struct perf_record_mask_cpu_map32 {
    > > + /* Number of mask values. */
    > > __u16 nr;
    > > + /* Constant 4. */
    > > __u16 long_size;
    > > - unsigned long mask[];
    > > + /* Bitmap data. */
    > > + __u32 mask[];
    > > };
    > >
    > > -struct perf_record_cpu_map_data {
    > > +/* Bitmap encoding of a perf_cpu_map where bitmap entries are 64-bit. */
    > > +struct perf_record_mask_cpu_map64 {
    > > + /* Number of mask values. */
    > > + __u16 nr;
    > > + /* Constant 8. */
    > > + __u16 long_size;
    > > + /* Legacy padding. */
    > > + char __pad[4];
    > > + /* Bitmap data. */
    > > + __u64 mask[];
    > > +};
    > > +
    > > +struct __packed perf_record_cpu_map_data {
    > > __u16 type;
    > > - char data[];
    > > + union {
    > > + /* Used when type == PERF_CPU_MAP__CPUS. */
    > > + struct cpu_map_entries cpus_data;
    > > + /* Used when type == PERF_CPU_MAP__MASK and long_size == 4. */
    > > + struct perf_record_mask_cpu_map32 mask32_data;
    > > + /* Used when type == PERF_CPU_MAP__MASK and long_size == 8. */
    > > + struct perf_record_mask_cpu_map64 mask64_data;
    > > + };
    > > };
    >
    > How about moving the 'type' to the union as well?
    > This way we don't need to pack the entire struct
    > and can have a common struct for 32-bit and 64-bit.
    >
    > struct cpu_map_entries {
    > __u16 type;
    > __u16 nr;
    > __u16 cpu[];
    > };
    >
    > struct perf_record_mask_cpu_map {
    > __u16 type;
    > __u16 nr;
    > __u16 long_size; // still needed?
    > __u16 pad;
    > unsigned long mask[];
    > };
    >
    > // changed it to union
    > union perf_record_cpu_map_data {
    > __u16 type;
    > struct cpu_map_entries cpus_data;
    > struct perf_record_mask_cpu_map mask_data;
    > };



    Thanks Namhyung,

    Unfortunately this doesn't quite work, as I want the existing cpu map
    encodings to keep working with this change - i.e., an old perf.data
    file should be readable by a newer perf with this change (the range
    encoding will require that new perf.data files be read by versions of
    perf with these changes). For this to work the layout needs to match
    the existing unaligned code: I either need to make the mask a byte
    array and memcpy, or use an attribute like packed. Fwiw, this is a
    little more efficient than the layout above, as with long_size == 4
    the pad isn't necessary, saving 2 bytes. I think with the packed
    approach we can also add new unpacked variants like the ones above,
    although I'd be keen not to use a type that varies in size like long.
    I guess at some future date we could remove the legacy-supporting
    packed versions so that packing or byte copying is unnecessary.

    I could use a union as you show above; unfortunately, that would need
    'struct perf_record_mask_cpu_map32' and 'struct
    perf_record_mask_cpu_map64' to be packed or to use bytes. We'd lose
    one use of packed just to introduce two others. It is also potentially
    more of a breaking change for users of this code via libperf.
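
    For example, a hypothetical 64-bit mask struct that carried 'type'
    itself (the name mask_cpu_map64_with_type below is made up for
    illustration) would still have to be packed for the mask words to
    land where legacy 64-bit writers put them:

    #include <stddef.h>
    #include <linux/types.h>

    /* A legacy 64-bit writer emitted u16 type, u16 nr, u16 long_size and
     * 4 pad bytes before the first u64 mask word, i.e. the mask starts at
     * offset 10. Without the packed attribute the compiler must place a
     * __u64 member at an 8-byte-aligned offset, so it could never sit at
     * offset 10. */
    struct __attribute__((__packed__)) mask_cpu_map64_with_type {
            __u16 type;
            __u16 nr;
            __u16 long_size;   /* constant 8 */
            char  __pad[4];    /* padding legacy 64-bit writers emitted */
            __u64 mask[];
    };

    _Static_assert(offsetof(struct mask_cpu_map64_with_type, mask) == 10,
                   "only matches the legacy byte stream when packed");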

    These changes are something of a bug report along with fixes. If there
    is a consensus that the right way to fix the bug is to break legacy
    perf.data files, then I'm happy to update the code accordingly (as you
    show above).

    Thanks,
    Ian

    > Thanks,
    > Namhyung
