 
From: Steven Rostedt (Google) <rostedt@goodmis.org>
Date: 14 Nov 2022
Subject: Re: [PATCH v2 1/7] mm: vmalloc: Add alloc_vmap_area trace event
On Tue, 18 Oct 2022 20:10:47 +0200
"Uladzislau Rezki (Sony)" <urezki@gmail.com> wrote:

> It is for debugging purposes and for validation of the passed parameters.
>
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> ---
> include/trace/events/vmalloc.h | 56 ++++++++++++++++++++++++++++++++++
> 1 file changed, 56 insertions(+)
> create mode 100644 include/trace/events/vmalloc.h
>
> diff --git a/include/trace/events/vmalloc.h b/include/trace/events/vmalloc.h
> new file mode 100644
> index 000000000000..39fbd77c91e7
> --- /dev/null
> +++ b/include/trace/events/vmalloc.h
> @@ -0,0 +1,56 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#undef TRACE_SYSTEM
> +#define TRACE_SYSTEM vmalloc
> +
> +#if !defined(_TRACE_VMALLOC_H) || defined(TRACE_HEADER_MULTI_READ)
> +#define _TRACE_VMALLOC_H
> +
> +#include <linux/tracepoint.h>
> +
> +/**
> + * alloc_vmap_area - called when a new vmap allocation occurs
> + * @addr: an allocated address
> + * @size: a requested size
> + * @align: a requested alignment
> + * @vstart: a requested start range
> + * @vend: a requested end range
> + * @failed: an allocation failed or not
> + *
> + * This event is used for debugging purposes. It can give extra
> + * information to a developer about how often it occurs and which
> + * parameters are passed, for further validation.
> + */
> +TRACE_EVENT(alloc_vmap_area,
> +
> + TP_PROTO(unsigned long addr, unsigned long size, unsigned long align,
> + unsigned long vstart, unsigned long vend, int failed),
> +
> + TP_ARGS(addr, size, align, vstart, vend, failed),

The above is passed in via (from patch 4):


@@ -1621,6 +1624,8 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
size, align, vstart, vend);
spin_unlock(&free_vmap_area_lock);

+ trace_alloc_vmap_area(addr, size, align, vstart, vend, addr == vend);
+
/*
* If an allocation fails, the "vend" address is
* returned. Therefore trigger the overflow path.

> +
> + TP_STRUCT__entry(
> + __field(unsigned long, addr)
> + __field(unsigned long, size)
> + __field(unsigned long, align)
> + __field(unsigned long, vstart)
> + __field(unsigned long, vend)

> + __field(int, failed)

I would drop the failed field...

> + ),
> +
> + TP_fast_assign(
> + __entry->addr = addr;
> + __entry->size = size;
> + __entry->align = align;
> + __entry->vstart = vstart;
> + __entry->vend = vend;

And instead have:

__entry->failed = addr == vend;

Why pass in a parameter that can be calculated in the trace event logic?
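
That is, something like this (untested, just to illustrate the idea --
the "failed" parameter drops out of the prototype and is derived in the
assignment instead):

	TP_PROTO(unsigned long addr, unsigned long size, unsigned long align,
		 unsigned long vstart, unsigned long vend),

	TP_ARGS(addr, size, align, vstart, vend),

	[..]

	TP_fast_assign(
		__entry->addr = addr;
		__entry->size = size;
		__entry->align = align;
		__entry->vstart = vstart;
		__entry->vend = vend;
		/* derive the failure flag from the returned address */
		__entry->failed = addr == vend;
	),

and the caller in alloc_vmap_area() then simply becomes:

	trace_alloc_vmap_area(addr, size, align, vstart, vend);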

Other than that, from a tracing perspective:

Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>

for the series.

-- Steve


> + __entry->failed = failed;
> + ),
> +
> + TP_printk("va_start: %lu size=%lu align=%lu vstart=0x%lx vend=0x%lx failed=%d",
> + __entry->addr, __entry->size, __entry->align,
> + __entry->vstart, __entry->vend, __entry->failed)
> +);
> +
> +#endif /* _TRACE_VMALLOC_H */
> +
> +/* This part must be outside protection */
> +#include <trace/define_trace.h>
