Subject: [PATCH 05/17] x86, mpx: trace ranged MPX operations

From: Dave Hansen <dave.hansen@linux.intel.com>

trace when MPX is zapping pages:

When MPX cannot free an entire bounds table, it will instead try to
zap the unused parts of the table in order to free their backing
memory. This decreases RSS (resident set size) without decreasing
the virtual address space allocated for bounds tables.
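
For reference, the new event fires immediately after the partial zap
and records exactly the sub-range that was zapped. Condensed from the
zap_bt_entries() hunk below (a fragment, not the complete function):

	/* zap only the part of the bounds table backed by this VMA */
	len = min(vma->vm_end, end) - addr;
	zap_page_range(vma, addr, len, NULL);
	trace_mpx_unmap_zap(addr, addr+len);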

trace attempts to find bounds tables:

This event traces any time we go looking to unmap a bounds table
for a given virtual address range. This is useful for comparing how
often the kernel actually "tried" to free a bounds table with how
often it succeeded.

It might try but fail if it finds that a table is shared with an
adjacent VMA which is not being unmapped.
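
To compare the two at runtime, both events can be enabled and read
through tracefs. A minimal userspace sketch (this assumes tracefs is
mounted at /sys/kernel/debug/tracing and that the header's
TRACE_SYSTEM is "mpx"; neither is shown in this patch):

#include <stdio.h>

/* Write "1" to a tracefs event 'enable' file (path is an assumption). */
static int enable_event(const char *event)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/kernel/debug/tracing/events/mpx/%s/enable", event);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fputs("1\n", f);
	fclose(f);
	return 0;
}

int main(void)
{
	char line[512];
	FILE *trace;

	/* "tried": every search for tables covering an unmapped range */
	enable_event("mpx_unmap_search");
	/* "succeeded": every sub-range actually zapped out of a table */
	enable_event("mpx_unmap_zap");

	/* Each record's payload is the range, formatted as "[0x%p:0x%p]". */
	trace = fopen("/sys/kernel/debug/tracing/trace_pipe", "r");
	if (!trace)
		return 1;
	while (fgets(line, sizeof(line), trace))
		fputs(line, stdout);
	fclose(trace);
	return 0;
}

Counting the two event names in the output gives the tried-versus-
succeeded comparison described above.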

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
---

 b/arch/x86/include/asm/trace/mpx.h |   32 ++++++++++++++++++++++++++++++++
 b/arch/x86/mm/mpx.c                |    2 ++
 2 files changed, 34 insertions(+)

diff -puN arch/x86/include/asm/trace/mpx.h~mpx-trace_unmap_zap arch/x86/include/asm/trace/mpx.h
--- a/arch/x86/include/asm/trace/mpx.h~mpx-trace_unmap_zap 2015-04-22 11:16:19.458876241 -0700
+++ b/arch/x86/include/asm/trace/mpx.h 2015-04-22 11:16:19.462876421 -0700
@@ -53,6 +53,38 @@ TRACE_EVENT(bounds_exception_mpx,
 		  __entry->bndstatus)
 );
 
+DECLARE_EVENT_CLASS(mpx_range_trace,
+
+	TP_PROTO(unsigned long start,
+		 unsigned long end),
+	TP_ARGS(start, end),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, start)
+		__field(unsigned long, end)
+	),
+
+	TP_fast_assign(
+		__entry->start = start;
+		__entry->end = end;
+	),
+
+	TP_printk("[0x%p:0x%p]",
+		(void *)__entry->start,
+		(void *)__entry->end
+	)
+);
+
+DEFINE_EVENT(mpx_range_trace, mpx_unmap_zap,
+	TP_PROTO(unsigned long start, unsigned long end),
+	TP_ARGS(start, end)
+);
+
+DEFINE_EVENT(mpx_range_trace, mpx_unmap_search,
+	TP_PROTO(unsigned long start, unsigned long end),
+	TP_ARGS(start, end)
+);
+
 #else
 
 /*
diff -puN arch/x86/mm/mpx.c~mpx-trace_unmap_zap arch/x86/mm/mpx.c
--- a/arch/x86/mm/mpx.c~mpx-trace_unmap_zap 2015-04-22 11:16:19.459876286 -0700
+++ b/arch/x86/mm/mpx.c 2015-04-22 11:16:19.463876466 -0700
@@ -670,6 +670,7 @@ static int zap_bt_entries(struct mm_stru

 		len = min(vma->vm_end, end) - addr;
 		zap_page_range(vma, addr, len, NULL);
+		trace_mpx_unmap_zap(addr, addr+len);
 
 		vma = vma->vm_next;
 		addr = vma->vm_start;
@@ -842,6 +843,7 @@ static int mpx_unmap_tables(struct mm_st
 	long __user *bd_entry, *bde_start, *bde_end;
 	unsigned long bt_addr;
 
+	trace_mpx_unmap_search(start, end);
 	/*
 	 * "Edge" bounds tables are those which are being used by the region
 	 * (start -> end), but that may be shared with adjacent areas. If they
_
