From: Zqiang <qiang1.zhang@intel.com>
Subject: [PATCH v3] mm: Make vmalloc_dump_obj() run in a clean context
Date: 2022-11-17

Currently, mem_dump_obj() can be invoked from call_rcu(), and call_rcu()
may itself run in a non-preemptible code section. For an object allocated
from vmalloc(), the following deadlock scenario can then occur:

CPU 0
task context
  spin_lock(&vmap_area_lock)
    interrupt context
      call_rcu()
        mem_dump_obj()
          vmalloc_dump_obj()
            spin_lock(&vmap_area_lock) <-- deadlock

Moreover, on a PREEMPT_RT kernel the spinlock is converted to a sleepable
lock, so the vmap_area_lock spinlock must not be acquired in a
non-preemptible code section at all. Therefore, this commit makes
vmalloc_dump_obj() bail out unless it is invoked in a clean context.
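
For illustration, here is a minimal sketch of one way such a chain can
arise (hypothetical driver code, not part of this patch; it assumes the
CONFIG_DEBUG_OBJECTS_RCU_HEAD path, where a duplicate call_rcu() on an
already-queued rcu_head makes the RCU debug code call mem_dump_obj() on
the enclosing object):

/* Hypothetical illustration only -- not part of this patch. */
#include <linux/interrupt.h>
#include <linux/rcupdate.h>
#include <linux/vmalloc.h>

struct foo {
	struct rcu_head rhead;
	/* payload */
};

static void foo_free_rcu(struct rcu_head *rhead)
{
	vfree(container_of(rhead, struct foo, rhead));
}

/*
 * Interrupt context: if f->rhead is already queued, the debug-objects
 * code makes call_rcu() invoke mem_dump_obj() on the vmalloc()ed
 * object, which reaches vmalloc_dump_obj() and tries to acquire
 * vmap_area_lock.  If the interrupted task already holds that lock
 * (see the scenario above), CPU 0 deadlocks.
 */
static irqreturn_t foo_irq(int irq, void *data)
{
	struct foo *f = data;

	call_rcu(&f->rhead, foo_free_rcu);
	return IRQ_HANDLED;
}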

Signed-off-by: Zqiang <qiang1.zhang@intel.com>
---
v1->v2:
Add an IS_ENABLED(CONFIG_PREEMPT_RT) check.
v2->v3:
Reword the commit message and add comments.
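
For reference, the preemptible() test that the v2 check relies on
expands roughly as follows when CONFIG_PREEMPT_COUNT is set (which
PREEMPT_RT selects); see include/linux/preempt.h:

#define preemptible()	(preempt_count() == 0 && !irqs_disabled())

So !preemptible() covers both preemption-disabled and interrupt-disabled
sections, exactly the contexts in which a sleepable vmap_area_lock must
not be taken.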

 mm/util.c    |  4 +++-
 mm/vmalloc.c | 25 +++++++++++++++++++++++++
 2 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/mm/util.c b/mm/util.c
index 12984e76767e..2b0222a728cc 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1128,7 +1128,9 @@ void mem_dump_obj(void *object)
 		return;
 
 	if (virt_addr_valid(object))
-		type = "non-slab/vmalloc memory";
+		type = "non-slab memory";
+	else if (is_vmalloc_addr(object))
+		type = "vmalloc memory";
 	else if (object == NULL)
 		type = "NULL pointer";
 	else if (object == ZERO_SIZE_PTR)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ccaa461998f3..4351eafbe7ab 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4034,6 +4034,31 @@ bool vmalloc_dump_obj(void *object)
 	struct vm_struct *vm;
 	void *objp = (void *)PAGE_ALIGN((unsigned long)object);
 
+	/* For a non-vmalloc address, return directly. */
+	if (!is_vmalloc_addr(objp))
+		return false;
+
+	/*
+	 * On a PREEMPT_RT kernel, the vmap_area_lock spinlock is
+	 * converted to a sleepable lock, so it must not be acquired
+	 * in a section that disables interrupts or preemption.
+	 * Checking in_interrupt() alone is therefore not enough:
+	 * bail out unless we are preemptible.  On a non-PREEMPT_RT
+	 * kernel, the in_interrupt() check below is sufficient.
+	 */
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && !preemptible())
+		return false;
+
+	/*
+	 * If we get here on a PREEMPT_RT kernel, preemptible() is
+	 * true, which in turn implies that in_interrupt() is false.
+	 * On a non-PREEMPT_RT kernel, checking in_interrupt() is
+	 * sufficient to avoid deadlock: we may have interrupted a
+	 * task that already holds vmap_area_lock.
+	 */
+	if (in_interrupt())
+		return false;
+
 	vm = find_vm_area(objp);
 	if (!vm)
 		return false;
--
2.25.1
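
To see why the mm/util.c hunk splits the old "non-slab/vmalloc memory"
string, here is a simplified sketch of the fallback path in mem_dump_obj()
after this patch (condensed from mm/util.c; mem_dump_obj_sketch() is a
made-up name, and the slab and ZERO_SIZE_PTR branches are omitted). When
vmalloc_dump_obj() now bails out because the context is not clean, the
object must still be classified, hence the new "vmalloc memory" branch:

/* Condensed sketch of mem_dump_obj(); not the literal mm/util.c code. */
void mem_dump_obj_sketch(void *object)
{
	const char *type;

	if (vmalloc_dump_obj(object))
		return;			/* full vmalloc dump succeeded */

	if (virt_addr_valid(object))
		type = "non-slab memory";
	else if (is_vmalloc_addr(object))
		/* Reached when vmalloc_dump_obj() bailed out above. */
		type = "vmalloc memory";
	else if (object == NULL)
		type = "NULL pointer";
	else
		type = "non-paged memory";

	pr_cont(" %s\n", type);
}

Before this change, an atomic-context caller passing a vmalloc address
would instead have proceeded into vmalloc_dump_obj() and risked the
deadlock shown in the commit message.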