From: Waiman Long <longman@redhat.com>
Subject: [PATCH 2/2] mm/kmemleak: Fix UAF bug in kmemleak_scan()
Commit 6edda04ccc7c ("mm/kmemleak: prevent soft lockup in first
object iteration loop of kmemleak_scan()") fixes the soft lockup
problem in kmemleak_scan() by periodically doing a cond_resched(). It
takes a reference on the current object before doing so to ensure
that the object itself cannot be freed. Unfortunately, if the object
has been deleted from the object_list in the meantime, the next
object pointed to by its next pointer may no longer be valid after
coming back from cond_resched(). This can result in a use-after-free
and other nasty problems.
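
A simplified sketch of the racy pattern (illustrative only; not the
exact kmemleak_scan() code):

	rcu_read_lock();
	list_for_each_entry_rcu(object, &object_list, object_list) {
		...
		if (need_resched())
			kmemleak_cond_resched(object);
		/*
		 * kmemleak_cond_resched() drops the RCU read lock around
		 * cond_resched(). The reference it takes keeps "object"
		 * itself alive, but if "object" is deleted from the
		 * object_list while we sleep, nothing updates its next
		 * pointer any more. The object that pointer refers to
		 * can then be freed once a grace period elapses, and the
		 * next loop iteration walks into freed memory.
		 */
	}
	rcu_read_unlock();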

Fix this problem by restarting the object scan from the beginning of
the object_list if the current object has been de-allocated by the
time cond_resched() returns.

Fixes: 6edda04ccc7c ("mm/kmemleak: prevent soft lockup in first object iteration loop of kmemleak_scan()")
Signed-off-by: Waiman Long <longman@redhat.com>
---
mm/kmemleak.c | 23 +++++++++++++++++------
1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 8c44f70ed457..d3a8fa4e3af3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1465,15 +1465,26 @@ static void scan_gray_list(void)
  * that the given object won't go away without RCU read lock by performing a
  * get_object() if necessaary.
  */
-static void kmemleak_cond_resched(struct kmemleak_object *object)
+static void kmemleak_cond_resched(struct kmemleak_object **pobject)
 {
-	if (!get_object(object))
+	struct kmemleak_object *obj = *pobject;
+
+	if (!(obj->flags & OBJECT_ALLOCATED) || !get_object(obj))
 		return;	/* Try next object */
 
 	rcu_read_unlock();
 	cond_resched();
 	rcu_read_lock();
-	put_object(object);
+	put_object(obj);
+
+	/*
+	 * In the unlikely event that the object had been de-allocated, we
+	 * have to restart the scanning from the beginning of the object_list
+	 * as the object pointed to by the next pointer may have been freed.
+	 */
+	if (unlikely(!(obj->flags & OBJECT_ALLOCATED)))
+		*pobject = list_entry_rcu(object_list.next,
+					  typeof(*obj), object_list);
 }
 
 /*
@@ -1524,7 +1535,7 @@ static void kmemleak_scan(void)
 		raw_spin_unlock_irq(&object->lock);
 
 		if (need_resched())
-			kmemleak_cond_resched(object);
+			kmemleak_cond_resched(&object);
 	}
 	rcu_read_unlock();
 
@@ -1593,7 +1604,7 @@ static void kmemleak_scan(void)
 	rcu_read_lock();
 	list_for_each_entry_rcu(object, &object_list, object_list) {
 		if (need_resched())
-			kmemleak_cond_resched(object);
+			kmemleak_cond_resched(&object);
 
 		/*
 		 * This is racy but we can save the overhead of lock/unlock
@@ -1630,7 +1641,7 @@ static void kmemleak_scan(void)
 	rcu_read_lock();
 	list_for_each_entry_rcu(object, &object_list, object_list) {
 		if (need_resched())
-			kmemleak_cond_resched(object);
+			kmemleak_cond_resched(&object);
 
 		/*
 		 * This is racy but we can save the overhead of lock/unlock
--
2.31.1