From: Waiman Long <longman@redhat.com>
Subject: [PATCH 5.17 0007/1126] locking/lockdep: Avoid potential access of invalid memory in lock_class

commit 61cc4534b6550997c97a03759ab46b29d44c0017 upstream.

It was found that reading /proc/lockdep after a lockdep splat may
cause an access to freed memory if lockdep_unregister_key() is called
after the splat but before /proc/lockdep is read [1]. This happens
because the graph_lock() call in lockdep_unregister_key() fails once
debug_locks has been cleared by the splat process.
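
For context, graph_lock() backs off as soon as debug_locks is clear. A
simplified sketch of the helper in kernel/locking/lockdep.c (not the
verbatim source):

static int graph_lock(void)
{
	lockdep_lock();
	/*
	 * If another CPU has already detected a bug and cleared
	 * debug_locks, give up: the caller sees a failure and skips
	 * its cleanup work entirely.
	 */
	if (!debug_locks) {
		lockdep_unlock();
		return 0;
	}
	return 1;
}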

After lockdep_unregister_key() is called, the lock name may be freed
while the corresponding lock_class structure still holds a reference to
it. That dangling pointer will then be dereferenced when a user reads
/proc/lockdep, and a use-after-free (UAF) error will be reported if
KASAN is enabled.
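
To illustrate the window, consider a driver that pairs a dynamically
allocated key with a dynamically allocated class name. struct foo,
foo_create() and foo_destroy() below are hypothetical; only the
lockdep_register_key()/lockdep_unregister_key() and
lockdep_set_class_and_name() calls are real API:

#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/lockdep.h>

struct foo {
	spinlock_t lock;
	struct lock_class_key key;
	char *name;
};

static struct foo *foo_create(int id)
{
	struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return NULL;
	f->name = kasprintf(GFP_KERNEL, "foo-%d", id);
	if (!f->name) {
		kfree(f);
		return NULL;
	}
	lockdep_register_key(&f->key);
	spin_lock_init(&f->lock);
	/* The lock_class created for f->lock will point at f->name. */
	lockdep_set_class_and_name(&f->lock, &f->key, f->name);
	return f;
}

static void foo_destroy(struct foo *f)
{
	/*
	 * Before the fix: if a splat has already cleared debug_locks,
	 * graph_lock() fails inside here and the lock class is never
	 * zapped, so it keeps pointing at f->name.
	 */
	lockdep_unregister_key(&f->key);

	/*
	 * Freeing the name now leaves /proc/lockdep readers with a
	 * dangling lock_class::name pointer (the KASAN-reported UAF).
	 */
	kfree(f->name);
	kfree(f);
}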

To fix this problem, lockdep_unregister_key() is modified to always
search for a matching key irrespective of the debug_locks state, and to
zap the corresponding lock class if a matching key is found.

[1] https://lore.kernel.org/lkml/77f05c15-81b6-bddd-9650-80d5f23fe330@i-love.sakura.ne.jp/

Fixes: 8b39adbee805 ("locking/lockdep: Make lockdep_unregister_key() honor 'debug_locks' again")
Reported-by: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Cc: Cheng-Jui Wang <cheng-jui.wang@mediatek.com>
Link: https://lkml.kernel.org/r/20220103023558.1377055-1-longman@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 kernel/locking/lockdep.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -6290,7 +6290,13 @@ void lockdep_reset_lock(struct lockdep_m
 	lockdep_reset_lock_reg(lock);
 }
 
-/* Unregister a dynamically allocated key. */
+/*
+ * Unregister a dynamically allocated key.
+ *
+ * Unlike lockdep_register_key(), a search is always done to find a matching
+ * key irrespective of debug_locks to avoid potential invalid access to freed
+ * memory in lock_class entry.
+ */
 void lockdep_unregister_key(struct lock_class_key *key)
 {
 	struct hlist_head *hash_head = keyhashentry(key);
@@ -6305,10 +6311,8 @@ void lockdep_unregister_key(struct lock_
 		return;
 
 	raw_local_irq_save(flags);
-	if (!graph_lock())
-		goto out_irq;
+	lockdep_lock();
 
-	pf = get_pending_free();
 	hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
 		if (k == key) {
 			hlist_del_rcu(&k->hash_entry);
@@ -6316,11 +6320,13 @@ void lockdep_unregister_key(struct lock_
 			break;
 		}
 	}
-	WARN_ON_ONCE(!found);
-	__lockdep_free_key_range(pf, key, 1);
-	call_rcu_zapped(pf);
-	graph_unlock();
-out_irq:
+	WARN_ON_ONCE(!found && debug_locks);
+	if (found) {
+		pf = get_pending_free();
+		__lockdep_free_key_range(pf, key, 1);
+		call_rcu_zapped(pf);
+	}
+	lockdep_unlock();
 	raw_local_irq_restore(flags);
 
 	/* Wait until is_dynamic_key() has finished accessing k->hash_entry. */
