    Subject: Re: [syzbot] [net?] KASAN: slab-use-after-free Write in mini_qdisc_pair_swap
    +Cc: Vlad Buslov, Hillf Danton

    Hi all,

    On Mon, Apr 17, 2023 at 04:00:11PM -0700, Peilin Ye wrote:
    > I also reproduced this UAF using the syzkaller reproducer in the report
    > (the C reproducer did not work for me for unknown reasons). I will look
    > into this.

    Currently, multiple ingress (clsact) Qdiscs can access the per-netdev
    *miniq_ingress (*miniq_egress) pointer concurrently. This is
    unfortunately true in two senses:

    1. We allow adding ingress (clsact) Qdiscs under parents other than
    TC_H_INGRESS (TC_H_CLSACT):

    $ ip link add ifb0 numtxqueues 8 type ifb
    $ echo clsact > /proc/sys/net/core/default_qdisc
    $ tc qdisc add dev ifb0 handle 1: root mq
    $ tc qdisc show dev ifb0
    qdisc mq 1: root
    qdisc clsact 0: parent 1:8
    qdisc clsact 0: parent 1:7
    qdisc clsact 0: parent 1:6
    qdisc clsact 0: parent 1:5
    qdisc clsact 0: parent 1:4
    qdisc clsact 0: parent 1:3
    qdisc clsact 0: parent 1:2
    qdisc clsact 0: parent 1:1

    This is obviously racy and should be prohibited. I've started working
    on patches to fix this. The syz repro for this UAF adds ingress Qdiscs
    under TC_H_ROOT, by the way.
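
    For illustration, the check could be as simple as refusing, in the
    ingress/clsact ->init(), any parent other than TC_H_INGRESS
    (TC_H_CLSACT is #defined to TC_H_INGRESS); this is just a sketch of
    the idea, not the actual patch:

        /* sketch: net/sched/sch_ingress.c */
        static int ingress_init(struct Qdisc *sch, struct nlattr *opt,
                                struct netlink_ext_ack *extack)
        {
                /* Only one ingress Qdisc should ever bind to the
                 * per-netdev miniq_ingress pointer, so refuse to be
                 * grafted anywhere other than under TC_H_INGRESS.
                 */
                if (sch->parent != TC_H_INGRESS)
                        return -EOPNOTSUPP;

                /* ... existing initialization unchanged ... */
        }

    clsact_init() would need the same treatment.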

    2. After introducing RTNL-lockless RTM_{NEW,DEL,GET}TFILTER requests
    [1], it is possible that, when replacing ingress (clsact) Qdiscs, the
    old one can access *miniq_{in,e}gress concurrently with the new one. For
    example, the syz repro does something like the following:

    Thread 1 creates sch_ingress Qdisc A (containing mini Qdisc a1 and a2),
    then adds a cls_flower filter X to Qdisc A.

    Thread 2 creates sch_ingress Qdisc B (containing mini Qdisc b1 and b2)
    to replace Qdisc A, then adds a cls_flower filter Y to Qdisc B.

    Device has 8 TXQs.
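
    For reference, the mini Qdiscs a1/a2 (b1/b2) above are the two
    mini_Qdisc members embedded in each ingress Qdisc's mini_Qdisc_pair,
    whose p_miniq points at the shared per-netdev slot; roughly (trimmed,
    field list from memory of include/net/sch_generic.h):

        struct mini_Qdisc {
                struct tcf_proto *filter_list;
                /* ... per-CPU stats pointers ... */
                unsigned long rcu_state;
        };

        struct mini_Qdisc_pair {
                struct mini_Qdisc miniq1;           /* e.g. a1 */
                struct mini_Qdisc miniq2;           /* e.g. a2 */
                struct mini_Qdisc __rcu **p_miniq;  /* &dev->miniq_ingress */
        };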

    Thread 1                     A's refcnt  Thread 2
    RTM_NEWQDISC (A, locked)
     qdisc_create(A)                    1
     qdisc_graft(A)                     9

    RTM_NEWTFILTER (X, lockless)
     __tcf_qdisc_find(A)               10
     tcf_chain0_head_change(A)
   ! mini_qdisc_pair_swap(A)
     |                                       RTM_NEWQDISC (B, locked)
     |                                  2     qdisc_graft(B)
     |                                  1     notify_and_destroy(A)
     |
     |                                       RTM_NEWTFILTER (Y, lockless)
     |                                        tcf_chain0_head_change(B)
     |                                      ! mini_qdisc_pair_swap(B)
     tcf_block_release(A)               0     |
      qdisc_destroy(A)                        |
       tcf_chain0_head_change_cb_del(A)       |
   !    mini_qdisc_pair_swap(A)               |
        |                                     |
       ...                                   ...

    As we can see, there are interleaving mini_qdisc_pair_swap() calls
    between Qdisc A and Qdisc B, causing all kinds of trouble, including
    the UAF reported by syzbot (thread 2 writing to mini Qdisc a1's
    rcu_state after Qdisc A has already been freed).
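
    For clarity, here is the shape of the swap logic, heavily simplified
    from mini_qdisc_pair_swap() in net/sched/sch_generic.c (most of the
    RCU grace-period bookkeeping omitted); the point is that A's and B's
    pairs both have p_miniq aimed at the same &dev->miniq_ingress, so
    their swaps are not isolated from each other:

        static void miniq_pair_swap_sketch(struct mini_Qdisc_pair *miniqp,
                                           struct tcf_proto *tp_head)
        {
                struct mini_Qdisc *miniq_old =
                        rcu_dereference_protected(*miniqp->p_miniq, 1);
                struct mini_Qdisc *miniq;

                if (!tp_head) {
                        /* destroy path: unpublishes whatever is currently
                         * in *p_miniq, even if the *other* Qdisc put it
                         * there
                         */
                        RCU_INIT_POINTER(*miniqp->p_miniq, NULL);
                } else {
                        miniq = miniq_old != &miniqp->miniq1 ?
                                &miniqp->miniq1 : &miniqp->miniq2;
                        miniq->filter_list = tp_head;
                        rcu_assign_pointer(*miniqp->p_miniq, miniq);
                }

                if (miniq_old)
                        /* if miniq_old sits inside a Qdisc that has
                         * already been freed (a1 after qdisc_destroy(A)
                         * above), this is the use-after-free write that
                         * KASAN reports
                         */
                        miniq_old->rcu_state = start_poll_synchronize_rcu();
        }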

    To fix this, I'm cooking a patch for qdisc_graft() so that, when
    replacing ingress (clsact) Qdiscs:

    I. We should make sure there are no on-the-fly lockless filter
    requests for the old Qdisc, and return -EBUSY if there are any
    (or can/should we wait in the RTM_NEWQDISC handler instead?)

    II. We should destroy the old Qdisc before publishing the new one
    (i.e. setting it to dev_ingress_queue(dev)->qdisc_sleeping, so
    that subsequent filter requests can see it), because
    {ingress,clsact}_destroy() also call mini_qdisc_pair_swap(), which
    sets *miniq_{in,e}gress to NULL (sketched below)
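
    A rough sketch of that ordering for the dev_ingress_queue() case in
    qdisc_graft() -- tcf_qdisc_busy() below is a placeholder for whatever
    ends up detecting in-flight lockless filter requests, so please read
    this as pseudocode rather than the actual patch:

        /* sketch only: ingress/clsact branch of qdisc_graft() */
        if (old && tcf_qdisc_busy(old)) {           /* (I), placeholder */
                NL_SET_ERR_MSG(extack,
                               "Old ingress Qdisc has filter requests in flight");
                return -EBUSY;
        }

        /* (II) tear the old Qdisc down first; its ->destroy() does the
         * final mini_qdisc_pair_swap(..., NULL), which must not land
         * after the new Qdisc has published itself into *miniq_ingress.
         * In the real code this happens via notify_and_destroy().
         */
        if (old)
                qdisc_put(old);

        /* only now make the new Qdisc visible to subsequent filter
         * requests
         */
        dev_ingress_queue(dev)->qdisc_sleeping = new;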

    Future Qdiscs that support RTNL-lockless cls_ops, if any, won't need
    this fix, as long as their ->chain_head_change() callbacks don't
    access out-of-Qdisc-scope data, like pointers in struct net_device.

    Do you think this is the right way to go? Thanks!

    [1] Thanks Hillf Danton for the hint:
    https://syzkaller.appspot.com/text?tag=Patch&x=10d7cd5bc80000

    Thanks,
    Peilin Ye
