Subject: Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.
    On Tue, Dec 01, 2020 at 11:44:10PM +0900, Kuniyuki Iwashima wrote:

> @@ -242,8 +244,12 @@ void reuseport_detach_sock(struct sock *sk)
>
>          reuse->num_socks--;
>          reuse->socks[i] = reuse->socks[reuse->num_socks];
> +        prog = rcu_dereference(reuse->prog);
>
>          if (sk->sk_protocol == IPPROTO_TCP) {
> +                if (reuse->num_socks && !prog)
> +                        nsk = i == reuse->num_socks ? reuse->socks[i - 1] : reuse->socks[i];
I asked in the earlier thread whether the primary use case is to use only the bpf prog to pick the migration target. That thread did not reach a solid answer, but it did conclude that the sysctl should not control the behavior of the BPF_SK_REUSEPORT_SELECT_OR_MIGRATE prog.

From this change, it seems it is still desired to depend on the kernel to do a random pick even when no bpf prog is attached. If that is the case, a sysctl guarding this path, so that the current behavior does not change, makes sense. It should still control only the non-bpf-pick behavior: when the sysctl is on, the kernel will still do a random pick when there is no bpf prog attached to the reuseport group. Thoughts?
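
To make the suggestion concrete, the guarded pick could look roughly like the following. This is only a sketch: sysctl_tcp_migrate_req is a name I am assuming for the knob here, not something this series currently defines.

        prog = rcu_dereference(reuse->prog);

        if (sk->sk_protocol == IPPROTO_TCP) {
                /* Kernel-side pick only when the (assumed) sysctl is
                 * enabled and no bpf prog is attached to the group.
                 */
                if (reuse->num_socks && !prog &&
                    READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_migrate_req))
                        nsk = i == reuse->num_socks ? reuse->socks[i - 1]
                                                    : reuse->socks[i];

                reuse->num_closed_socks++;
                reuse->socks[reuse->max_socks - reuse->num_closed_socks] = sk;
        }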

> +
>                  reuse->num_closed_socks++;
>                  reuse->socks[reuse->max_socks - reuse->num_closed_socks] = sk;
>          } else {
> @@ -264,6 +270,8 @@ void reuseport_detach_sock(struct sock *sk)
>          call_rcu(&reuse->rcu, reuseport_free_rcu);
>  out:
>          spin_unlock_bh(&reuseport_lock);
> +
> +        return nsk;
>  }
>  EXPORT_SYMBOL(reuseport_detach_sock);
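
For completeness, the listener-close path would then presumably consume the returned socket along these lines; inet_csk_reqsk_queue_migrate() is only a stand-in name here for whatever helper actually moves the accept queue, not something taken from this patch:

        struct sock *nsk;

        nsk = reuseport_detach_sock(sk);
        if (nsk)
                inet_csk_reqsk_queue_migrate(sk, nsk);  /* stand-in name */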
