Date: 2022-05-30
From: Muhammad Usama Anjum <usama.anjum@collabora.com>
Subject: Re: [RFC] EADDRINUSE from bind() on application restart after killing
Hi,

Thank you for your reply.

On 5/25/22 3:13 AM, Eric Dumazet wrote:
> On Tue, May 24, 2022 at 1:19 AM Muhammad Usama Anjum
> <usama.anjum@collabora.com> wrote:
>>
>> Hello,
>>
>> We have a set of processes which talk to each other through a local
>> TCP socket. If the process(es) are killed (with SIGKILL) and
>> restarted at once, bind() fails with EADDRINUSE. The error only
>> appears if the application is restarted immediately, without waiting
>> 60 seconds or more. It seems there is a roughly 60-second period
>> during which the previous TCP connection lingers before it is closed
>> completely. If we restart and bind() again within that period, we get
>> the error.
>>
>> We are able to avoid this error with a hack: setting the SO_REUSEADDR
>> socket option on the listening socket. But we cannot add this hack to
>> the application process, as we don't own it.
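
For anyone hitting the same problem: the workaround is the standard
SO_REUSEADDR pattern below. A minimal sketch with error handling
trimmed, not our actual application code.

#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>

static int make_listener(uint16_t port)
{
	int one = 1;
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(port),
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};

	/* Let bind() succeed even while connections from a killed
	 * instance still linger in FIN_WAIT/TIME_WAIT. */
	setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

	bind(fd, (struct sockaddr *)&addr, sizeof(addr));
	listen(fd, SOMAXCONN);
	return fd;
}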
>>
>> I've looked at the TCP connection states after killing processes in
>> different ways. The TCP connection ends up in 2 different states with
>> timeouts:
>>
>> (1) Timeout associated with FIN_WAIT_1 state which is set through
>> `tcp_fin_timeout` in procfs (60 seconds by default)
>>
>> (2) Timeout associated with TIME_WAIT state which cannot be changed. It
>> seems like this timeout has come from RFC 1337.
>>
>> The timeout in (1) can be changed. The timeout in (2) cannot be
>> changed, and changing it doesn't seem feasible anyway, as the RFC
>> mentions several hazards. But we are talking about a local TCP
>> connection, where those hazards may not apply directly. Would it be
>> possible to change the TIME_WAIT timeout for local connections only,
>> without any hazards?
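
For context: the TIME_WAIT period is not a sysctl at all; it is
hardcoded in the kernel in include/net/tcp.h:

#define TCP_TIMEWAIT_LEN (60*HZ) /* how long to wait to destroy TIME-WAIT
				  * state, about 60 seconds	*/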
>>
>> We have tested a hack where, for local connections only, we replace
>> the TIME_WAIT timeout with a value read from procfs. This solves our
>> problem, and the application starts working without any modification.
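
Roughly, the hack changes the timeout that tcp_time_wait() in
net/ipv4/tcp_minisocks.c passes to inet_twsk_schedule(). An
illustrative sketch only; "sysctl_tcp_local_tw_timeout" is a made-up
knob, not an existing sysctl:

	/* Hypothetical: use a shorter, procfs-configurable timewait
	 * timeout when the peer is a loopback address. */
	if (ipv4_is_loopback(inet_sk(sk)->inet_daddr))
		timeo = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_local_tw_timeout);

	inet_twsk_schedule(tw, timeo);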
>>
>> The question is: what would be the best possible solution here? Any
>> thoughts would be very helpful.
>>
>
> One solution would be to extend TCP diag to support killing TIME_WAIT sockets.
> (This has been raised recently anyway)
I think this has been raised here:
https://lore.kernel.org/netdev/ba65f579-4e69-ae0d-4770-bc6234beb428@gmail.com/

>
> Then you could zap all sockets before restarting your program.
>
> ss -K -ta src :listen_port
>
> Untested patch:
The following command and patch work for my use case. Sockets in
FIN_WAIT_2 or TIME_WAIT state are closed when zapped.

Can you please upstream this patch?
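
For anyone else trying this: as far as I understand, "ss -K" sends a
SOCK_DESTROY sock_diag request for each matching socket, which ends up
in tcp_abort(), so it needs a kernel built with
CONFIG_INET_DIAG_DESTROY=y and CAP_NET_ADMIN.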

>
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index 9984d23a7f3e1353d2e1fc9053d98c77268c577e..1b7bde889096aa800b2994c64a3a68edf3b62434 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -4519,6 +4519,15 @@ int tcp_abort(struct sock *sk, int err)
>  			local_bh_enable();
>  			return 0;
>  		}
> +		if (sk->sk_state == TCP_TIME_WAIT) {
> +			struct inet_timewait_sock *tw = inet_twsk(sk);
> +
> +			refcount_inc(&tw->tw_refcnt);
> +			local_bh_disable();
> +			inet_twsk_deschedule_put(tw);
> +			local_bh_enable();
> +			return 0;
> +		}
>  		return -EOPNOTSUPP;
>  	}
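
If I read the patch correctly, the refcount_inc() is what keeps this
safe: inet_twsk_deschedule_put() drops one reference to the timewait
socket when it kills it, while the diag caller that looked the socket
up still holds (and later releases) its own reference, so an extra
hold is taken to keep the counts balanced.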

--
Muhammad Usama Anjum
