TCP time_wait and port exhaustion for servers
- Subject: TCP time_wait and port exhaustion for servers
- From: kyrian at ore.org (Kyrian)
- Date: Thu, 06 Dec 2012 13:25:28 +0000
- In-reply-to: <[email protected]>
- References: <[email protected]>
On 5 Dec 2012, rps at maine.edu wrote:
> > Where there is no way to change this through /proc
> Those netfilter connection tracking tunables have nothing to do with the
> kernel's TCP socket handling.
No, but these do...
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_time = 90
net.ipv4.tcp_fin_timeout = 30
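For anyone wanting to try the same thing, these are just the values quoted above dropped into /etc/sysctl.conf so they survive a reboot:

```
# /etc/sysctl.conf fragment -- same values as quoted above.
# Load with `sysctl -p`, or set one at runtime with e.g.
#   sysctl -w net.ipv4.tcp_fin_timeout=30
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_time = 90
net.ipv4.tcp_fin_timeout = 30
```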
I think the OP was wrong, and missed something.
I'm no TCP/IP expert, but as I understand it the keepalive tunables
above govern idle established connections (X number of probes at Y
interval until the remote end is declared likely dead and gone), while
tcp_fin_timeout bounds how long an orphaned connection may sit in
FIN_WAIT2. A connection being closed passes through FIN_WAIT1 and
FIN_WAIT2 before it reaches TIME_WAIT, not the other way around. Those
tunables certainly seem to have worked in the real world for me,
whether they are right "in theory" or not is possibly another matter.
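If it helps, you can see for yourself which states a box's sockets are
in by reading /proc/net/tcp, where the fourth column is the state in
hex (01 ESTABLISHED, 04 FIN_WAIT1, 05 FIN_WAIT2, 06 TIME_WAIT, per the
kernel's tcp_states.h). A rough Python sketch, run here against a
canned sample rather than the live file:

```python
from collections import Counter

# TCP state codes as used in /proc/net/tcp (kernel tcp_states.h).
STATES = {
    "01": "ESTABLISHED", "02": "SYN_SENT", "03": "SYN_RECV",
    "04": "FIN_WAIT1", "05": "FIN_WAIT2", "06": "TIME_WAIT",
    "07": "CLOSE", "08": "CLOSE_WAIT", "09": "LAST_ACK",
    "0A": "LISTEN", "0B": "CLOSING",
}

def count_states(lines):
    """Count sockets per TCP state, given /proc/net/tcp data lines."""
    counts = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) > 3:
            counts[STATES.get(fields[3], fields[3])] += 1
    return counts

# Canned sample in /proc/net/tcp layout: sl, local, remote, state, ...
sample = [
    "0: 0100007F:1F90 00000000:0000 0A 00000000:00000000",
    "1: 0100007F:D431 0100007F:1F90 01 00000000:00000000",
    "2: 0100007F:D432 0100007F:1F90 06 00000000:00000000",
    "3: 0100007F:D433 0100007F:1F90 06 00000000:00000000",
]
print(count_states(sample))
# On a live Linux box, skip the header line:
# count_states(open("/proc/net/tcp").read().splitlines()[1:])
```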
Broadly speaking I agree with the other posters who've suggested
adding other IP addresses and widening the available local port range.
I'm assuming the talk of 30k connections is because the OP's proxy has
a 'one in one out' situation going on with connections, and that's why
the ~65k port pool is effectively halved.
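That halving works out roughly as below (the 32768-61000 default for
net.ipv4.ip_local_port_range is my assumption; the OP's box may be
configured differently):

```python
# Ephemeral ports available per (local IP, remote IP, remote port)
# tuple. 32768-61000 is a common Linux default for
# net.ipv4.ip_local_port_range; check the box's actual setting.
default_lo, default_hi = 32768, 61000
default_pool = default_hi - default_lo   # 28232 ports

# Widened range, as the other posters suggested (e.g. 1024-65535):
wide_lo, wide_hi = 1024, 65535
wide_pool = wide_hi - wide_lo            # 64511 ports

# A 'one in one out' proxy holds two sockets per proxied client, so
# the clients it can carry is roughly the pool halved:
clients_default = default_pool // 2      # 14116
clients_wide = wide_pool // 2            # 32255 -- the ~30k figure
print(default_pool, clients_default)
print(wide_pool, clients_wide)
```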