
Shady areas of TCP window autotuning?

On Tue, 17 Mar 2009 10:39:13 -0500, Leo Bicknell wrote
> > So at the end of the day, we'll again have a system which is unable to
> > achieve good performance over high BDP paths, since with reduced buffers
> > we'll have an underbuffered bottleneck in the path which will prevent full
> > link untilization if RTT>50 msec. Thus all the above exercises will end up
> > in having almost the same situation as before (of course YMMV).
> This is an incorrect conclusion.  The host buffer has to wait for
> an RTT for an ack to return, so it has to buffer a full RTT of data
> and then some.  Hop by hop buffers only have to buffer until an
> output port on the same device is free. 
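The full-RTT host buffer Leo describes is just the bandwidth-delay product of the path. A quick illustrative sketch (link speed and RTT are made-up example values):

```python
def bdp_bytes(rate_bps, rtt_s):
    """Bandwidth-delay product: how much unacknowledged data the
    sender must hold while waiting one RTT for ACKs to return."""
    return rate_bps * rtt_s / 8  # bits -> bytes

# A 1 Gbit/s path at 100 ms RTT needs roughly a 12.5 MB send buffer
print(bdp_bytes(1e9, 0.100))
```

So the host-side requirement grows with the path RTT, while a hop-by-hop buffer only covers the much shorter wait for a free output port.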


> However, if the hop-by-hop buffers are filling and there is lag and
> jitter, that's a sign the hop-by-hop buffers were always too large.
> 99.99% of devices ship with buffers that are too large.

Vendors size their buffers according to principles outlined, e.g., here:


It's fine to have smaller buffers in the high-speed core, but at the edge you
still need to buffer a full RTT's worth of data if you want to fully utilize
the link with TCP Reno. Thus my conclusion holds: if we reduce buffers at the
bottleneck point to 50 msec, flows with RTT>50 msec will suffer reduced
throughput.
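To put rough numbers on this: with a drop-tail FIFO of size B at the bottleneck, a single Reno flow's cwnd peaks at C*RTT + B (pipe plus queue full), then halves on loss, so its sending rate dips to (C*RTT + B)/(2*RTT). A back-of-envelope sketch (the 50/100 msec figures match the example above; capacity cancels out):

```python
def min_utilization(rtt_s, buffer_s):
    """Worst-case link utilization of a single Reno flow right after a
    loss at a drop-tail bottleneck whose buffer holds `buffer_s`
    seconds of line-rate traffic.

    cwnd peaks at C*RTT + B, halves on loss, so the minimum sending
    rate is (C*RTT + B) / (2*RTT); divide by C for utilization."""
    return min(1.0, (rtt_s + buffer_s) / (2 * rtt_s))

# 50 msec buffer, 100 msec RTT: rate dips to ~75% of capacity
print(min_utilization(0.100, 0.050))
# Full-BDP buffer (100 msec) keeps the link saturated through the loss
print(min_utilization(0.100, 0.100))
```

This ignores multiple flows and slow start, but it shows why a 50 msec buffer penalizes exactly the flows with RTT above 50 msec.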

Anyway, we probably have no other choice in situations where the only available
queueing is FIFO. And if this gets deployed on a larger scale, it could even
have a positive side effect: it might finally motivate OS maintainers to
seriously consider shipping a delay-sensitive TCP variant, since Reno would no
longer give them the best results.