
Shady areas of TCP window autotuning?

In a message written on Mon, Mar 16, 2009 at 10:15:37AM +0100, Marian Ďurkovič wrote:
> This however doesn't seem to be of any concern for TCP maintainers of #2,
> who claim that receiver is not supposed to anyhow assist in congestion
> control. Instead, they advise everyone to use advanced queue management,
> RED or other congestion-control mechanisms at the sender and at every
> network device to avoid this behaviour.

I think the advice here is good, but it actually overlooks the
larger problem.

Many edge devices have queues that are way too large.

What appears to happen is that vendors don't auto-size queues.  Something
like a cable or DSL modem may be designed for a maximum speed of
10Mbps, and the vendor sizes the queue appropriately for that speed.
The service provider then deploys the device at 2.5Mbps, which means
the queue should be roughly (the details can be more complex) 1/4th
the size.  However the software doesn't scale the buffer to the link
speed, and the operator doesn't adjust the buffer size in their config.

The result is that if the vendor targeted 100ms of buffer at the
design speed, the same buffer now takes 400ms to drain at the deployed
speed, and users see really bad lag.
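The arithmetic above can be sketched in a few lines of Python. The speeds and delay targets are the ones from the example; the function names are just illustrative.

```python
# Sketch of the queueing-delay arithmetic above: a buffer sized for a
# given delay at the design speed takes proportionally longer to drain
# when the device is deployed at a lower link speed.

def buffer_bytes(link_bps, target_delay_s):
    """Bytes needed to hold target_delay_s worth of traffic at link_bps."""
    return link_bps * target_delay_s / 8

def drain_time_ms(buffer_b, link_bps):
    """Worst-case queueing delay (ms) when the buffer is full."""
    return buffer_b * 8 / link_bps * 1000

design_buffer = buffer_bytes(10_000_000, 0.100)   # sized for 10 Mbps, 100 ms
print(drain_time_ms(design_buffer, 10_000_000))   # 100.0 ms at design speed
print(drain_time_ms(design_buffer, 2_500_000))    # 400.0 ms deployed at 2.5 Mbps
```

The buffer is a fixed number of bytes, so cutting the link speed to 1/4 multiplies the worst-case queueing delay by 4.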

As network operators we have to get out of the mindset that "packet
drops are bad".  While that may be true when planning the backbone
to have sufficient bandwidth, it's the exact opposite when managing
congestion at the edge.  Reducing the buffer to ~50ms of bandwidth
makes the users a lot happier, and allows TCP's congestion control
to work: TCP needs drops to find the right speed.

My wish is for the vendors to step up.  I would love to be able to
configure my router/cable modem/dsl box with "queue-size 50ms" and
have it compute, for the current link speed, 50ms of buffer.  Sure,
I can do that by hand and turn it into "queue 20 packets", but that
is very manual and must be done for every different link speed (at
least, at slower speeds).  Operators don't adjust because it is too
much work.
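The hand computation a "queue-size 50ms" knob would automate is simple enough to sketch. This assumes full-size 1500-byte packets; the function name and the 5Mbps example speed are illustrative, not from the original post.

```python
import math

def queue_packets(link_bps, target_delay_s=0.050, mtu_bytes=1500):
    """Convert a delay target into a packet-count queue limit for a link."""
    bytes_in_flight = link_bps * target_delay_s / 8   # bytes the link sends in the target delay
    return max(1, math.ceil(bytes_in_flight / mtu_bytes))

# e.g. a 5 Mbps link: 5e6 * 0.050 / 8 = 31250 bytes, about 21 full-size packets
print(queue_packets(5_000_000))
```

Note the result changes with every link speed (and with packet size), which is exactly why doing this by hand for each deployment is too much work for operators.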

If network operators could get the queue sizes fixed, then it might
be worth worrying about the behavior you describe; however, I suspect
90% of the problem you describe would simply go away.

       Leo Bicknell - bicknell at ufp.org - CCIE 3440
        PGP keys at http://www.ufp.org/~bicknell/