
[ih] internet-history Digest, Vol 84, Issue 4

Guy - good list of issues.  As one of those "early TCP developers" I can at
least provide my answers.   Not sure how they generalize.  Everything is of
course IIRC - it's been almost 35 years....

There were a variety of things that we knew were important, but were
specifically and intentionally excluded from specification.   In many cases
it was because we didn't know what "the right answer" was, or at least
didn't agree.   The "congestion control algorithm" is one such example.

The Internet was new, it was tiny, and we had had little operational
experience with it.  We of course did experiments, but it wasn't obvious
that you could draw any generalizations from the results in "toy"
configurations that would apply to a larger net, with real traffic patterns
from real-world users.

Additionally, the core idea of an architecture like TCP was new.   Although
there were several pre-existing communications mechanisms in use at the
time (in particular IBM's), all of them were very homogeneous - i.e.,
designed and implemented by a single organization.   The ARPANET had
existed for a decade, but it was also homogeneous - all the IMPs and
software they contained were implemented and operated by BBN.

Additionally, the ARPANET, and other contemporary networks, tended to use a
hop-by-hop approach to implementing a reliable byte-stream service
analogous to TCP's core service.   TCP/IP chose a very different end-to-end
approach, allowing packets to be discarded in transit, or even carved up
into smaller pieces anywhere along the way.   Existing algorithms and
knowledge from the traditional "virtual circuit" networks simply didn't
obviously apply to the "datagram" approach of TCP/IP.

We were in uncharted territory.  That was of course ARPA's charter -
"Advanced Research", meaning trying things that hadn't been tried before.
There was a point in time where the Internet hit a fork in the road.  One
fork was to follow the tried-and-proven ARPANET path, where all of the
routers in the Internet would be designed and built by one organization.
The other fork was to structure the design so that many different designs
and implementations could be used, by different programmers, organizations,
etc.    We knew we could take the former fork and get an Internet to work,
by building from the ARPANET experience, incorporating the internal
mechanisms (including congestion control and flow control) of the ARPANET
into the routers.   We didn't know if it was even possible to go down the
other fork, which was "where no man had gone before".    As we all know,
we're on that second fork and it's been working far better than I think
anyone anticipated.   Google "Kahn Haverty subway strap" if you'd like the
story in an old posting to this list.

There had been a lot of work on the internal algorithms of the ARPANET,
i.e., the protocols used between IMPs, and the algorithms used for traffic
management, error control, et al.  Congestion control was one of the hot
topics, since the ARPANET had grown large and complex enough, with large
and diverse data flows associated with many users, and had been
experiencing congestion events.

I was probably more aware of the ARPANET work than others, since I was
working at BBN and in the same group that was responsible for the ARPANET.
  There was a large group of scientists and mathematicians involved in
observing ARPANET behavior, creating and deploying new algorithms, and
evaluating the results in the live net.

Congestion control was not a solved problem in the ARPANET.   Even if it
was, it wasn't clear that the techniques used in the ARPANET et al would be
applicable, or effective, in the TCP/IP world.

The various "suggestions" in the early TCP RFCs were explicit cases of the
specification declining to select any particular design.  Rather it was
important to have a specification that permitted the research in areas such
as congestion control to continue, developing new ideas and testing them
out in the TCP/IP environment.   In other words, the specification
explicitly did not specify any particular required algorithm.   If it did,
we had no confidence that it would be the right one, and such a constraint
might have ruled out later work done by Van and others.
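As a concrete illustration (mine, not anything in the early specs), the additive-increase/multiplicative-decrease scheme that Van Jacobson and others later introduced — and that the original TCP specification deliberately left room for — can be sketched roughly like this. Window sizes are in segments, and the loss signal is supplied by the caller:

```python
# A minimal sketch of slow start / AIMD as later standardized for TCP.
# Not the original algorithm from any early RFC; purely illustrative.

def aimd_step(cwnd, ssthresh, loss):
    """Return (cwnd, ssthresh) after one round trip.

    cwnd     -- current congestion window, in segments
    ssthresh -- slow-start threshold, in segments
    loss     -- True if a packet loss was detected this round trip
    """
    if loss:
        # Multiplicative decrease: halve the window on congestion.
        ssthresh = max(cwnd // 2, 2)
        cwnd = ssthresh
    elif cwnd < ssthresh:
        # Slow start: double the window each round trip up to the threshold.
        cwnd *= 2
    else:
        # Congestion avoidance: additive increase of one segment per round trip.
        cwnd += 1
    return cwnd, ssthresh
```

The point of leaving this out of the spec is visible even in the sketch: the probing rates and back-off factor are policy choices, separable from the wire protocol, so two TCPs using different values still interoperate.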

The Internet was based on interoperability, and the continued ability to
communicate in spite of differences between the parties involved.
 Different computers, different networks, different software, different
algorithms, different people, different organizations, different
technologies....    Conformity to core design elements, like the basic
packet formats and the TCP state machine, was needed to achieve
interoperability.   In other design choices, like congestion control
algorithms, user APIs, packetization algorithms, etc., interoperability
could still be attained while permitting diversity.  I think (and thought
at the time) that the ability to have such diversity was very important to
allow new ideas to be introduced and to allow the Internet to evolve as it
grew in operational use and the "lab" of our research work hit the "real
world."

The Internet was after all explicitly a Research project.  The focus was on
getting it to work at all, i.e., getting data to flow, and getting a system
operational so that other people could get involved, try out new ideas, and
collectively figure out what worked and what didn't.  It would be
interesting historically to note when The Internet stopped being a Research
project and became Operational.   Or has that not yet happened?

Re: Did We Recognize It Was Important --  At every ICCB (precursor to IAB)
meeting there was a list on the corner of the whiteboard of ongoing issues
that Vint selected and Jon recorded.  It listed the important things that
needed to be done, but that we didn't know how to do.   Congestion control
was one of them.  Others I recall were Expressway Routing, Multiple-Homing,
and Multi-Path.  There were maybe a dozen in all.   I wonder how many of
them could now be checked off as done.

We knew that such things were going to cause problems and that the "naive
algorithms" would cause trouble.   We also knew that we didn't know "the"
right answer (or even any answer at all) and that experimentation would be
needed.

We also knew that TCP4 had a limited lifetime, and such problems could be
addressed with enhancements in TCP5, TCP6, etc.   We had gone from TCP2 to
TCP3 to TCP4 in a year or two, so TCP5 could be expected within a year,
and hopefully by then the naive algorithms wouldn't have wreaked too much
havoc and could be replaced.   But of course we really got that schedule
wrong...I at least didn't appreciate how making something a Standard would
cause it to set in concrete so quickly and firmly.   I recall one meeting
where we groused to Vint that "we're not done yet!" and TCP wasn't ready to
Standardize.  Of course researchers always say that, and like others we
lost too.

Regarding "best practices" ... there seems to be an implicit assumption
that there is always such a thing as a "best practice" and you just have to
find it.   The Internet as a technology is very complex, and it is used in
a wide range of situations.

The 1980-ish Internet was designed with specific scenarios in mind,
reflecting systems that would actually use TCP.   For example, one scenario
involved military personnel in aircraft or jeeps (packet radio for comms)
interacting with command staff at HQ (land and satellite based comms) as
well as with ships at sea (satellite based comms from unstable in-motion
platforms).   Toss in some electronic countermeasures and the general chaos
of a battlefield situation, all of which cause packet loss, unpredictable
connectivity, and changing traffic patterns.

The 2014-ish Internet seems to now be dominated by email, streaming video,
web browsing, multi-gigabit LANs and WANs, and the other activities of
several billion users, virtually all of whom have computer power exceeding
the aggregate of all the users on the 1980-ish Internet and generating huge
amounts of traffic (my speculation only, but you get the idea).

I'm not convinced that *any* congestion control algorithm is applicable to
such a wide range of environments.   Even in the 1980 timeframe we had
different TCP implementations using different algorithms that their authors
thought were appropriate for the environment in which that implementation
would be used.   The environments of a high-speed (for the time) LAN versus
a lengthy string of terrestrial and satellite networks have very different
characteristic behaviors in terms of delay, packet loss rates, variance,
and other such parameters that are important to something like a congestion
control algorithm.
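One concrete example of how those path characteristics feed into a TCP (my illustration, not part of the early specs): the retransmission-timeout estimator that Jacobson and Karels later developed, eventually specified in RFC 6298, weights delay *variance* as heavily as delay itself. The smoothing constants (1/8, 1/4, K=4) are the RFC's; the rest is a sketch:

```python
# Sketch of the RFC 6298 retransmission-timeout (RTO) estimator, showing why
# a jittery satellite path and a stable LAN with the same mean RTT end up
# with very different timeouts. Illustrative only.

ALPHA, BETA, K = 1 / 8, 1 / 4, 4

def update_rto(srtt, rttvar, sample):
    """Fold one RTT measurement (seconds) into the smoothed estimates.

    Returns (srtt, rttvar, rto). Pass srtt=None for the first measurement.
    """
    if srtt is None:
        # First measurement: seed the estimates per RFC 6298.
        srtt, rttvar = sample, sample / 2
    else:
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
        srtt = (1 - ALPHA) * srtt + ALPHA * sample
    rto = srtt + K * rttvar            # timeout grows with variance
    return srtt, rttvar, max(rto, 1.0) # RFC 6298 floors the RTO at 1 second
```

With a steady 100 ms RTT the timeout sits at the 1-second floor; feed it the swings typical of a long satellite path and the timeout climbs well past the mean RTT — exactly the kind of environment-dependence the text above is describing.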

In some of the later RFCs, the IETF seems to have picked certain algorithms
and declared them to be Required.   So maybe it is possible to nail down
"the" correct algorithm.   I can't tell if today's TCPs in use actually
conform to that Requirement though.  If so, I guess it works - at least the
Internet still seems to work amazingly well, at least from my perspective
now as a User.   But does it work because there's a single best practice
algorithm in universal use?   Or because there's not...?

My $0.02,
/Jack Haverty

On Wed, May 21, 2014 at 6:25 AM, Guy Almes <galmes at tamu.edu> wrote:

> Detlef et al.,
>   The subtlety and difficulty and importance of TCP congestion control
> algorithms are indeed worthy of discussion: more now with our 100-Gb/s
> wide-area networks than in the early days of TCP/IP.
>   But I'd suggest that, for this list, attention be focused on a few
> issues.
> <> Clarity on the degree to which the early TCP RFCs were pretty clear
> about the protocol, but only suggestive about an example congestion control
> algorithm.
> <> Clarity on the degree to which the authors of the early TCP RFCs did
> not recognize the importance of developing very good congestion control
> algorithms.
> <> Clarity on the degree to which the early TCP developers did or did not
> view as of any importance conformity by different TCP implementations of
> the best (or set of almost best) practices in congestion control algorithms.
> <> Clarity on how/when it began to become evident that the naive
> algorithms documented in the TCP RFCs and used in early testing would
> themselves become the source of trouble.
>   Even today, confusion between "TCP the protocol" vs "TCP the set of
> common congestion control algorithms used in practice" persists.  But, for
> this list, I'm interested in the state of clarity vs confusion in these
> matters early on.
>   Regards,
>         -- Guy
> On 5/21/14, 7:37 AM, Detlef Bosau wrote:
>> This does not really answer my original question, I consider asking Van
>> directly, but I see that TCP resembles Swabian "Kässpätzle" (cheesy
>> noodles): everyone has his own recipe, there is no "that one standard,"
>> and the real clues in preparing them aren't written in any textbook.
>> Am 19.05.2014 22:45, schrieb Jack Haverty:
>>> Hi Bob,
>>> That sounds about right.   IIRC, there were a lot of TCP
>>> implementations in various stages of progress, as well as in various
>>> stages of protocol genealogy - 2.5, 3, 4, and many could communicate
>>> with themselves or selected others prior to January 1979.  Jon's
>>> "bakeoff" on the Saturday preceding the January 1979 TCP Meeting at
>>> ISI was the first time a methodical test was done to evaluate the NxN
>>> interoperability of a diverse collection of implementations.
>>> I remember that you were one of the six implementations in that test
>>> session.   We each had been given an office at ISI for the day and
>>> kept at it until everyone could establish a connection with everyone
>>> else and pass data.
>>> There were a lot of issues resolved that day, mostly having to do with
>>> ambiguities in the then-current spec we had all been coding to meet.
>>> As we all finally agreed (or our code agreed) on all the details, Jon
>>> tweaked the spec to reflect what the collected software was now doing.
>>>   So I've always thought that those six implementations were the first
>>> TCP4 implementations to successfully interoperate.  Yours was one of
>>> them.
>>> There was a lot of pressure at the time to get the spec of TCP4 nailed
>>> down and published, and that test session was part of the process.
>>>  Subsequently that TCP4 spec became an RFC, and a DoD Standard, and
>>> The Internet started to grow, and the rest is history....
>>> I wonder if Dave Clark ever forgave Bill Plummer for crashing the
>>> Multics TCP by innocently asking Dave to temporarily disable his
>>> checksumming code....and then sending a kamikaze packet from Tenex.
>>> /Jack
>>> On Mon, May 19, 2014 at 11:43 AM, Bob Braden <braden at meritmail.isi.edu
>>> <mailto:braden at meritmail.isi.edu>> wrote:
>>>     Jack,
>>>     You wrote:
>>>         I wrote a TCP back in the 1979 timeframe - the first one for a
>>>     Unix
>>>         system, running on a PDP-11/40.  It first implemented TCP version
>>>         2.5, and later evolved to version 4.   It was a very basic
>>>         implementation, no "slow start" or any other such niceties
>>>     that were
>>>         created as the Internet grew.
>>>     I have been trying to recall where my TCP/IP for UCLA's IBM 360/91
>>>     ran in this horse race. The best I can tell from IEN 70 and IEN 77
>>>     is that  my TCP-4 version made it between Dec 1978 and Jan 1979,
>>>     although I think I had an initial TP-2.5 version talking to itself
>>>     in mid 1978.
>>>     Bob Braden
>> --
>> ------------------------------------------------------------------
>> Detlef Bosau
>> Galileistraße 30
>> 70565 Stuttgart                            Tel.:   +49 711 5208031
>>                                             mobile: +49 172 6819937
>>                                             skype:     detlef.bosau
>>                                             ICQ:          566129673
>> detlef.bosau at web.de                      http://www.detlef-bosau.de