
[ih] Ping Eduardo A. Suárez (was Re: What is the origin of the root account?)

I think "networking" is probably more complex than the "forking
process" paradigm can handle.   We (Randy and I) looked at the
"forking" approach back in 1977 and still hit a roadblock.

Although you could implement "full-duplex" behavior, networking seems
to require a bit more -- perhaps called "coordinated full-duplex"
behavior.  By that I mean that the flow in one direction had to be
able to be influenced by the contents of the flow in the other
direction.  No matter what level you were at, your program couldn't
always determine what would happen next, and thus faced the dilemma of
what event to wait for.

The V6 Unix OS primitives didn't provide any way to do such a "wait
for one of many" action, or even to implement a very inefficient
polling scheme.  The only way to tell if there was more data to read
was to read it, and if there wasn't, your process was blocked until
there was.  There was no "non-blocking I/O".  Not so much a problem if
you're waiting for data from your disk, but if that data was coming
from a human at the other side of the country who had just left his
terminal and gone to lunch, you'd be waiting a long time.  If you
"forked" a child process, it could operate independently but there was
no way for the parent and child to communicate, except by the child
exiting with a result code.  Perhaps we could have done something like
forking a separate child process to handle each individual packet,
which would then exit, or something along those lines, but the thought
of so much
context switching seemed too unwieldy.  Also, we tried that elsewhere,
as part of porting the "upper layer" functions of the system we were
building from an existing PDP-10 implementation to the Unix
environment.  One piece was a server-type program that was ported from
Tenex to become a whole bunch of forking and plumbed Unix processes
(by Ray Tomlinson, IIRC).  It worked, but its performance was as
atrocious as my 11 bits/second TCP.  Doing this kind of networking
stuff inside V6 Unix wasn't obvious or easy...

"Coordinated full-duplex" is what you need for a TCP implementation to
receive a packet from its remote counterpart, send the contents to its
user program/process, and also send a packet back in the other
direction, at least with an ACK, and preferably also with any
data queued to be sent in that direction.  One could imagine designing
a networking program (TCP, Telnet, FTP, etc.) into multiple processes,
but there didn't seem to be any good way to do the coordination, given
the V6 OS primitives of the day.  At least we, as Unix neophytes,
didn't see any.

Inside a real-time communications system of the day (e.g., the ARPANET
IMP code), this situation was easily handled by use of
interrupt-driven software.  When there's nothing more to do, go into
your "idle loop" and twiddle your electronic thumbs, with interrupts
enabled on all channels that you expect might give you the next thing
to do, whatever happens next.  Inside the IMP code (circa 1970) there
was a multi-process software structure similar to the Unix
parent/child/fork technique.  One set of code (and interrupts) handled
incoming traffic on a single port, another handled output traffic on
that same port, repeat for each physical port, etc.  They ran whenever
the appropriate hardware interrupt fired, and were all coordinated
through sharing of common memory.  Those handlers also had the ability
to issue a "software interrupt" -- i.e., to make sure that the "other
process" also ran soon (depending on its priority), as if the hardware
interrupt on which it was waiting had triggered.  That plus clock
interrupts created the environment in which you could implement things
like an IMP.  If you look at other such real-time communications
systems (Port Expanders, TIUs, TIPs, Packet Radios, Gateways, etc.)
you'll probably find a similar substrate.

If there had been space inside the PDP11/40 kernel, we would probably
have written TCP as a kernel module, and tweaked the existing
interrupt handlers of the kernel as needed to deal with the network
interface.  Still, that wouldn't have solved the problem for "higher
level" functions that you'd like to implement outside the kernel like
Telnet/FTP/etc.  The AWAIT and CAPAC primitives we added to the
kernel made such user processes feasible, even on the 11/40 processor.
The subsequent 11/70 and Vax
implementations were able to be much more "native" with all that elbow
room that allowed code to be added to the kernel.  Of course, as I
said earlier, we were all Unix neophytes -- so we may have totally
missed something that would have been obvious to a Unix veteran of
the time.

I suspect everyone who implemented TCP in those days faced similar
issues inside their machines.  Networking is really just a variant of
the "distributed multiprocessor" configuration, which was just
beginning to appear in computing in general.   It would be fascinating
to hear any other experiences from those early TCP (or even NCP)
implementations in other OSes.  What obstacles did each present and
how were they overcome?   I've never seen much written about the
internal issues and how they were handled within the different early
TCP implementations -- Tenex, Multics, 360/91, etc.  I believe there
was NCP for Unix systems on the ARPANET, but I don't recall the
timing.  In any event, the "factory stock" V6 code we were given to
build that first TCP from didn't have any NCP code in it.

Thinking about this now, it's possible that Unix had some effect on
Networking, but perhaps Networking had a much more significant effect
on Unix, forcing the addition of primitives (sockets, etc.) needed for
distributed multiprocessing -- i.e., creating that "substrate" for
that kind of software system.  Sounds like a decent PhD topic - "OS
Primitives Needed for Distributed Multiprocessing" - wonder if
anybody's written that in the last 40 years of networking.

/Jack Haverty

On Wed, Apr 17, 2013 at 10:56 PM, Tony Finch <dot at dotat.at> wrote:
> On 18 Apr 2013, at 03:30, Jack Haverty <jack at 3kitty.org> wrote:
>> Randy Rettberg and I, both at BBN, took the TCP/Unix challenge.  We
>> were both Unix neophytes.  After figuring out what we could (Lion's
>> notes were a great help), we still didn't see any clean way to
>> construct the common kinds of network programs inside the Unix
>> environment.  In particular, it didn't seem possible to write a
>> program that could serve a duplex information flow, where you couldn't
>> predict from which direction the next piece of data would come.  I.E.,
>> when the program was ready to go into an idle state and wait for more
>> work to do, you could issue a "read" call to the kernel, specifying a
>> file descriptor, and it would hang until data was available from that
>> "file".  But if you picked the "wrong" fd to wait on for input, your
>> program would wait forever.  How would a "telnet" program, for
>> example, know whether its local human user would type another
>> character next, or its remote partner across the net would send the
>> next character for output to that user terminal.   There may have been
>> a way to do this in Unix of the era, but we neophytes couldn't see it.
>> Networking didn't seem to fit the Unix "concatenation of pipes"
>> paradigm where input flows unidirectionally to output.
> Thanks for the interesting memories.
>
> The bit I quoted above reminded me of a bit of folklore from that era: for
> full-duplex comms, fork a child, and do opposing unidirectional comms in the
> parent and child processes. For example,
> http://minnie.tuhs.org/cgi-bin/utree.pl?file=pdp11v/usr/src/cmd/net/net.c
>
> I can't remember now what was the usual example of a program that did this;
> "tip" perhaps, or is that too recent? I also don't know how much the trick
> was still used as networking became popular.
> Tony.
> --
> f.anthony.n.finch  <dot at dotat.at>  http://dotat.at/