
[ih] Ping Eduardo A. Suárez (was Re: What is the origin of the root account?)

    > From: Jack Haverty <jack at 3kitty.org>

    > The V6 Unix OS primitives didn't provide any way to do such a "wait for
    > one of many" action .. The only way to tell if there was more data to
    > read was to read it, and if there wasn't, your process was blocked
    > until there was. There was no "non-blocking I/O".
    > ...
    > If you "forked" a child process, it could operate independently but
    > there was no way for the parent and child to communicate, except by the
    > child exiting with a result code. 

Err, not quite true. There were signals.

We used signals in the asynchronous I/O that we added to V6 to run our ring
LAN interfaces. One queued a read or write using the normal read/write
calls, which returned immediately, with the operation still pending. When it
completed, the driver sent the owning process a signal, and the process then
used the stty() and gtty() special calls (theoretically for 'character'
devices only, but hey, you do what you gotta do) to retrieve the transfer
completion information (count, status, etc.). It wasn't pretty, admittedly,
but it worked.

As an example of what you could do with signals, someone at MIT came up with a
multi-player online word game called something like 'Perquakey' (I think it's
based on a real game - I recall it was sort of like a cross between Scrabble
and something else), which had a separate process reading each player's
keyboard - they coordinated (IIRC) with signals. People could join and leave
the game asynchronously. None of it involved any kernel mods, IIRC.

    > It would be fascinating to hear any other experiences from those early
    > TCP (or even NCP) implementations in other OSes. What obstacles did each
    > present and how were they overcome? I've never seen much written about
    > the internal issues and how they were handled within the different early
    > TCP implementations -  .. Multics

I know a bit about the Multics one. Multics of course had very powerful
software structuring tools (e.g. the ability to do a procedure call to code in
'another process'), which made doing TCP 'easy' in some ways.

The Multics code was (IIRC) structured as a daemon process, plus a database
(who had which ports, un-acked transmit packet lists, etc.), which was shared
by way of a group of routines that were notionally part of the daemon, but
which were called directly by applications running in the user's process
(e.g. TELNET, FTP). Initially it all ran unprotected (i.e. in ring 4, with
world RW access), but later on it was moved to a lower ring (I'm not sure
what else they changed) - SUID would have come in useful there!

IIRC Multics' biggest challenge was that a Multics process was a rather
heavy-weight thingy, and waking up two of them in a row, as would be needed
for your typical incoming TCP packet (one to do the demux and the TCP
processing; the other to consume the data), was rather expensive.