FEBRUARY 1987

INTERNET MONTHLY REPORTS
------------------------

The purpose of these reports is to communicate to the Internet Research Group the accomplishments, milestones reached, or problems discovered by the task forces and contractors in the Internet Research Program. This report is for research use only, and is not for public distribution.

Each task force and contractor is expected to submit a 1/2 page report on the first business day of the month describing the previous month's activities. These reports should be submitted via Internet mail to Westine@ISI.EDU. Reports are requested from BBN, ISI, LL, MIT-LCS, NTA, SRI, UCL, and UDEL. Other groups are invited to report newsworthy events or issues.

BBN LABORATORIES AND BBN COMMUNICATIONS CORPORATION
---------------------------------------------------

WIDEBAND NETWORK

BSAT software Release 2.2 was distributed to the Wideband Network sites this month. The new BSAT software runs under Chrysalis operating system Release 2.3.1 and is built using a new C compiler which generates more efficient code than the previously used compiler. The release also includes some minor operational enhancements.

The Ft. Monmouth Wideband site was brought back up on the channel. The BSAT and Butterfly Gateway are now up and running.

The initial implementation of the BSAT's stream scheduling synchronization software has been completed and debugged in a test network environment. Release of this software will occur early next month after final testing and integration is performed.

VAX NETWORKING

In the month of February, implementation into 4.3bsd Unix of the Inter-Agent protocol for passing group membership information between Multicast Agents was completed. Debugging and testing of the implementation are taking place on systems at BBN and at Stanford University. Multicast Agents are currently running on one machine each at BBN and at Stanford University. Non-Agent 4.3bsd multicast kernels are currently running on several Stanford systems, and on one BBN system. The wider and more diverse use of the 4.3bsd multicast implementation allows for testing and exercising of the more complex parts of the system.

A second release of the 4.3bsd multicast implementation was made available to Eric Cooper at Carnegie-Mellon University. This release includes the inter-agent protocol and numerous bug fixes to the release of 18 December 1986.
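For readers unfamiliar with the host side of this work, the following minimal sketch shows a process joining a host group through the socket option that this line of implementations eventually settled on (IP_ADD_MEMBERSHIP). Whether the 1986/87 release used exactly these names is an assumption, and the group address is hypothetical.

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        /* Datagram socket on which to receive multicast traffic. */
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct ip_mreq mreq;

        /* Hypothetical group address; join on the default interface. */
        mreq.imr_multiaddr.s_addr = inet_addr("224.1.2.3");
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);

        /* Ask the kernel to join the host group; the local Multicast
           Agent learns of the membership and propagates it to other
           agents via the inter-agent protocol described above. */
        if (setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                       &mreq, sizeof(mreq)) < 0)
            perror("IP_ADD_MEMBERSHIP");
        return 0;
    }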
SATNET

The SATNET has been very stable all month. There were no SIMP software crashes and no new SIMP hardware problems. Two sites, Tanum and Fucino, are still not using channel 1, but the adjustments made to channel 0 in January have kept the network healthy. We are awaiting the successful testing of the Linkabit modems to repair channel 1.

We have switched our monitoring system over to a new C70 (HNOC). The SIMPs now report to two hosts. The ability to switch between the two hosts should improve the reliability of network monitoring and control.

Periodically we have seen the links between the SIMPs and Gateways flapping. We are trying to isolate the problem.

ARPANET STATUS

The ARPANET experienced another period of extreme congestion beginning in late January and lasting roughly through mid-February. During this period service across gateways was particularly bad.

Over the past six months, transmission capacity in the ARPANET has been critical, especially on the network's cross-country paths. The congestion experienced in January and February was due primarily to a modest increase in traffic that pushed the ARPANET "over the edge".

Two significant short-term steps were taken to alleviate the congestion. First, parameters in the network's packet-switching nodes were adjusted to increase the stability of routing along the network's cross-country paths. Second, the software of core gateways running EGP was modified to reduce the rate at which gateways transmit messages such as Hello/IHeardYou, Network Reachability messages, traps, etc.

The combined effect of these actions has been to reduce the mean round-trip delay seen in the ARPANET by nearly half (620 ms on 23 February vs. 1170 ms on 5 February), and to increase the overall throughput of the network by about 25%. Interestingly, the average message size in the ARPANET has increased; this may be the result of the network "opening" itself more to multipacket messages. The number of 5-packet messages has doubled, and the number of 8-packet messages has increased by a factor of 2.3.

While these improvements are real, it should be noted that the fixes made were "band-aid" fixes -- and the box of band-aids is very nearly empty! The long-term solution to ARPANET traffic problems is, of course, additional bandwidth. DCA has initiated actions to procure needed trunking capacity. In addition, several ARPANET nodes will soon be upgraded from C/30E's to C/300s, increasing switching capacity at several locations where that has also been a problem. These actions, together with the network expansion that will soon be taking place, should result, in the long term, in improved ARPANET performance. In the short term, however, we may continue to see congestion, especially if traffic continues to increase before additional trunking capacity is installed.

GATEWAYS

The Butterfly Gateways continue to be stable. The Wideband Butterfly Gateway at Ft. Monmouth was reinstalled. Except for a power outage caused when a squirrel got into one of the base's substations and vaporized itself, the site has been stable. There are now 21 Butterfly Gateways installed.

We are seeing some problems with the Ethernet interfaces on a few of the Wideband gateways which cause the interface to freeze. This is being investigated.

We made several changes to the core LSI-11 gateways which run EGP to reduce the rate at which they exchange updown and reachability messages, and made them try harder to keep their EGP connections up. This has reduced traffic (which helps the ARPANET) and has made the overall system more stable.

Bob Hinden

ISI
---

Internet Concepts Project

Visitors to ISI were Col. Alex Lancaster (USMC) to discuss ISI operations, and Mary Stahl and Sue Romano from the NIC to discuss RFC formats and the changeover of network number assignments to the NIC as of 1-Mar-87.

Jon Postel attended the IAB Meeting at SRI, 3-4 Feb 87, and the Congressional FCCSET meeting in San Diego Feb 18. Paul Mockapetris attended the Internet Engineering Task Force meeting in San Jose, CA, Feb 4-6, and the DSAB Naming Task Force meeting at SRI Feb 26.

Two RFCs were published:

   RFC 989: John Linn, IAB Privacy Task Force, "Privacy Enhancement
            for Internet Electronic Mail: Part I: Message Encipherment
            and Authentication Procedures".

   RFC 996: D.L. Mills, "Statistics Server".
Multimedia Conferencing Project

We are investigating the possibility of adapting a commercial video codec for use on the Wideband Net to expand the number of packet video sites. Since the commercial codecs are not very tolerant of lost packets, we need to understand the packet loss rate and distribution for video traffic on the Wideband Net. We have augmented the Packet Video Protocol implementation to allow the detection of missing packets (our experimental packet video system doesn't care about missing packets), so we can now run video for an extended period of time to collect loss information.

Brian Hung has integrated the document scanning and multimedia software into one program. Brian is looking at ways of improving the program, such as reducing the time it takes to reduce the scanned image and display it on the screen. Brian will also start working on adding a text capability to the existing program so that a message can include both bitmap and text media.

Steve Casner and Brian Hung

NSFNET Project

February must have been National Meeting Month. Bob Braden attended the following meetings: the IAB at SRI; one day of the INENG Task Force at NASA Ames; the NSFNET Federation Assembly in San Diego; and the Computer Network Study Workshop of FCCSET, also in San Diego. Finally, Bob presented a discussion of the Internet architectural model and its corruptions at a one-day SURANET technical meeting at the University of Delaware.

Annette DeSchon continued work on a background file transfer server. This server will allow a user to submit a request for a reliable file transfer to take place in the future. A message reporting the results will be returned to a specified mailbox when the transfer has been completed. In addition, she began collecting information on the current topology of the NSFNET. This information will be used to generate maps and host tables in cooperation with the NNSC.

Work continued on updating the gateway specification RFC 985. Opinions were gathered from the Internet technical community on several controversial points in the specifications. For example, a straw-man proposal that gateways not implement ICMP Source Quench led to a significant discussion, to which many contributed. The consensus was that Source Quench, for all its known deficiencies, is better than nothing. Another hotly debated topic concerned Host Redirects vs. Net Redirects. Finally, there was a consensus AGAINST adopting RFC 975 (Autonomous Confederations); this leaves us with a great quandary about what the vendors should do about EGP.

Supercomputer and Workstation Communication Project

We have achieved a data rate of 1 Mb/s of user-host-generated datagram traffic across the Wideband Net. This was done with ICMP echo pings as part of a series of tests to better understand and isolate the various factors that might affect NETBLT protocol operation over the Wideband Net. The 1 Mb/s rate is close to the calculated total bandwidth currently available on the channel. The traffic was bidirectional: 500 Kb/s (packets of 1400 data bytes every 22 ms) going from ISI to the BBN BSAT Echo Host and back to ISI. Two Suns were used, each generating half the traffic, to avoid packet loss in reception of the returned packets.
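As a back-of-the-envelope check of those figures, the following sketch (not part of the tests themselves) recomputes the offered load from the stated packet size and interval:

    #include <stdio.h>

    int main(void)
    {
        double bits_per_packet = 1400 * 8;  /* 1400 data bytes */
        double interval = 0.022;            /* one packet every 22 ms */
        double one_way = bits_per_packet / interval;

        /* One stream per direction; two Suns split the load. */
        printf("per-direction rate:  %.0f kb/s\n", one_way / 1000);
        printf("bidirectional total: %.2f Mb/s\n", 2 * one_way / 1e6);
        return 0;
    }

This yields roughly 509 kb/s per direction, or about 1.02 Mb/s total, matching the reported rates.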
A survey of NSF supercomputer sites continued, focusing on San Diego's CTSS Cray and Pittsburgh's COS Cray systems. Since Pittsburgh's system is batch oriented (with a VAX VMS front end), and is one of the few sites with Internet FTP working (although it's just a user FTP, no server), we should be able to do some interesting experiments where all workstation/supercomputer communication is via file transfer. Also, Pittsburgh's front-end VAX appears to be on the same local network as a Wideband gateway, so we should be able to FTP via satellite.

Experimentation with X-Windows continued. Alan Katz is setting up a standard window environment that will come up when you log in (sort of like suntools), but that can be run from any X-Windows client. He experimented with the various window managers.

Alan Katz received a PhD in Physics from UCLA.

Steve Casner and Alan Katz

MIT-LCS
-------

No report received.

NTA & NDRE
----------

No report received.

ROCKWELL INTERNATIONAL
----------------------

Tactical Internet Multicast Protocols

As reported in December's Internet Monthly, David Young has been designing extensions to the multicast protocols presented in RFCs 966 and 988. His goal has been to increase the protocols' efficiency in limited-bandwidth, multi-hop networks and to support host management of multicast groups and their rapid forming/disbanding - functions envisioned as being required for tactical scenarios.

During the past two months, David designed the detailed changes to the IP service interfaces and the corresponding changes to the procedures by which multicast groups are formed. He documented these changes in a draft RFC and submitted it to the End-to-End Services Task Force for review prior to official release. The key points are summarized as follows (a hypothetical data-structure sketch of the membership-list idea appears after this report):

   1) Add a group membership parameter to the IP service interface
      function calls. This allows a managing process to specify the
      members of the group as it is created and maintained.

   2) Expand the IP to subnetwork service interface to encompass the
      same functionality as the user to IP service interface. This
      reflects the parallel operations performed in the internet and
      the subnetworks during the formation of a multicast group.

   3) Enhance the gateway multicast function to make it recognize the
      group membership list parameter of the IGMP Create Group
      Request. This implies also that, upon receipt of an IGMP Create
      Group Request with a non-empty membership parameter, it would
      send unsolicited IGMP Create Group Replies to one group member
      in every subnetwork outside of the managing host's subnetwork.

   4) Upon creation of a host group in the subnetwork, send an ICMP
      message via IP multicast to inform the IP modules of each host
      group member of their membership in the group. A corresponding
      multicast would occur at the transport layer to activate the
      process group.

   5) Allow captive multicast addresses to be used for multicasting
      in a subnetwork when a multicast agent is not available to
      assign a host group address. A preferable alternative would be
      the following proposal.

   6) Take the host group address assignment function out of the
      gateway and put it in the host. A host creating a host group
      can derive the group address from the socket address of the
      originating process. This can then be mapped into the IP header
      by using the options field to hold the upper part of the
      address that will not fit in the 32-bit destination address.

John Jubin
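To make point 1 above concrete, a hypothetical in-memory layout for a create-group request carrying an explicit membership list might look as follows. This is an illustration only; it is not the encoding of RFC 988 or of David Young's draft.

    #include <netinet/in.h>

    #define MAX_MEMBERS 32  /* arbitrary bound for this sketch */

    /* Hypothetical create-group request passed across the IP service
       interface; the managing process names the members up front so
       that gateways can reply into each member's subnetwork. */
    struct create_group_request {
        unsigned char   private_group;       /* nonzero: restricted group  */
        unsigned short  n_members;           /* entries used in member[]   */
        struct in_addr  member[MAX_MEMBERS]; /* initial host group members */
    };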
SRI
---

No report received.

UCL
---

UCL have now put up Diamond Release 3.0 successfully, and have been exchanging mail (including photos) locally and with the US (BBN and UMich). To do this also required the UCL Domain Nameserver to be mildly overhauled. TTL fields are still giving some trouble.

The UCL Sequential Exchange Protocol has been ported to the MIT PC/IP environment and is undergoing preliminary tests.

The ISODE/Pepy work to build automatic presentation-level code generators is nearly complete. This will eventually be part of the ISODE kit.

Work has started on instrumenting and rate-controlling the BSD TCP code so it can be used as a standard measurement tool.

John Crowcroft

UNIVERSITY OF DELAWARE
----------------------

1. Development continues on the Dissimilar Gateway Protocol (DGP). The database design and transmission model are almost complete. The structure of the routing algorithm is taking shape, as well as the rules for data exchange between systems. A document describing this work has been distributed for preliminary review and will be distributed to the task forces shortly. Meanwhile, development continues on features to support DGP in the fuzzball and Unix 4.3bsd systems. A distributed-simulation package from Columbia is being evaluated as well.

2. Mike Minnich continues gearing up for a heavyweight assault on protocol design and performance issues, in particular TCP retransmission and gateway queueing strategies. Mike is familiar with the work of Van Jacobson, Lixia Zhang and John Nagle and has collected an awesome stash of statistics and simulation packages, as well as my own trove of Internet measurements.

3. The fuzzball software configurations at Linkabit, Ford and U Delaware have finally stabilized, as well as the USECOM Patch Barracks (Stuttgart) MILNET outpost. A new gateway DCN-GW connects the old DCNET swamp and serves as backup for the primary UDel gateway. Just for fun, the UDel fuzzballs are multi-homed with DCNET and ARPANET (port expander) addresses to keep the routing algorithm warm. Transferring update files to Patch turned out to be astonishingly hard, due to poor network performance in general and also because of PSN interface bugs (also observed elsewhere).

4. The addition of Patch to the Network Time Protocol (NTP) peer group now extends the span of synchronized clocks to Europe. However, preliminary performance evaluation indicates the phase jitter on the European MILNET segment often exceeds the capability of the synchronizing algorithm to deliver continuous time, so that frequent clock resets occur. The algorithm is being studied with the intent of providing adaptive parameters responsive to observed path characteristics.

Dave Mills
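For illustration only, here is a minimal sketch of one generic way to damp phase jitter: slew gradually on small offsets and step (reset) the clock only on a sustained large offset. This is not the algorithm under study, and the constants are hypothetical.

    #include <math.h>

    #define GAIN        0.125  /* fraction of each offset applied (hypothetical) */
    #define STEP_LIMIT  0.128  /* seconds; larger offsets are reset candidates   */
    #define STEP_COUNT  4      /* consecutive large offsets before stepping      */

    /* Process one measured clock offset (seconds); returns the
       adjustment to apply to the local clock. */
    double adjust(double offset)
    {
        static int large = 0;

        if (fabs(offset) > STEP_LIMIT) {
            if (++large >= STEP_COUNT)
                return offset;    /* sustained error: step (reset) the clock */
            return 0.0;           /* probably path jitter: ignore for now */
        }
        large = 0;
        return GAIN * offset;     /* small offset: slew toward true time */
    }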
NSF NETWORKING
--------------

The NSFnet is an internet originally designed to provide access to NSF-funded supercomputers. Prospective supercomputer sites initially proposed their own consortium networks. These access networks were later augmented by a nationwide backbone which interconnects the supercomputer sites. Additional proposals were received and funded by the NSF to enrich the NSFnet infrastructure by building new regional networks and attaching them to the backbone, and by helping existing networks to attach.

The NSFnet backbone sites are located at Cornell University, the University of Illinois (Urbana Campus), the John von Neumann Center in Princeton, the National Center for Atmospheric Research, the Pittsburgh Supercomputing Center and the San Diego Supercomputing Center.

Regional/consortium networks currently existing or being implemented are the Bay Area Regional Research network (BARRnet), the consortium networks of the John von Neumann Center and the San Diego Supercomputer Center, the network of the Pittsburgh Supercomputer Academic Affiliates (PSCAAnet), WESTnet, NORTHWESTnet, NYSERnet in New York State, MIDnet in the Midwest, SESQUInet in the Houston area, SURAnet in the Southeast, and the University SAtellite Network pilot project (USAN). Pre-existing networks being connected include the Merit Computer Network in Michigan, CSNET, and the ARPANET - which is being augmented to include a number of NSF-specified sites.

Active collaboration and coordination between DARPA and NSF is carried out under the terms of a Memorandum of Understanding signed by the Director of each agency. The NSFnet already has a richly connected infrastructure, building upon and partly using DARPA-developed protocols and systems; DARPA's pioneering network research has played an important role in the implementation of the NSFnet, and many participants in the DARPA-funded efforts are also active in helping the ambitious NSFnet program to succeed. Today the NSFnet is tightly coupled to and reachable by the Internet, of which it is a major component.

The NSFnet backbone now contains two transcontinental links; although they currently run at 56 kb/s, there are plans to upgrade them to T1 by the end of the year. It is anticipated that there will be a permanent NSFnet Operations and Management Center later this year. Until then Ed Krol of the UIUC serves as the acting director of the NSFnet backbone, and Cornell University runs the interim O&M center. BBN is running the NSFnet Network Service Center, which is an NSFnet equivalent to the ARPANET NIC at SRI. Several other people are involved in keeping the NSFnet running.
However, the following is a first list of people who are responsible for parts of the overall NSFnet:

NSFnet Backbone sites:

   Alison Brown       Cornell  (607)255-8686  alison@tcgould.tn.cornell.edu
   Scott Brim         Cornell  (607)255-9392  swb@devvax.tn.cornell.edu
   Ed Krol            Illinois (217)333-1637  krol@uxc.cso.uiuc.edu
   Charlie Catlett    Illinois (217)333-1637  catlett@uxc.cso.uiuc.edu
   Brian Gould        JVNC     (609)520-2000  gould@jvnca.csc.org
   Sergio Heker       JVNC     (609)520-2000  heker@jvnca.csc.org
   Joe Choy           NCAR     (303)497-1222  choy@scdsw1.ucar.edu
   Don Morris         NCAR     (303)497-1282  morris@scdsw1.ucar.edu
   Mike Levine        PSC      (412)268-4960  levine@cpwpsca.bitnet
   Jim Ellis          PSC      (412)268-6362  ellis@morgul.psc.edu
   Fred McClain       SDSC     (619)534-5045  mcclain@sdsc-sds.arpa
   Paul Love          SDSC     (619)534-5043  loveep@sdsc-sds.arpa

NSFnet backbone support:

   Scott Brim         Cornell  (607)255-9362  swb@devvax.tn.cornell.edu
   Craig Callinan     Cornell  (607)255-5060  craig@devvax.tn.cornell.edu
   Dave Mills         UDEL     (302)451-8247  mills@udel.edu
   Mike Petry         UMD      (301)454-2946  petry@trantor.umd.edu
   Hans-Werner Braun  UMich    (313)763-4897  hwb@mcr.umich.edu

NSF supported networks:

   Bill Yundt         BARRnet  (415)723-3909  gd.why@forsythe.stanford.edu
   Tom Ferrin         BARRnet                 tcf@cgl.ucsf.edu
   Dick Edmiston      CSNET    (617)497-2777  edmiston@sh.cs.net
   Dennis Perry       DARPA    (202)694-4002  perry@vax.darpa.mil
   Ed Krol            Illinois (217)333-1637  krol@uxc.cso.uiuc.edu
   Charlie Catlett    Illinois (217)333-1637  catlett@uxc.cso.uiuc.edu
   Brian Gould        JVNC     (609)520-2000  gould@jvnca.csc.org
   Sergio Heker       JVNC     (609)520-2000  heker@jvnca.csc.org
   Eric Aupperle      MERIT    (313)764-9423  eric_aupperle@um.cc.umich.edu
   Hans-Werner Braun  MERIT    (313)763-4897  hwb@mcr.umich.edu
   Hellmut Golde      NORTHWESTnet (206)543-0070  golde@cs.washington.edu
   John Sobolewski    NORTHWESTnet (206)543-5970  83486@UWACDC.BITNET
   Richard Mandelbaum NYSERnet (716)275-2916  rma@rochester.arpa
   Bill Schrader      NYSERnet (607)255-8686  cu2@cornellc.bitnet
   Marty Schoffstall  NYSERnet (518)271-2654  schoff@nic.nyser.net
   Mark Meyer         MIDnet   (402)472-5108  mark@unlcdc3.bitnet
   Doug Gale          MIDnet   (402)472-5108  doug@unlcdc3.bitnet
   Mike Levine        PSCnet   (412)268-4960  levine@cpwpsca.bitnet
   Jim Ellis          PSCnet   (412)268-6362  ellis@morgul.psc.edu
   Fred McClain       SDSCnet  (619)534-5045  mcclain@sdsc-sds.arpa
   Paul Love          SDSCnet  (619)534-5043  loveep@sdsc-sds.arpa
   Guy Almes          SESQUInet (713)527-4834 almes@rice.edu
   Farrell Gerbode    SESQUInet               farrell@rice.edu
   Jack Hahn          SURAnet  (301)454-6030  @umd2.umd.edu:hahn@umdc.umd.edu
   Glenn Ricart       SURAnet  (301)454-4323  glenn@umd5.umd.edu
   Joe Choy           USAN     (303)497-1222  choy@scdsw1.ucar.edu
   Don Morris         USAN     (303)497-1282  morris@scdsw1.ucar.edu
   Pat Burns          WESTnet  (303)491-7709  pburns@csugreen.bitnet
   Dick Jonsen        WICHE    (303)497-0200

General NSFnet responsibilities:

   Bob Braden         ISI      (213)822-1511  braden@isi.edu
   Jon Postel         ISI      (213)822-1511  postel@isi.edu
   Bernie O'Lear      NCAR     (303)497-1205  olear@scdsw1.ucar.edu
   Steve Wolff        NSF      (202)357-9717  steve@note.nsf.gov
   Dan VanBelleghem   NSF      (202)357-9717  dvanbell@note.nsf.gov
   Stan Ruttenberg    UCAR     (303)497-8998  stan@sh.cs.net
   Dave Farber        UDEL     (302)451-1163  farber@udel.edu
   Dave Mills         UDEL     (302)451-8247  mills@udel.edu
   Ed Krol            Illinois (217)333-1637  krol@uxc.cso.uiuc.edu
   Hans-Werner Braun  UMich    (313)763-4897  hwb@mcr.umich.edu

The NSFnet community intends to provide status reports to keep the community informed about ongoing progress and projects that are being worked on. This is the first version of these reports; because it is the first, the individual reports differ somewhat in scope.
It turned out that one of the reports, the JVNCnet report, included much more information than the others. This led us to the idea of featuring a particular network more extensively in upcoming monthly reports. For next month Paul Love of the San Diego Supercomputing Center has offered to feature SDSCnet.

The individual reports from the individual sites follow.

NNSC (NSFnet Network Services Center)
-------------------------------------

The NSF Network Service Center (NNSC) provides general information regarding the current state of NSFNET, including the NSF-supported component networks and supercomputer centers. The NNSC, located at BBN Laboratories Inc., is an NSF-sponsored project of the University Corporation for Atmospheric Research.

The NNSC has information and documents available on-line and in printed form. It will distribute current news through network mailing lists, bulletins, newsletters, and on-line reports. It maintains a database of contact points and sources of additional information about the NSFNET component networks and supercomputer centers.

As a central information service, the NNSC is available as an initial contact point for questions about using NSFNET when prospective users do not know whom to call. The NNSC will answer general questions. For detailed information relating to specific components of NSFNET, the NNSC will help users find the appropriate source for further assistance. The NNSC will encourage development and identification of local campus network technical support to better serve NSFNET users in the future. In addition, the NNSC will help prospective remote users of the NSF Supercomputer Centers use NSFNET to access those centers.

Users may reach the NNSC by calling the hotline at 617-497-3400 or by electronic mail at nnsc@nnsc.nsf.net.

Karen Roubicek, NNSC User Liaison, roubicek@SH.CS.NET

NSFnet backbone sites
=====================

The interim NSFnet backbone consists of seven links, tying together six IP gateways. These gateways are all LSI 11/73 systems running Fuzzball software. These Fuzzball gateways will be phased out when the more permanent NSFnet T1 backbone is installed and declared to work reliably. The Fuzzball systems were originally chosen as they were readily available dedicated IP switches supporting a routing protocol which can handle complicated topologies.

Cornell University, Theory Center
---------------------------------

Besides running a national supercomputer center, the Cornell Theory Center is responsible for network operations for NSFNet and for backup operations for NYSERNet (the New York State regional network).

Current activities: Software is in place so that personnel are notified whenever an NSFNet link fails. We currently report on all traffic and error conditions through the fuzzballs (reports are available by anonymous FTP to tcgould.tn.cornell.edu, in directory "nsfnet_traffic"). Starting at the beginning of March, statistics will be available for the SURANet gateway. We have also begun work on software to analyze output from the gatedaemon, to look for excessive route flapping, patterns in round-trip delay variations, and so forth. Reports on NYSERNet are not yet available.

See the appendix for a report on "gated".
Scott Brim, swb@devvax.tn.cornell.edu

University of Illinois, Urbana Campus
-------------------------------------
(NCSA, National Center for Supercomputer Applications)
------------------------------------------------------

Charlie Catlett wrote, and is still willing to distribute, NETSPY, which allows fuzzball sites to query the fuzzballs without a telnet session startup. He is also working on enhancing the MIT/CMU PCIP-based pinger (MONITOR) to act more rationally on large nets, have a better operator interface, and send alarms on performance as well as reachability criteria. Documentation is available by anonymous FTP from uiucuxc.cso.uiuc.edu (directory NSF, file readme).

The TCP/IP project for Cray CTSS is nearing completion. The Hyperchannel drivers on both ends (CTSS and BSD 4.3) are talking, so the implementation can finally be checked for compatibility with other Telnet/FTP/TCP/IP implementations.

Illinois Bell has notified us in writing that they will supply us with 56kb digital service in mid-May. We are pressuring them to replace at least the CMU line with the new service in early March - and they are making an effort to do so!

We have finally received our new loads for our campus network Proteon gateways, so the conversion to our class B address should be coming within the next month. (We made the mistake of ordering our 7.2 loads with the Decnet option, and the delivery was delayed until we said we would take it without Decnet.)

We have ordered a Vita-Link earthstation for Indiana University with a delivery in March. At that time we will drop off USAN and turn into a hub of our own. This was planned from the beginning of the project.

Ed Krol, krol@uxc.cso.uiuc.edu

John Von Neumann Supercomputing Center, Princeton
-------------------------------------------------

General Description:

JVNCnet is a high-speed packet-switched network that connects a central location, the John von Neumann Center (JVNC) at Princeton, NJ, with a number of universities (see attached table). At this time all the links are up and the network is fully operational. The purpose of the network is twofold: first, to enable researchers at the universities to access the supercomputer at JVNC; second, to be part of a larger national network, the NSFnet, which is formed with the other supercomputer centers sponsored by the National Science Foundation.

Configuration:

The JVNCnet network uses a "star" topology, i.e., all of the communications circuits converge at a single point, JVNC. Two campuses, Harvard and Brown, get access to JVNC through MIT; and Stevens and UMDNJ get access through NJIT. The network's special features are:

   - access to the JVNC supercomputer.
   - very high bandwidth: mostly T1 (1.544 Mbps), in a few cases
     56 kbps, and two satellite links (see table).
   - access to large computer networks on the different campuses,
     plus the NSFnet, and the ARPANET.

The LAN is formed by two ethernet cables connecting all our packet-switching machines, our front ends, terminal servers, and graphics workstations. The front-end systems are connected to the supercomputer via a 50 Mbps "loosely coupled network". The two ethernet cables, as well as all of the point-to-point links, are subnets of the JVNC "class B" number (JVNC-NET, 128.121). The "communication" ethernet is "jvnc-ether" (also 128.121.50.*, where "*" means any number). The other ethernet, "jvnc-admin", is mostly used by terminal servers and two of the front ends (network number 128.121.51).
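A minimal sketch of the subnet arithmetic implied above, assuming an 8-bit subnet field within the class B number (the host address is hypothetical):

    #include <stdio.h>

    int main(void)
    {
        /* A hypothetical host on the "communications" ethernet. */
        unsigned long host = (128UL << 24) | (121UL << 16) | (50UL << 8) | 12;
        unsigned long mask = 0xFFFFFF00UL;  /* class B + 8-bit subnet field */
        unsigned long subnet = host & mask;

        /* Prints "subnet 128.121.50.0": the third octet selects the
           subnet (50 = jvnc-ether, 51 = jvnc-admin, etc.). */
        printf("subnet %lu.%lu.%lu.%lu\n",
               (subnet >> 24) & 0xFF, (subnet >> 16) & 0xFF,
               (subnet >> 8) & 0xFF, subnet & 0xFF);
        return 0;
    }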
The LAN also has online a few SUN workstations for use in high-resolution graphics. The APPLETALK network and the PC-NET network are also integrated into the JVNCnet network to provide computer access for the JVNC managers and staff.

The links between JVNC and the remote campuses are point-to-point links. The interface on the VAXes is the DMR11. The T1 lines use high-speed AVANTI multiplexers with CSU/DSUs to interface to the DDS high-speed link. AT&T is the major carrier for both the T1 lines and the 56k lines. The packet switching is performed by VAX750s and VAX8600s running ULTRIX, plus IP routers in alpha test. At JVNC one VAX8600 ("JVNCA") handles the load of 5 high-speed T1 circuits and two ethernets. One VAX750 ("JVNCB") is connected to 1 high-speed T1 circuit, two 56 kbps circuits, and one ethernet. One VAX750 ("JVNCE") is connected to 1 high-speed T1 circuit and one ethernet.

The satellite service connects JVNC with the University of Arizona and the University of Colorado. This is accomplished by using VITALINK bridges which "extend" the ethernet all the way to these remote locations.

At MIT's gateway to JVNC ("COVENTRY"), the machine handles 3 T1 circuits - one to JVNC, one to HARVARD, and one to BROWN - and also the access to their campus via the ethernet.

An IP router connects a 56 kbps line from NJIT to our subnet 50. Similar routers at NJIT connect to Stevens Tech and UMDNJ. All the lines are 56 kbps for these three New Jersey institutions.

JVNC also has a MICRO-PDP-11 ("FUZZBALL") running specialized software that serves as the packet-switched node to the NSFnet.

Protocols:

Like the ARPANET and NSFnet, JVNCnet uses the TCP/IP protocols. They are widely used by the universities, and they are supported by the ULTRIX and BSD operating systems, as well as on other OS's by many vendors. Other services include TELNET (virtual terminal connection), FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), FINGER, WHOIS, etc.

Mail:

- SMTP (ARPA mail): One of the machines, "jvnca", is our major mailer for "ARPA" mail, and it serves all the other UNIX-based machines on our network. It also serves as the mail "distributor" at JVNC for mail that is not delivered directly to the front ends. Our ARPA mail address is "username@jvnc.csc.org".

- BITNET: One of the front ends, JVNCC, has direct access to BITNET. We get our BITNET access through Princeton. In order to do that we use 9600 bps of extra bandwidth out of our T1 line to Princeton, and we feed that directly into JVNCC. Our BITNET address is "username@jvnc". Currently the BITNET node is "jvncc", so the address at this point is "username@jvncc".

Access to Other Networks:

NSFnet: The JVNC is one of the six backbone components of the NSFnet. Thus our access to the NSFnet is a direct one, since the "fuzzball" is on our major "communications" ethernet. We have a 56 kbps line to the Cornell Fuzzball and another 56 kbps line to the CMU Fuzzball. At the present time JVNC is contributing to the NSFnet traffic with approximately 20 different networks being advertised to the NSFnet backbone from JVNC.

ARPANET: We will be able to reach the ARPANET via 11 out of the 13 members of the JVNC consortium. At this point we are being announced to the ARPANET core machines by three IMPs: one at the CMU Center, one at the University of Pennsylvania, and one at Columbia University.
The last two are members of the JVNC Consortium (the line to U. of Penn is a T1 line and the line to Columbia is a 56 kbps line). We are also expecting our own IMP to be in place at the end of March.

Routing:

Routing at JVNC is performed by the GATED routing program developed at Cornell University; the routes at the main gateways of JVNC (JVNCA, JVNCB and JVNCE) are maintained by GATED. This program uses RIP to exchange routing information between the JVNC hosts and gateways, as well as with the "remote gateways"; the program is also capable of exchanging EGP reachability information, and it talks and listens to HELLO packets (the NSFnet routing protocol).

Most of the "remote gateways" of the Consortium are already running "gated", and their access to JVNC, JVNCnet and the rest of the NSFnet is integrated with their campus networks. A few of them are still in the process of integrating their campus networks. We are providing all the support that they need so they can have a smooth path to full access to the national networks.

Due to this kind of routing mechanism, part of the ARPANET traffic that is currently going from some of our consortium members to other consortium members traverses JVNCnet lines instead of taking the ARPANET route. At the same time there are multiple ways to access JVNC via the ARPANET, as well as the NSFnet. This provides a backup service for our "star" configured architecture, allowing maximum reachability from the Consortium sites to JVNC.

Dial-In Access:

JVNC offers dial-in access via 32 300/1200/2400 bps MNP error-correcting modems. The modems are hard-wired to a terminal server that is connected to subnet 50 (jvnc-ether). From this terminal server, the users can access any of our front ends (JVNCC, JVNCD, and JVNCF). We also provide access to/from the TYMNET network for some users that cannot reach JVNC otherwise.

Network Monitoring:

Network monitoring is performed from JVNCA automatically by sending "echo packets" addressed to the different gateways and hosts of the JVNCnet network. This information is used to compute statistics based on the percentage of returned packets as well as the round-trip delay. The program performs the following functions:

   (1) collects the statistics of the network,
   (2) alerts the operator and the communications person when one of
       the gateways/hosts/supercomputer doesn't respond,
   (3) creates daily, weekly, and monthly reports of the status of
       the network, including:
       a. the number of times every node goes down,
       b. the mean time to recover,
       c. the mean time between failures,
       d. the maximum time to recover,
       e. the percentage of utilization of (or access to) every
          gateway, host, and the supercomputer,
   (4) records when a point-to-point link is not performing at its
       optimum level,
   (5) drives a color graphics display of the network status in our
       main area at the JVNC.

The scheduling of maintenance for the gateways and hosts that make up the JVNCnet network is done with the help of a database-like program that the operators run. This information can be accessed by anyone that can get to the JVNCnet network, by using "finger" (the command is: finger schedule@jvnca.csc.org).

The network monitoring and scheduling is done 24 hours a day, seven days a week. The computer operators are trained to attend the network. There are communications people on call 24 hours a day, seven days a week to keep the system available.
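A minimal sketch of the measurement at the heart of such a monitor: probe each host with echo packets, then report the percentage returned and the mean round-trip delay. The icmp_echo() helper is assumed rather than shown (a full raw-socket ICMP implementation is beyond this sketch), the constants are hypothetical, and this is not the JVNC program itself.

    #include <stdio.h>

    #define PROBES 10

    /* Assumed helper: sends one ICMP echo to `host` and returns the
       round-trip time in milliseconds, or -1.0 on timeout. */
    extern double icmp_echo(const char *host, double timeout_ms);

    void probe(const char *host)
    {
        int got = 0;
        double total = 0.0, rtt;

        for (int i = 0; i < PROBES; i++) {
            rtt = icmp_echo(host, 3000.0);
            if (rtt >= 0.0) { got++; total += rtt; }
        }
        printf("%s: %d%% returned", host, got * 100 / PROBES);
        if (got > 0)
            printf(", mean rtt %.1f ms", total / got);
        printf("\n");  /* a real monitor would alarm on low return rates */
    }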
Special efforts are underway to improve our monitoring system, based on past experience and immediate needs. Special procedures have been written for the operators so they can monitor the JVNCnet and the NSFnet, and also initiate troubleshooting and determination of problems.

Name Servers:

JVNC provides the means to resolve the JVNC host names for the domain "CSC.ORG" for the entire Internet community that utilizes "name servers" (NS). At the same time, JVNC has sponsored the few Consortium members that were not running NS's to start doing so. At this time, with the exception of one campus, ALL the Consortium members are running NS's. Some of these campuses' NS's are backups for other campuses' NS's belonging to our Consortium. Due to our natural communication means, the JVNC provides the ideal path for those backup servers to update their data over the JVNCnet network.

Currently, the JVNCnet network, with its domain "CSC.ORG", is served by a name server running on JVNCA.CSC.ORG (also known as JVNCA). This is our primary name server and the authority for the domain CSC.ORG. The network also has backups, or secondary NS's: one at MIT (BITSY.MIT.EDU), and the other at Penn State (PSUGATE).

JVNCA is a backup for Princeton University's domain "PRINCETON.EDU". The Princeton domain is served by a primary name server running at PUSUPER, with JVNCA as its backup. Note that PSUGATE, as well as PUSUPER, are both VAX750s that are 1 hop away from JVNC and form part of the JVNCnet network. The links between them and JVNC are provided by high-speed T1 lines (1.544 Mbps). Reachability is provided to BITSY.MIT.EDU via JVNCnet, NSFnet and ARPANET, as it is to PSUGATE, PUSUPER and JVNCA.

NSFnet Support:

As an interested member of the NSFnet we are providing NSFnet external operations as a backup for Cornell and also after hours. This kind of support is part of the cooperation that exists between the NSFnet sites, and it is the only way to keep the network operational.
THE JVNCnet NETWORK
===================

The two ethernets, 128.121.50 ("jvnc-ether") and 128.121.51 ("jvnc-admin"), tie the following equipment and circuits together. Note that JVNCA is connected to both ethernets and to 5 T1 lines.

   JVNCA ----- T1 to RUTGERS
               T1 to U.OF.PEN
               T1 to MIT, which connects onward via T1 to HARVARD
                  and via T1 to BROWN
               T1 to IAS
               T1 to PRINCETON

   JVNCB ----- 56k to U.OF.ROCHESTER
               56k to COLUMBIA
               T1 to NYU

   JVNCE ----- T1 to PENN.STATE

   BRIDGE ---- 56k satellite to U.OF.ARIZONA
               56k satellite to U.OF.COLORADO

   UB-ROUTER - 56k to NJIT, which connects onward via 56k to
               STEVENS INSTITUTE and via 56k to UMDNJ

   FUZZBALL -- 56k to CORNELL (NSFnet)
               56k to CMU

   dial-in terminal servers (and TYMNET access)

   JVNCC, JVNCD, and JVNCF front ends, connected over the CI-bus
   "loosely coupled network" to CYBER205.1 and CYBER205.2

   graphics machines

   staff terminal servers

   JVNCA: vax8600 running ULTRIX1.2
   JVNCB: vax750  running ULTRIX1.2
   JVNCE: vax750  running ULTRIX1.2
   JVNCC: vax8600 running VMS
   JVNCD: vax8600 running VMS
   JVNCF: vax8600 running ULTRIX1.2

JVNC Consortium
===============

   node                 access to ARPANET   type of line
   ----                 -----------------   ------------
   U. of Pennsylvania   yes                 T1
   MIT                  yes                 T1
   Harvard U.           yes                 T1
   Brown U.             no                  T1 *
   IAS                  no                  T1 *
   Princeton U.         no                  T1 *
   Rutgers U.           yes                 T1
   Penn State           no                  T1 *
   NYU                  yes                 T1
   NJIT                 no                  56 kbps *
   Stevens              no                  56 kbps *
   UMDNJ                no                  56 kbps *
   Columbia U.          yes                 56 kbps
   U. of Rochester      yes                 56 kbps
   U. of Arizona        no                  56 kbps satellite
   U. of Colorado       no                  56 kbps satellite

   where T1 = 1.544 Mbps, and * = access via JVNC.

Sergio Heker, heker@jvnca.csc.org

National Center for Atmospheric Research
----------------------------------------
(and University SAtellite Network project)
------------------------------------------

All USAN sites except Woods Hole are up and running. The cutover from 128.117 to 128.116 for the USAN sites is scheduled for the week of March 9, which means either Mon or Fri of that week because of the USAN meeting. The Proteon p4200 gateway between 128.116 and 117 is up and has been tested. There is also a CISCO gateway to the Univ. of Colorado. The essential configuration will be as shown:

    fuzzball          USAN      CISCO         TRANSLAN
    (satellite        sites     (to U Colo)   (to 30th st)
     to JVNC)
        |               |           |             |
    --------------------------------------------------- 128.116
                            |
                         PROTEON
                            |
    --------------------------------------------------- 128.117
          various NCAR divisions connected by Bridges

A severe snowstorm on the evening of 18 Feb and the morning of 19 Feb in Boulder affected USAN performance, causing periods of total outage.

Don Morris, morris@scdsw1.ucar.edu

Pittsburgh Supercomputer Consortium
-----------------------------------

Despite line problems, the PSC Fuzzball has been running well. The machine will be rebooted with the latest version of software on February 20.

We are scheduling retermination of the NSFnet leased lines to our local offices, where we will have quicker and easier access to both the Fuzzball and PSC-Gateway. We are trying to schedule this for early March.
Shortly after this we will transfer the PSC machines from the CMU network (128.2) to our own net (128.182).

We have been testing Proteon P4200 gateways with 56 Kbps and T1 interfaces for use with our PSCnet. We hope to order the lines for PSCnet shortly.

The psc-gw seems to be in much better shape in recent weeks. BBN has adjusted the flow control parameters on our PSN, and if they continue to work well here, then DARPA will request that this fix be applied at other X.25 sites. It appears that the design of the 5250 causes its performance to degrade faster than the 1822 interfaces in the face of heavy load at the PSN. ACC plans to have new ROMs to address this problem by early April.

Jim Ellis, ellis@morgul.psc.edu

San Diego Supercomputing Center
-------------------------------

The San Diego Supercomputer Center (SDSC) is one of five such centers initiated by the National Science Foundation (NSF) in 1985. The primary research tool at the center is a CRAY X-MP/48 supercomputer. An SCS-40 minisupercomputer has just come online for research use; both run the interactive Cray Time Sharing System (CTSS) operating system developed at the Lawrence Livermore National Laboratory. In addition to these two systems, DEC and IBM equipment play a major role for communications and file systems. Researchers from more than 100 institutions scattered from Maryland to Hawaii are currently allocated time on one or more of the systems.

The San Diego center has attempted to provide as many types of communication links as its researchers find most suitable. To that end, it has joined several existing networks. A brief summary is set out below; a much more detailed picture will be in next month's report.

SDSC is a backbone node of NSFnet. The center has NSFnet direct links to the supercomputer center at the University of Illinois and NSF's National Center for Atmospheric Research at Boulder, both at 56 kbits/s. A line will soon be in place to NASA's center at Ames, providing a link between NSFnet and the NASA Science Internet. Connection via telnet to the SDSC Cray has been available for the past several months; FTP service will be inaugurated in late March 1987. During 1987, SDSC plans to extend IP services out from the center to the members of its consortium.

Members of the SDSC consortium and some industrial users belong to SDSCnet, a network modeled on the supercomputer network of the National Magnetic Fusion Energy Computer Center (MFEnet). SDSCnet nodes (VAX/VMS systems) access the supercomputers primarily via 56 kbit/s terrestrial or satellite links. The nodes themselves are often on university LANs and/or regional nets, with the usual benefits such connections provide. As SDSCnet and MFEnet are connected, SDSCnet users can access MFEnet sites, including the supercomputers at Livermore and at Florida State University. Conversely, MFEnet users can access the SDSC systems.

Since several sites with allocations of time at SDSC use DECnet as their primary network, SDSC has implemented software to support their access to the center's systems. Additionally, the center has joined NASA's SPAN to enlarge the number of users with a connection to the center and to provide researchers at the center with access to SPAN's resources. Due to the interconnections of SPAN and DOE's HEP network, users of that network can also connect to SDSC. Finally, SDSC is connected to BITnet and TYMnet. Dial-in telephone lines are also available.
Paul Love, loveep@sdsc.arpa

NSFnet regional, affiliated or consortia networks
=================================================

Bay Area Regional Research network (BARRNET)
--------------------------------------------

Two of five T-1 links for the BARRNET backbone have been installed by the telco (PacBell), and a third is nearing completion. Line four will be on UC-owned microwave from UCSF to UC Berkeley, which is available now. The fifth link will be third-party-supplied microwave from UC Berkeley to UC Davis, with installation scheduled for March. Lawrence Livermore Labs and the Stanford Linear Accelerator Center have expressed interest in links into BARRNET; the Univ of Nevada at Reno also expressed interest earlier.

Proteon routers with Ethernet and full T-1 interfaces (SBE boards) have been bench tested at Stanford with Avanti Accupak 1.5 data formatters (DSX-1 interface w/o CSUs). It appears that with the CSU (Avanti's internal "ISU") the effective throughput will be limited to 1.344 Mb/s by the Avanti, because it uses straight bit-stuffing to meet the telco connect requirements. As a result, we have started testing Verilink units, which use an algorithm for maintaining the telco T-1 bit density requirements that appears to avoid the loss of the 192 kbps. We will also test Phoenix Microsystems units, which purport the same in their reformatter/CSUs and have a better physical configuration (8 packs, formatter or CSU, to a 19" rack) and a built-in BERT in the CSU. We have not attempted any throughput testing yet.

The first link to be activated will be NASA Ames to Stanford; the second and third planned are UCB to UCSF and UCB to Stanford. The first should be on the air next week, the second and third by the first week of March.

Bill Yundt, gd.why@forsythe.stanford.edu

Illinois network
----------------

(refer to the UIUC backbone report)

John von Neumann Supercomputing Center network (JVNCNET)
--------------------------------------------------------

(refer to the JVNC backbone report)

Merit Computer Network
----------------------

The Merit Computer Network began in the early seventies as an interuniversity network for the state of Michigan. Merit received NSF funding for its initial implementation, but is now funded by the affiliated universities. However, Merit recently received some NSF funding to implement the TCP/IP protocol suite so that the Merit universities can access the NSFnet.

At the time of this writing Merit has 189 dedicated packet-switching nodes, of which 26 are Primary Communications Processors (PCP) and the other 163 are Secondary Communications Processors (SCP). Twenty-six X.25 links are attached to the network, as well as 6168 asynchronous ports. Merit also connects to several mainframes via channel interfaces. One of Merit's SCPs is in Washington DC, connected via a dedicated link, and another is an experimental node at NCAR, connected via the USAN satellite network. Merit is connected via two links to the Telenet public data network and via a single link to ADP Autonet.

As part of the NSF-funded IP development effort, Merit packet-switching nodes, which run a locally written data communications operating system, are now able to switch IP datagrams throughout the network. We anticipate this to work in conjunction with channel-attached mainframes soon. At this point in time we can attach Ethernets to the Merit nodes, as well as asynchronously attached PCs (running, e.g., the asynchronous version of the MIT PC/IP code).
We anticipate a SLIP implementation for the asynchronous lines to be working soon.

A detailed description of the Merit Computer Network can be obtained by sending a message which includes a USMail address to Merit_Computer_Network@UM.CC.UMICH.EDU. This electronic mail address can also be used to reach user consultants in case of any questions. An alternative way to contact the Merit Computer Network is by calling (313) 764-9423.

Hans-Werner Braun, hwb@mcr.umich.edu

NORTHWESTnet
------------

NorthWestNet is not active yet; negotiations with NSF are still in progress.

Hellmut Golde, golde@cs.washington.edu

NYSERnet
--------

As of 20 Feb 1987, NYSERNet has the following topology with 56 kbit links. T1's are also in place for the topology below but have not been cut over to the switching gear.

    Rochester--------Cornell---------RPI
        |               |
        |               |
    Columbia           NYU

Internal routing is via RIP; the switching gear is Proteon 4200's. NSFNet+Regional routing goes through Cornell's Fuzzball. Generalized INTERNET routing is site specific.

The NYSERNet Network Information Center opened on the 6th of February; information can be obtained through nic@nic.nyser.net or 518-266-NNIC.

Marty Schoffstall, schoff@nic.nyser.net

MIDnet
------

Gateway-routers have been selected and ordered (Proteon). Our biggest problem has been getting 56 kbs telephone lines into all locations. After innumerable meetings with telephone companies we are nearing resolution on that issue.

In November MIDnet held its second regional meeting in Kansas City. In addition to business items there were several presentations and tutorials. A one-day visit to NCSA by MIDnet users and user services staff is scheduled for March 30, and a third regional meeting for sometime in April.

We hope to have some line segments operational in late April and the whole network up during the summer.

Doug Gale, doug@unlcdc3.bitnet

SDSCnet
-------

(refer to the SDSC backbone report)

SESQUInet
---------

We are currently funded by NSF and are shopping for phone lines and for gateways. In the gateway market, we are considering Proteon and Cisco. Initial sites will include:

   Baylor College of Medicine (Texas Medical Center, Houston)
   Houston Area Research Center (north of Houston)
   Rice University (Houston)
   Texas A&M University (College Station)
   Texas Southern University (Houston)
   University of Houston (Houston)

Future growth will include the University of Texas, Baylor University, the NASA Johnson Spacecraft Center, and other campuses of the Texas Medical Center.

Guy Almes, almes@rice.edu

SURAnet
-------

SURAnet Phase I is composed of 15 institutions in 12 states and the District of Columbia. Geographically, SURAnet ranges from Delaware in the North to Florida in the South, and goes as far west as Louisiana. Topologically, SURAnet is composed of a major loop plus a number of stubs. The nodes in the loop are the University of Maryland, George Washington University (in the District of Columbia), Virginia Tech, the Triangle Universities (in North Carolina), Clemson University (in South Carolina), Georgia Tech, the University of Alabama at Birmingham, the University of Tennessee, and the University of Kentucky. The stubs are the University of Delaware, the National Science Foundation (in the District of Columbia), Florida State University, Louisiana State University, the University of West Virginia and the University of Georgia.
LSU, FSU, the University of West Virginia, and the University of Georgia are the four sites which remain to be connected to the network. All SURAnet sites reachable by AT&T 56 kb lines were interconnected last fall.

Proteon routers for these sites were configured and tested at the University of Maryland. They were shipped to the sites by late December and were talking to each other, for the most part, by the middle of January. The reliability of version 7.1 of the Proteon software was very poor, and we decided that we had to move to version 7.2 with all possible speed. Prior to moving to 7.2, it was decided to hold a meeting of SURAnet site technical representatives, invited experts and guests. This was held at the University of Maryland on Feb. 13 and 14 and had over 70 participants.

Version 7.2 is presently being installed; we anticipate that it will be more robust than 7.1, and we will be in a position to report on this shortly. As I tried to convey above, all connected sites have been operational at least briefly, though perhaps not all at the same time, under version 7.1. At this instant only Maryland and George Washington are running 7.2, and it appears to be much more robust. We are trying to move to 7.2 at all connected sites very quickly. Ask me again next week. I am optimistic!

Jack Hahn, hahn@umdc.bitnet

WESTnet
-------

(no report received, as no email address was available at the time of contacting individual sites for this month's report)

TASK FORCE REPORTS
------------------

Note on Task Force Mailing Lists

For each Task Force there are two lists: taskforcename-TF@ISI.EDU is the task force itself, and taskforcename-INTEREST@ISI.EDU is other interested people. Anything that goes to the -INTEREST list is copied automatically to the -TF list (the -TF is a subset of the -INTEREST). The -TF list could be used to discuss TF business (meetings, minutes, schedules, plans, details). The -INTEREST list could be for free discussion of concepts and technical issues where you want ideas from anybody interested. Any additions or deletions to either list will be handled by Ann Westine (Westine@ISI.EDU).

APPLICATIONS -- USER INTERFACE

The task force is still in the midst of reorganization. Current members include Steve Casner (ISI), Terry Crowley (BBN), Jose Garcia Luna Aceves (SRI), Sunil Sarin (CCA), and Joseph Sventek (ANSA Project). Prospective members should contact me via e-mail (lantz@score.stanford.edu) as soon as possible; a meeting is being planned for early spring.

Keith Lantz

AUTONOMOUS NETWORKS

A meeting is planned for 20-21 March in Palo Alto, California, and the task force mailing list is now in operation (AUTONETS-TF@C.ISI.EDU).

Deborah Estrin

END-TO-END SERVICES

No progress to report this month.

Bob Braden

INTERNET ARCHITECTURE

INARC-related issues were discussed on several lists, including those related to the proposed Dissimilar Gateway Protocol (ineng-tf and nsfnet-routing) and host/gateway monitoring (gwmon).

Dave Mills

INTERNET ENGINEERING

No report received.
PRIVACY

Following circulation of a final internal draft to the task force membership and processing of the resulting comments, the Privacy Task Force RFC, "Privacy Enhancement for Internet Electronic Mail: Part I: Message Encipherment and Authentication Procedures", was submitted for distribution to the Internet community. It is now available from SRI-NIC as RFC 989. The next task force meeting was scheduled for 31 March - 1 April at RIACS.

John Linn

ROBUSTNESS AND SURVIVABILITY

No report received.

SCIENTIFIC COMPUTING

No report received.

SECURITY

No report received.

TACTICAL INTERNET

No report received.

TESTING AND EVALUATION

No report received.

APPENDIX
========

Network Routing Daemon for 4.3bsd (and like) Systems
----------------------------------------------------

Gated is a routing daemon that handles multiple routing protocols and is meant as a replacement for the Unix routed and egpup daemons, as well as any routing daemon that speaks the HELLO routing protocol. Gated currently handles the RIP, EGP, and HELLO routing protocols, and can be configured to perform all three routing protocols or any combination of the three. The configuration for gated is by default stored in the file "/etc/gated.conf", but this can be changed at compile time.

Gated can be invoked with a number of debug flags and/or a log file. When debug flags are used, the process does not fork and detach itself from the controlling terminal. If no log file is specified, all debug output is sent to the controlling terminal; otherwise, the debugging output is sent to the specified log file.

The interaction of the three routing protocols is as follows:

   - Any route learned via an interior routing protocol (HELLO/RIP)
     takes precedence over the externally (EGP) learned route. More
     specifically, if a route is learned via RIP and there is already
     an external route learned via EGP, the EGP route is deleted and
     the RIP route is installed in its place. In effect, the exterior
     (EGP) route is used as a backup route should the interior
     (RIP/HELLO) route become unavailable.

   - Routing information learned via an interior routing protocol can
     be selectively passed on to the exterior routing protocol.
     Specifically, RIP and HELLO information can be passed on to EGP
     for further propagation.

   - Routing information learned from an exterior routing protocol is
     not passed on to the interior routing protocols. Specifically,
     information learned via EGP is not passed on to RIP or HELLO for
     further propagation.
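A minimal sketch of the first rule, with hypothetical route-table helpers (rt_lookup, rt_delete, rt_install) standing in for the daemon's internals; this illustrates the precedence logic only and is not gated's actual code:

    /* Protocol classes, interior preferred over exterior. */
    enum proto { PROTO_RIP, PROTO_HELLO, PROTO_EGP };

    struct route {
        unsigned long dest;     /* destination network             */
        unsigned long gateway;  /* next hop                        */
        enum proto    proto;    /* protocol that learned the route */
    };

    /* Assumed helpers over the daemon's route table. */
    extern struct route *rt_lookup(unsigned long dest);
    extern void rt_delete(struct route *rt);
    extern void rt_install(struct route *rt);

    /* Install a newly learned route, applying interior-over-exterior
       precedence: an interior (RIP/HELLO) route replaces an EGP
       route, while an EGP route is held back if an interior route
       already exists. */
    void rt_update(struct route *new)
    {
        struct route *old = rt_lookup(new->dest);

        if (old != NULL) {
            if (new->proto == PROTO_EGP && old->proto != PROTO_EGP)
                return;        /* keep interior route; EGP is backup */
            rt_delete(old);    /* e.g. replace EGP route with RIP route */
        }
        rt_install(new);
    }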
With the emergence of the NSFnet and many regional networks like NYSERnet and SURAnet, the paths to different sites are no longer exclusively over the ARPAnet. There are many back doors and redundancies in the Internet. The gated process allows a site to take full advantage of the redundancy of the Internet dynamically. It allows sites to use a faster regional network link and still be able to rely on a slower ARPAnet link as a backup.

Gated currently runs under 4.3BSD UNIX, DEC Ultrix V1.2, and GOULD UTX/32 V1.2 and V2.0. The code was and is still being developed at Cornell University and the University of Maryland.

For more information on the protocol specifics, consult the following:

   RFC 827   - EGP formal specification.
   RFC 891   - HELLO formal specification.
   RFC 911   - EGP under UNIX 4.2BSD.
   routed(8) - BSD UNIX manual page describing the RIP process.

For more information on gated contact:

   Mark Fedor
   Cornell Theory Center
   265 Olin Hall, Cornell U.
   Ithaca, NY 14850
   fedor@devvax.tn.cornell.edu

Mark Fedor, fedor@devvax.tn.cornell.edu