[IMR] IMR87-06.TXT                                    Westine [Page 1]

                  JUNE 1987 INTERNET MONTHLY REPORTS
                  ----------------------------------

The purpose of these reports is to communicate to the Internet Research Group the accomplishments, milestones reached, or problems discovered by the participating organizations.  This report is for research use only, and is not for public distribution.

Each organization is expected to submit a 1/2 page report on the first business day of the month describing the previous month's activities.  These reports should be submitted via network mail to Ann Westine (Westine@ISI.EDU) or Karen Roubicek (Roubicek@SH.CS.NET).

BBN LABORATORIES AND BBN COMMUNICATIONS CORPORATION
---------------------------------------------------

INTERNET GATEWAY RESEARCH AND DEVELOPMENT

Mike Brescia and Bob Hinden attended the June Satnet meetings in Norway.

We completed the IP reassembly code for the Butterfly gateways and installed it in the six Satnet gateways.  This code allows EGP updates to be fragmented and reassembled over the Satnet, allowing full exterior routing despite the 256-byte MTU.  This will also allow the Butterfly gateways on Arpanet and Milnet to handle EGP updates larger than 1006 bytes from the LSI-11 EGP servers, when the number of networks and gateways makes this necessary.

We started work on a congestion control scheme for internet gateways, which is currently in the research and definition stage.

WIDEBAND NETWORK

BSAT software Release 5.3 was distributed to the Wideband Network sites.  This release modifies some aspects of internal processing flow within the BSAT's satellite channel interface module.  The effects of the modifications include increased priority for stream messages contending with datagram messages for BSAT processing capacity, and greater independence between the channel interface module and the BSAT host modules.

BSMI boards were installed at the DCEC, RADC, and Ft. Monmouth Wideband Network sites.
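For a sense of the fragmentation arithmetic behind the reassembly work described above: an EGP update larger than SATNET's 256-byte MTU splits into IP fragments whose data lengths (except the last) must be multiples of 8 octets.  A minimal sketch, assuming a 20-byte IP header with no options:

```python
import math

def ip_fragments(payload_len, mtu, ip_header=20):
    """Number of IP fragments needed to carry payload_len bytes.
    Per-fragment data is rounded down to a multiple of 8 octets,
    as IP fragment offsets are expressed in 8-octet units."""
    per_frag = (mtu - ip_header) // 8 * 8   # usable data bytes per fragment
    return math.ceil(payload_len / per_frag)

# A 1006-byte EGP update over the 256-byte SATNET MTU:
print(ip_fragments(1006, 256))   # 5 fragments
```

Without reassembly in the gateways, any update that fragments this way would be lost to EGP, which is why the 1006-byte limit mattered.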
Every network site is now using BSMI hardware for satellite channel operations.

Personnel from BBN and ISI travelled to M/A-COM Linkabit in San Diego to participate in a discussion about potential ESI development work.  A number of ESI performance and reliability improvements and operational problems were discussed.  A list of action items was drawn up which included ESI repair and performance-related tasks.

SATNET

The SATNET remained stable through the month of June.  No new SIMP hardware or software problems occurred.  We are still waiting on the acceptance of the spare modems from Linkabit before shipping spare parts to the sites.

Measurement Taskforce work done during the month included: verifying that the bottleneck between the UCL gateway and the Goonhilly SIMP is no longer present, monitoring SATNET traffic with a finer granularity, retesting the "multiple of 8" bug patch, and remeasuring the overall throughput of the SATNET.

Claudio Topolcic, Karen Seo, John Leddy, Mike Brescia, and Bob Hinden attended the SATNET/Infrastructure and SATNET Measurement Taskforce meetings at NTA-RE, Oslo, Norway on 6/23-6/25.

Steve Blumenthal (Blumenthal@VAX.BBN.COM)

ISI
---

Internet Concepts Project

Jon Postel and Bob Braden completed RFC 1009, "Requirements for Internet Gateways."

Six RFCs were published:

   RFC 1007:  McCoy, W., "Military Supplement to the ISO Transport Protocol".

   RFC 1008:  McCoy, W., "Implementation Guide for the ISO Transport Protocol".

   RFC 1009:  Braden, B., and J. Postel, "Requirements for Internet Gateways".

   RFC 1012:  Reynolds, J.K., and J. Postel, "Bibliography of Request for Comments 1 through 999".

   RFC 1013:  Scheifler, R.W., "X Window System Protocol, Version 11, Alpha Update - April 1987".

   RFC 1014:  Sun Microsystems, Inc., "XDR: External Data Representation Standard".

Ann Westine

Greg Finn is currently examining how resistant network protocols are when under attack.
If we assume software error, hardware error, or malicious attack at a single network router, what damage can occur to the network as a result?  ARPANET SPF, Internet GGP, hierarchic, and Cartesian protocols are being examined to determine how attack resistant they are and how they might be augmented to improve resistance.

Annette DeSchon made a number of updates to the XDE Internet protocol software which runs on the Xerox Dandelion workstation.  Some of the changes are as follows:

   - The ISI Domain Name Resolver has been merged with the latest 1.1 protocol versions.

   - In FTP, the PORT command has been implemented.  This improves FTP's performance quite a bit, solving the problem in which an initial transfer was followed by much slower subsequent transfers.  In addition, illegal FTP responses, which are occasionally generated by a 4.3 FTP server in response to a LIST request, are now displayed.  Several bugs which had been causing the tool to hang have also been fixed.

   - In TCP, the number of pages used for buffering has been increased.  This allows TCP to work better with the buffer size set to 1500.

   - In the INSpyTool, Susie Armstrong's latest improvements to the NSSpyTool have been merged with ISI's version.  In addition, the INSpyTool now displays the headers on broadcast packets.  Several minor display bugs and filtering bugs were fixed as well.

These updates have been distributed to the various universities that participate in the Xerox University Grant Program.

Greg Finn & Annette DeSchon

Multimedia Conferencing Project

Work has begun on extending the packet video conferencing system from a point-to-point connection between two sites to a multipoint connection among two or more sites.  Initially a fake video source will be used for the third site, since only two copies of the video hardware currently exist, but this will still allow a complete test of the protocols.
Brian Hung has modified his program for the IBM-PC AT to scan, reduce, and display documents at more than twice the speed of the old version.  In order to let the user see precisely the window he has defined, the program now draws a border around the window which is to be clipped.  Currently the program is not able to generate a message containing an entire bitmap.

Joyce Reynolds continues to exchange Diamond Multimedia Mail between ISI and outside sites.

Steve Casner attended the first meeting of the combined DSAB User Interface Task Force and IAB Applications Task Force, at Stanford University, 24-26 June 1987.

Steve Casner attended the Wideband Network Improvements meeting in San Diego, 19 June 1987.

Steve Casner, Brian Hung, and Joyce Reynolds

NSFNET Project

Annette DeSchon continued development of a background file transfer program, which will allow a user to submit a request for a reliable file transfer to take place in the future.  A preliminary version of this program, which uses the third-party or "server-server" model described in RFC 765, is currently being tested on a Sun workstation.

Bob Braden did a major cleanup on the Sun interface to NETBLT, and began work on embedding NETBLT into the BSD FTP user ("ftp") and FTP server ("ftpd") programs.  Completion of this work requires updates to the NETBLT user interface, which Mark Lambert (MIT LCS) will shortly finish.  He also worked on a draft RFC concerning selective acknowledgments in TCP.

Bob Braden and Annette DeSchon

Supercomputer and Workstation Communication Project

Alan Katz finished an evaluation of equations in Interleaf's UPS system.  He also began a study for an "Intelligent Communication Facility" which would allow users to connect the output of a program running on one machine to the input of a program on a different machine.
This facility would be very much like using pipes together with "rsh" in the Unix world, except that the pipes will be typed (each carrying a stream of data of a particular type) and the system will not be particular to Unix.  The data types in the pipes will probably be X.409.  The intent is that a user could "patch together" various programs running on various machines without having to modify these programs (to the extent this is possible).

Alan Katz

MIT-LCS
-------

No report received.

Lixia Zhang (Lixia@XX.LCS.MIT.EDU)

MITRE Corporation
-----------------

The objective of the MITRE Internet Engineering program is twofold: 1) to address internet-level performance issues for the DoD internet, and 2) to address interoperability between the DoD and OSI protocol suites in support of a planned transition from DoD to OSI protocols.  To support these objectives, the following work was accomplished this month:

1. Internet Performance Measurements

   The baseline performance experiments with different interfaces, packet sizes, and delays have been presented at previous Internet Engineering Task Force meetings.  We are now extending the measurements into the internet area over 1 and 2 gateway hops.

2. Congestion Control Experimentation

   The second phase of the local net and stub gateway simulation model is complete.  The additions to the model are as follows:

   a. Autoregression techniques to estimate round trip times in TCP connections, and least squares estimation of autoregression parameters.

   b. Van Jacobson's slow start method for TCP connections.

   c. John Nagle's TCP window constriction on Source Quench receipt.

   d. A soft quench mechanism at the gateway.

   A draft user manual for the simulation model will be delivered to the sponsor in July.

3. FTP/FTAM Application Bridge

   The FTAM-to-FTP connections have been tested and demonstrated; the FTP-to-FTAM software has been coded and is being tested.
   This bridge uses a combination of the Northrop ISODE and SUNLINK/OSI software, and Sun DoD protocols, to form a dual protocol host to support the bridge.  A Design and Implementation Plan is in peer review and will be delivered to the sponsor in July.

4. VTP

   We continue to develop a Basic VTP implementation and will deliver a Design and Implementation Plan to the sponsor in July.  We are also very active in the standards development of VTP and will be contributing to the redesign of the state table changes in the protocol specifications defined at the ISO TC97 SC 21 meeting in Tokyo.

5. User Authentication and Access Control

   The model is in the final stages of development, and a design plan for the model has been distributed throughout the government community for comment.  We are participating in the authentication and access control working groups of the SDNS program and ANSC X3T5.1.

6. Landmark Routing

   The Landmark Hierarchy Model document will be delivered to the sponsor in July.  Work continues on developing algorithms to test the model and to define criteria for a testbed environment.

7. Name Service

   We are working with the NIC on developing a body of RFC documentation for Name Domain use in the MILNET.  We are also participating in the NBS working group for Directory Service.

Ann Whitaker

NTA & NDRE
----------

No report received.

ROCKWELL
--------

Jim Stevens released a SURAN technical note about route tracing that includes ideas applicable to IP options or ICMP.  The document is SRNTN 49, "Journey and Route Tracing", and is available to government agencies and their contractors from the Defense Technical Information Center (DTIC), Cameron Station, Alexandria, Virginia 22314.

John Jubin (Jubin@A.ISI.EDU)

SRI
---

1. A paper entitled "Research in Information Exchange Representations and Layered Software Architectures for C3 Systems" was presented at the JDLC3 conference at Ft.
McNair in Washington, DC, on June 18.  The paper discusses the requirements for better software architectures and design principles that are necessary for complex, evolving C3 systems in order to maximally utilize multi-vendor hardware and software.

2. At the same time, the Experimental Multimedia Collaboration Environment (EMCE) and SITMAP software was shown at the AFCEA conference in Washington, DC, running on Sun 3/260 hardware.  A three-workstation configuration connected by an Ethernet was used to demonstrate the two software suites developed under contract for NOSC and CECOM.  These systems are illustrations of the principles and difficulties encountered in implementing the techniques defined in the paper.  Specific capabilities shown are involved in the situation assessment and monitoring phases of the C3I process, and include showing unit locations with icon overlays on high-precision digitized color maps; accessing additional information by graphical references; real-time voice, pointer, and program command exchange over the Ethernet; and a modular set of Interactive Graphical Tools based on the Structured Display File representation developed by Keith Lantz at Stanford.  In particular, the low-delay requirements of real-time multimedia interactions in such layered, highly modular software systems were illustrated and discussed.

Jim Mathis (Mathis@KL.SRI.COM)

UCL
---

Jon Crowcroft and Peter Kirstein attended the SATNET/Infrastructure and ICB meetings in Oslo.

UCL, in conjunction with NTA and BBN, has been making a series of measurements of IP and TCP performance over SATNET and the ARPANET.  Preliminary findings, which confirm earlier work by Peter Lloyd, are that current TCPs do not cope well with high-delay networks where packets are actually dropped in any significant numbers.  A simple example of this is a pure SATNET route, which can support approximately 63 kbps IP throughput, but only 10 kbps TCP throughput.
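The gap between IP and TCP throughput above is consistent with a window-limited TCP: throughput is bounded by the send window divided by the round-trip time, and satellite paths have very large RTTs.  A rough sanity check of that bound (the window size and RTT below are illustrative assumptions, not UCL's measured values):

```python
# Window-limited TCP throughput bound: throughput <= window / RTT.
# The figures here are assumptions chosen for illustration only.

def max_throughput_bps(window_bytes, rtt_seconds):
    """Upper bound on throughput (bits/sec) for a fixed-window protocol."""
    return 8 * window_bytes / rtt_seconds

window = 2048       # bytes: a small TCP window typical of the period (assumed)
satnet_rtt = 1.6    # seconds: satellite hop plus queueing (assumed)

print(round(max_throughput_bps(window, satnet_rtt)))   # 10240, i.e. ~10 kbps
```

Any additional delay or retransmission on the path raises the effective RTT and lowers this bound further.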
The TCP throughput falls to less than 4 kbps when an ARPANET hop is added to the measurement path.

The ISODE FTAM is now running over two different X.25 networks as well as on TCP/IP.  Steve Kille has made a preliminary specification of the PP (X.400) mail system interfaces in ASN.1.

The Admiral megastream network is being used for fast (500 kbps) access to the Crays at the University of London Computer Centre (ULCC).  An RPC-to-JTMP (JNT version of JTM) relay runs on a MicroVAX at ULCC and forwards calls to the Cray/Amdahl cluster.

Jon Crowcroft (jon@CS.UCL.AC.UK)

UNIVERSITY OF DELAWARE
----------------------

1. Development continues on the Dissimilar Gateway Protocol (DGP).  A developmental model has been worked out which provides type-of-service routing and selectable routing constraints in a fixed hierarchy.  This model does not require slope/intercept data from all gateway interfaces, but does use source routing (where required) and requires DGP neighbors from different systems to reside on the same net.  A technical note and briefing are now in preparation.

2. The new fuzzware distribution for the NSFNET Backbone installed early in the month performed very well; however, observations during its first two weeks of operation suggested certain improvements which could further improve performance under the frequent conditions of heavy stress.  Subsequently the preemption policy was changed to (a) ensure an input buffer is always available, (b) provide equal access to buffer resources for all input lines, and (c) provide uniform preemption rates over all output lines.  The new policy was installed near the middle of the month, with the result that the drop rate for the network as a whole fell dramatically while the traffic carried increased substantially.

3.
After two weeks of observation with the new preemption policy, it became clear that (a) the internal datagram-aging (TTL) mechanism in the fuzzballs was not always working properly, and (b) the preemption policy should be further granularized to provide equal access to buffer resources for all original senders (IP source addresses).  Both of these things were fixed and tested in the backlots at the University of Delaware and Linkabit and are now ready for distribution.  The latest changes amount to a full-scale test of the "fair queueing" mechanisms suggested by John Nagle, Lixia Zhang, and others at recent INENG task-force meetings.

4. Initial testing of the newest preemption policy revealed fascinating insights into the detailed mechanisms of typical path-overload events in the Internet.  As expected, only a few hosts out of many (identified on request) are responsible for most of these events, and the events themselves are brief, but intense.  An event usually involves many tinygrams from the same hosts, so a new buffer request can result in many of these being preempted.  A preemption is very likely to be followed immediately by another for the same host.  There are many lessons for the source-quench mechanism here which have yet to be digested and understood.

5. The network-time synchronization system, which also serves as an advance-warning detector for routing instabilities or corequakes, has been sounding alarms of increasing urgency over the last two weeks.  During testing of the new preemption policy it became clear that serious trouble is brewing in the core gateway system, with routes oscillating wildly between gateways and many instances of broken connectivity.  This was confirmed by direct observation from the INOC of the routing data base, portions of which were captured and distributed in a message to the tcp-ip list.  At the end of the month the cause of these problems had not yet been identified.

6.
I completed and distributed to the INENG and NSF lists a draft of a proposed scheme for interworking between routing algorithms of the same or different autonomous systems.  Within a system the scheme requires both the metrics and the transformations between them to satisfy certain conditions, which the most common ones (RIPspeak and Hellospeak) already do.  Between systems the scheme relies on pre-engineered spanning trees embedded in an otherwise unrestricted topology, together with designated primary and fallback gateways for each net in each system.

Dave Mills (Mills@UDEL.EDU)

NSF NETWORKING
--------------

UCAR/BBN LABS

NSF NETWORK SERVICE CENTER (NNSC)

A twelve-month on-line calendar of meetings and conferences of interest to the Internet community is now available on the CSNET Info Server.  For information on how to use the Info Server, send a message to the Info Server with the following text in the body of your message:

   Request: calendar
   Topic: help

The first issue of the NSF Network News has been completed and will be distributed in mid-July.

Craig Partridge, Bill Curtis, and Karen Roubicek visited JVNC to meet with Sergio Heker and User Services Manager Vercell Vance.

Draft RFCs for the High-Level Entity Management System (HEMS) were finished and sent to the RFC editor, but release has been postponed for a few weeks to make minor changes requested by some vendors.  An open workshop on network management has been scheduled for the first day of the Internet Engineering Task Force meeting at the end of July.

By Karen Roubicek (roubicek@nnsc.nsf.net)

NSFNET BACKBONE SITES

CORNELL UNIVERSITY THEORY CENTER

Backbone Operations:

42,560,072 packets were delivered to the backbone in May, up 37% from April.  New software has been installed on all backbone fuzzballs.  All indications are that things are more stable since the upgrade.

Coordination and Interoperability Issues:

Progress on "gated" is coming along slowly, but steadily.
A number of sites have a beta release that is being tested at the current time.  Almost all of the features mentioned previously in this forum have been added.  "Gated" will be made available to the general public after a little more polishing and testing is done.

By Craig Callinan (craig@tcgould.tn.cornell.edu)

UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN

Changes were made to the Cray CTSS IP/TCP to attempt to make it more neighborly.  It now responds to ICMP Source Quenches using the algorithm of Jon Postel's upcoming RFC.  It uses Mills' RFC 889 nonlinear SRTT calculation.  Also, due to problems with long delays associated with file transfers to the SDSC Cray, its maximum retransmission interval can be 4 minutes.

Mods to CTSS to allow arbitrarily sized interprocess communications have been installed.  When TCP and the user call library are converted to use the new features, a major bottleneck in the communication path will be widened.

We are passing around on NSFNET a draft novice's guide to Internet facilities, folklore, and hints.  The first round of comments is currently being integrated into the document.  It should be ready for consumption during July.
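The nonlinear SRTT calculation mentioned above smooths round-trip samples with a gain that depends on whether delay is rising or falling, so the estimate tracks congestion onset quickly while decaying slowly.  A minimal sketch of that idea, with gain values chosen for illustration rather than taken from RFC 889:

```python
def update_srtt(srtt, sample, alpha_rise=0.5, alpha_fall=0.875):
    """Nonlinear SRTT filter: srtt' = a*srtt + (1 - a)*sample.
    A smaller gain a is used when the sample exceeds the estimate,
    so rising delay is tracked faster than falling delay.
    The specific gains here are assumptions for illustration."""
    alpha = alpha_rise if sample > srtt else alpha_fall
    return alpha * srtt + (1 - alpha) * sample

srtt = 1.0
srtt = update_srtt(srtt, 3.0)   # rising delay: jump quickly toward 3.0
print(srtt)                     # 2.0
srtt = update_srtt(srtt, 1.0)   # falling delay: decay slowly
print(srtt)                     # 1.875
```

A retransmission timer built on such an estimate backs off quickly when a path congests but does not collapse on one short sample.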
By Ed Krol (krol@uxc.cso.uiuc.edu)

JOHN VON NEUMANN NATIONAL SUPERCOMPUTER CENTER

Monthly Report (prepared: Tue Jun 30 13:43:35 1987)

   node name   meanttr  maxttr  meantbf  sched_down  avail  perform
                 (min)   (min)    (min)    time (%)    (%)      (%)
   ---------   -------  ------  -------  ----------  -----  -------
   jvncb           107      97    21328           2     99       99
   jvncc            37      97     6755           7     99       95
   jvncd            35     100     5794           7     99       96
   jvnce            26      50     2270           0     98       98
   jvncf            24     119     3261           2     99       97
   arizona         404     724    10683           0     97       96
   brown            68      75     4665          14     99       98
   colorado          0     724    21580           0     98       98
   columbia         25     215      684           0     96       93
   harvard          63      70     8593           1     99       98
   ias              10      10    14636           0     99       99
   mit              63      70     8593           1     99       98
   njit           1729    6519     6169           0     84       83
   nyu              21      95     3972           0     99       99
   penn_state       52      95    43833           0     99       99
   princeton        10      10    14636           0     99       99
   rutgers          60     130     5439           0     99       98
   rochester        27      95    10948           0     99       99
   u-of-penn        40      40     7281           0     99       99
   stevens         424    6588     2040           0     83       83
   umdnj           701    6510     3355           0     83       83

   Total test time (min): 43938

Note that the monitoring is done by one of the "routers"; therefore, when the monitoring "router" is down, we have no information on whether the rest of the network is operational or not.  Also, the tests are performed on the 128.121 addresses of the routers, with the knowledge that our class B address is subnetted.  Thus, when, for example, the tests give "columbia" as "down", it means to us that either the gateway is down (physically or in its networking software), or the path to columbia is down; and the path can only be one (the one determined by the subnet number that the "columbia" gateway uses).

The meanings of the terms are:

   nodename:  the gateway or host on the JVNC-NET network (128.121).

   meanttr:   in minutes, the mean time to recover from the "down" state to the "up" state, where the "down" state is when sending "icmp-echo" packets returns no packets, and the "up" state is when we receive packets back.  Each test is performed 10 times, every 10 minutes, and averaged each time.
   maxttr:    in minutes, the maximum time to recover from the "down" state to the "up" state (see above).

   meantbf:   in minutes, the mean time between failures.

   sched_down time:  in percent, the time the gateways/hosts were "scheduled" to be down, with respect to the total time of the test.

   avail:     in percent, the time for which the gateways were available (the "up" state) with respect to the total time of the test (minus the scheduled down time).

   perform:   in percent, a figure of merit that considers the number of packets lost and the available time.

Gateway Availability

Note that the NRAC schools' (NJIT, Stevens, and UMDNJ) access to JVNC was available 83% of the time, which is the lowest availability.  They were down 17% of the time due to line problems between JVNC and NJIT.  The satellite schools (U. of Arizona and U. of Colorado) had access to JVNC 97% of the time; the missing 3% is due to problems with the satellite dish.  Following these schools are the Cambridge/Rhode Island group (MIT, Harvard, Brown).  They were available 99% of the time; the remaining 1% of unavailable time was due to line problems.  All these percentages were taken when jvnca was up and available, which happened 99% of the time.

Gateways

   gateway    school     problems
   -------    ------     --------
   egress     NYU        hardware problems
   pusuper    Princeton  disk problems
   iasvax     IAS        hardware problems
   super-fs   Rutgers    disk problems

Besides these problems, the gateways (all VAX 750's) managed to pass packets most of the time.

Lines

   from   to        type    problem
   ----   --        ----    -------
   JVNC   COLUMBIA  56kbps  NY Bell
   JVNC   MIT       T1      Cambridge *
   JVNC   NJIT      56kbps  NJ Bell *

   * trouble ticket still open

Satellite

Due to a severe power hit, we lost our satellite dish (the hub of our satellite network).  It took Vitalink one day to bring it back on line.  This affected the Universities of Arizona and Colorado.

Traffic

JVNCNET is configured like a star, and its traffic can be easily computed at the center of the star.
At the center of the star are jvnca, jvncb, jvax, colo, the fuzzball, the terminal servers, and the front ends to the supercomputers.  Counting only the traffic received at jvnca, jvncb, the routers, and the fuzzball, we received 84,824,128 packets, a very low estimate of ONLY incoming packets, and one that doesn't include all the gateways.  The data for the fuzzball was provided by Doug Elias (Cornell).

PSN

The PSN is estimated to be connected to jvnca by the end of July.

Network Monitoring

The program is still under development.

** For more information contact "heker@jvnca.csc.org".

By Sergio Heker (heker@jvnca.csc.org)

NATIONAL CENTER FOR ATMOSPHERIC RESEARCH AND UNIVERSITY SATELLITE NETWORK PROJECT

The University of Miami is now isolated from the USAN 128.116 backbone.  The Miami network (miami.edu) is 192.31.89.  All host table entries that were 128.116.10.x should now be 192.31.89.x.  The gateway machine is 128.116.10.1.  Plans are under way to isolate the remaining USAN node, Wisconsin.

Thanks to Mark Fedor, Cornell, gated appears to be operational on windom.ucar.edu, NCAR's Sun-3 running OS 3.3.  Inquiries about gated for OS 3.3 should be addressed to fedor@tcgould.tn.cornell.edu.

By Don Morris (morris@scdsw1.ucar.edu)

PITTSBURGH SUPERCOMPUTING CENTER

Both the PSC Fuzzball and PSC Gateway experienced only minor downtime during June, nearly all of it scheduled.  Early in the month we reorganized part of our machine room to improve the maintainability of these machines and other networking equipment.

PSC Gateway has continued to serve as the primary gateway for NSF sites.  We brought up the newest version of gated on June 25.  Our Fuzzball passed 14 million packets to the local net in June (up from 11 million in May), making it again one of the busiest on the backbone.  We have been running the newest software version since June 19.

Our Academic Affiliates net was born when gateways were installed at Penn, Temple, and Lehigh early in the month.
Later on, a T1 line to the University of Maryland was brought into service, connecting a Proteon gateway at the PSC to one of the SURAnet Proteons.  This line will allow higher-bandwidth connectivity between SURAnet sites and the PSC, and should soon serve as a redundant path to the ARPAnet.  The gated configurations were updated on June 26 to reflect the existence of the line.

During June we also developed plans for a meeting of the Academic Affiliates networking people to discuss the state of our network and its future development.

By Dave O'Leary (oleary@morgul.psc.edu), PSC Communications

SAN DIEGO SUPERCOMPUTER CENTER

We have completed the cut-over of Software Tools mentioned last month.  It now supports mail to and from all our connected networks.

A new version of the SRI Multinet has arrived and will be installed during early July.  This newest version will add support for gated and a name server.

We have been working on our Cray FTP software.  While it is still in beta, we have FTP'd files between our XMP and the one at NCSA (U. of Illinois), with transfer rates of about 20k over the NSFnet 56k lines.

Our work with Apollo on Remote Procedure Call (RPC) service between the Cray and our Apollo ring continues.  We have implemented the first simple step toward our goal of distributing necessary computations to the XMP while displaying the results on an Apollo.  We have sent two integers across the network to the Cray, which in turn computed the sum and returned the value.  Though this sounds trivial, it is an important first step toward calling matrix inversion routines, retrieving database information, and anything else where resources, whether data or computational, on one machine are needed by a process on another machine.  Our goal is to allow users to offload their computational needs to the Cray while using specialized workstations to display the results.
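The add-two-integers exchange described above is the classic first test of any RPC path.  A minimal sketch of the same idea over a plain TCP socket (Python and raw sockets here purely for illustration; the actual work uses Apollo's RPC service between the Cray and the Apollo ring):

```python
import socket
import struct
import threading

def serve_one_sum(srv):
    """Server side: accept one connection, receive two 32-bit
    network-order integers, and reply with their sum."""
    conn, _ = srv.accept()
    a, b = struct.unpack("!ii", conn.recv(8))
    conn.sendall(struct.pack("!i", a + b))
    conn.close()

def remote_sum(addr, a, b):
    """Client side: ship two integers to the server, read back the sum."""
    s = socket.create_connection(addr)
    s.sendall(struct.pack("!ii", a, b))
    (result,) = struct.unpack("!i", s.recv(4))
    s.close()
    return result

# Demonstrate the round trip on the loopback interface.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))          # bind to any free port
srv.listen(1)
t = threading.Thread(target=serve_one_sum, args=(srv,))
t.start()
print(remote_sum(srv.getsockname(), 20, 22))   # 42
t.join()
srv.close()
```

The interesting part is exactly what the report notes: once the marshalling, transport, and unmarshalling of two integers work end to end, the same machinery carries matrices or database queries.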
None of the new lines mentioned last month has yet been installed; several are "days away" but, of course, not yet carrying any bits.  One addition to the list: we have been talking with JPL about leaving our old 9.6 SPAN line installed but moving it over to connect one of their Proteons to ours.  Hopefully by this time next month we will have several new paths to the door.

By Paul Love (loveep@sds.sdsc.edu)

NSFNET REGIONAL AFFILIATED & CONSORTIUM NETWORKS

BARRNET

All sites are now in "production".  All but two are running the 7.3 release of Proteon software.

Problems continue with sporadically high error rates (at the T1 level) on the dedicated microwave link; these are to be addressed by new equipment.  The current mix of vendors (Harris, ITE) complicates problem resolution.  Testing of the T1 clear channel links, to the point where the telco can be convinced to allocate resources or where a link equipment fault can be isolated (in the microwave case), is essential to the operation of BARRNET.  The cost of elementary (BERT) test equipment is $6,000-7,000.  We have had very good luck to date with the MCI-supplied microwave link between Berkeley and Davis, in contrast to some MCI experiences reported by other regional nets.

Because of the vulnerability of end-to-end BARRNET service to a single link failure, we are looking into establishing a "back door" VAX-to-VAX route to Santa Cruz from Berkeley.  We will also examine NEC 18 MHz digital microwave radio gear for application to the Stanford-NASA Ames link, to reduce continuing telco charges.

BARRNET is waiting for the NSFNET backbone connection to San Diego to be activated for connection to other than SDSCnet.

No representative was sent to the Federation of Regional NETworks meeting in Boston due to last-minute problems; BARRNET will definitely be represented at the next meeting.  Dave Wasley of UCB will coordinate technical discussion of NSFNET backbone connection planning and connection preferences over the next month.
No response in writing has yet been received from Proteon on the BARRNET recommendations (the Wasley-BARRNET report, attached to this status report for those interested).  During a recent visit by Howard Salwen, they made oral commitments, but said they were not in agreement with some of the specific implementation details and would respond.

-------------------------------------

Extracted From:  "Modifications Required to Allow Proper Functioning
                 of the BARRNET Internetwork Routers", May 18, 1987

Introduction

The Bay Area Regional Research Network (BARRNET) was one of the first NSF-sponsored regional networks to be proposed.  In April of 1986, the BARRNET backbone network was scheduled to be in operation by the end of March, 1987.  Proteon was selected as the vendor for the internetwork routers.  Verilink T1 CSUs were selected with Proteon's cooperation, except for one link which is a private microwave link (see separate report on CSU selection).  Several different T1 vendors were used for the 5 links, based on lowest cost.  The equipment was delivered by the end of January, 1987, and the first of the T1 circuits was turned over to BARRNET in February.  Operation was delayed because of problems in developing a correct configuration for version 7.2 of the Proteon p4200 internetwork routers.  The main problems encountered are summarized below, along with a specific request for a new feature in the p4200 software.  Implementation of this feature will allow BARRNET and the other NSF regional networks to function reliably and correctly.

Installation History

The first obstacle was encountered during testing of the routers on the T1 serial links and dummy local networks.  The serial ports are required to have IP addresses.  Due to the way subnet code was put into the p4200 software, if the router interconnects two networks with different numbers, only one of them can be subnetted.
This meant that we could not use one of the campuses' registered class B numbers for the serial links, since it would have to interface with another class B subnetted network on the other side of a router. Since all BARRNET campuses are subnetted, each serial link must be assigned a distinct non-subnetted network number. Efforts to configure the links by using unregistered net numbers, and ensuring that they would not escape the BARRNET backbone, proved unsuccessful because the p4200 RIP code insists on advertising all directly connected networks. Eventually the decision was made to apply to the NIC for a range of class C network numbers for the intercampus links, a distinct number for each link. These have now been assigned. This problem could have been avoided entirely if the point-to-point links did not require IP network addresses, or if there were more control over advertised routes.

A related but more significant obstacle was configuration of the p4200s to meet the routing requirements defined by the BARRNET technical committee:

1) BARRNET links should be used to support BARRNET traffic. In other words, BARRNET traffic should not be routed outside BARRNET unless the internal links are down.

2) BARRNET must provide a way to allow designated campuses to act as ARPANET gateways for other campuses. In addition, multiple paths to the ARPANET core from a given network number must not be advertised to the core, since this will break the existing core GGP protocol.

3) There should be firewalls to ensure that bogus routing information generated in one campus does not contaminate the BARRNET backbone or the other BARRNET campuses. Furthermore, there must be some mechanism for controlling which legitimate routes are advertised to the other campuses.

4) Routing information should reflect currently available links. If a link becomes unavailable, this information should become known to the other routers and be reflected in the routing information they propagate.
In addition, if secondary, alternate routes exist, they should be utilized.

Several approaches were attempted using both EGP and RIP. EGP was considered because the routing-interchange parameter provides the desired 'firewall' protection, but had to be abandoned because it could not be made to propagate derived routes. A way of enabling this style of operation - either by setting the Autonomous System parameter to 1, by setting all routers to the same Autonomous System number, or by an explicit flag - would have allowed BARRNET to become operational. However, this would still rely on statically defined routes in each router, and so would fail requirement 4 above. Running only RIP on all interfaces does provide complete route information to each campus. If there were only one connection to the ARPANET core, and no other external connections, this solution would suffer only from the exposure that bogus route information could potentially bring down all of BARRNET.

Having exhausted all possible configurations of the existing product without success, it was decided that Proteon must be asked to modify the p4200 software in some way. After much discussion, it was decided that the simplest satisfactory modification would be a programmable route information filter. Because BARRNET operation was already late, and because we wanted to verify that this solution would be satisfactory, we decided to implement such an algorithm in a microvax inserted between the p4200 routers and the campus networks at several key BARRNET locations. This setup is definitely a temporary kludge and is being implemented ONLY to get BARRNET into operation as soon as possible. The configuration parameters and interconnection are as follows:

1) Non-forwarding nodes - those with only one BARRNET router and no other external connections - are configured to "send no routes" on their BARRNET serial interface.
Since this actually means "send only a route to the directly attached network", this provides a firewall of sorts. These routers also originate a "default" route for their respective campuses.

2) The forwarding nodes, Berkeley and Stanford, are set to "send net routes". This is necessary both for logical connectivity of the two BARRNET routers at each of these sites, and for advertising the availability of routes to the ARPANET core, etc.

3) To prevent the forwarding nodes from introducing unwanted routes into the BARRNET route pool, a non-Proteon router will be inserted between the p4200s and the rest of the forwarding nodes' campus network. These routers will be programmed to implement the route information filtering algorithm described below.

The fact that routers at different locations must be configured for different types of RIP behavior (#1 & #2 above) is indicative of the problems we faced in finding an adequate solution. Clearly, if any of the member campuses changes its connectivity to the outside world, the BARRNET routers will most likely have to be reconfigured. This should not be the case.

It is essential to eliminate the need for the extraneous routers (#3 above) as soon as possible. They are an extra cost to purchase and maintain, and they introduce an extra router delay in the topology. When the other campuses add further connectivity, they too will need this same type of route filtering capability. It is entirely unreasonable and unnecessary that these extra routers should be required: the algorithm should be built into the Proteon p4200 IP router software. Below is an outline for a routing control feature which, if added to the p4200 control software, would make the extra router unnecessary. It would also allow a uniform mode of operation in each BARRNET router.
Proposed Route Information Control Feature

The purpose of the route information control feature is to provide programmable filtering of route information within the internetwork router. Both filtering of received ("learned") information and explicit control over propagated information are provided. With these two facilities, a reasonable firewall can be maintained between adjacent network regions. The following algorithm should be applied to any route information received or propagated by the router. It is independent of any particular route information protocol; RIP is used in the discussion merely because that is what we are using currently.

For each interface, the p4200 should allow the construction of an inclusion list and an exclusion list such that, on RIP input, for any destination (route_dst) received, route_dst affects the p4200's internal routing tables only if:

    (no inclusion list was specified || route_dst is in the inclusion list)
      &&
    (no exclusion list was specified || route_dst is not in the exclusion list)

It is an error, although benign, for there to be both an inclusion list and an exclusion list for the same interface.

A similar facility would control RIP output. In this case the above test would be applied to each route_dst in the p4200's internal table, and the destination would appear in the p4200's RIP broadcast on that interface only if the condition evaluated to true. The output filter list is distinct from the input filter list for each interface.

Note that knowledge of routes to directly connected networks is not affected by the input filter, but that advertising those routes IS affected by the output filter. Learning or advertising of a default route is also affected by this algorithm.
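The filter rule above can be sketched in a few lines. This is an editorial illustration in modern Python, not Proteon's software; the function and variable names are invented:

```python
def route_passes(route_dst, include_list, exclude_list):
    """Apply the report's rule: a destination passes only if
    (no inclusion list OR dst is in it) AND
    (no exclusion list OR dst is not in it)."""
    ok_include = (not include_list) or (route_dst in include_list)
    ok_exclude = (not exclude_list) or (route_dst not in exclude_list)
    return ok_include and ok_exclude

# Hypothetical output filter for one interface: advertise only a
# default route and a route to network 128.32.0.0.
include = {"128.32.0.0", "0.0.0.0"}
exclude = set()

candidates = ["128.32.0.0", "10.0.0.0", "0.0.0.0"]
advertised = [d for d in candidates if route_passes(d, include, exclude)]
```

With the lists above, `advertised` keeps `128.32.0.0` and the default route `0.0.0.0` while the unwanted `10.0.0.0` is filtered out, which is exactly the firewall behavior the report asks Proteon to build in.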
A plausible command syntax to allow specification of this facility might be:

    add route_input_filter {include|exclude}
    add route_output_filter {include|exclude}
    delete route_input_filter {include|exclude}
    delete route_output_filter {include|exclude}

where the 3rd & 4th arguments are a pair in which the first token is "exclude" for exclusion or "include" for inclusion, and the second item is the destination network or subnetwork number. For example, the commands:

    add route_output_filter include 128.32.0.0 3
    add route_output_filter include 0.0.0.0 3

would ensure that RIP packets emitted on interface 3 would contain no more than a default route and a route to network 128.32.0.0. The command:

    add route_input_filter exclude 0.0.0.0 1

would ensure that a default destination received on interface 1 would be ignored.

It is assumed that the basic rule, "routes learned via interface N are not advertised on the same interface N", still applies. Proteon should verify this.

Possible Extensions

There are several obvious extensions possible, building on the basic syntax described above. These extensions are NOT required for BARRNET and are mentioned merely in case they are interesting to Proteon.

One extension could supplant the existing "send [sub]net routes" command. The command could be something like:

    add route_output_filter include {sub|}net_routes

A further extension would interpret a 6th argument as a value to be added to the RIP metric when manipulating the route. Thus a route could be made to appear arbitrarily distant. This would be used to define secondary routes.

Yet another extension would be to apply similar filtering to the source of the route information packets. This would be used to guard against trusting normal-looking information about legitimate routes originating from "untrustworthy" network nodes.

Conclusion

The basic function of the Proteon p4200 as an internetwork router works well.
The only serious problem with their use as routers in a regional network is the lack of a programmable firewall for routing information. The algorithm described above would provide that control feature. Comments solicited.

Bill Yundt (GD.WHY@forsythe.stanford.edu)

JVNCNET

(Refer to JVNC backbone report)

MERIT

At the Merit/UMNET site, work continued this month on the network-side TCP/UDP module. The TCP/IP driver for the Interlan board is now working, allowing our Primary Communications Processors to join our Secondaries in supporting TCP/IP over Ethernet. Members of our staff adapted Phil Karn's TCP/IP package to work with the MIT Serial Line Protocol so that it can be used in the Merit environment. The central-site Technical Support Group taught a six-hour lecture course on computer networking technology on the UM campus, with heavy emphasis on the TCP/IP suite of protocols.

Merit member Michigan State University was officially assigned the domain name MSU.EDU this month and has brought up a domain server to service MSU.EDU and its subdomains. At this point MSU has fourteen buildings connected to the campus-wide Ethernet; those locations house fifteen departments and over 200 nodes running DECNET, TCP/IP, and/or XNS. MSU is working on getting all of the systems on campus upgraded to compatibility with 4.3bsd Unix.

At Western Michigan University, TCP/IP systems on the campus are now able to reach all Merit-connected IP hosts. Work is progressing on getting such connections passed through to the rest of the Internet. WMU is on the verge of releasing TCP/IP to all its users and is preparing to apply to the NIC for a domain name.

In the middle of the month the Merit Board of Directors authorized Merit's director to upgrade the remaining links between member universities to 56 Kbps. This will in effect bring nearly all of our statewide backbone up to that speed.
By Christine Wendt (christine_wendt@um.cc.umich.edu)

MIDNET

The MIDNET Consortium met at the University of Nebraska-Lincoln on June 22, 23, and 24. The meeting had two tracks: technical and general. The technical track was attended by the campus technical representatives and covered the care and feeding of the gateway/routers. The entire network was assembled and running in the training room. One of the most useful results of running the training this way was that the participants got some practical experience in what to do when things go wrong. In addition to vendor documentation, the participants were provided with an extensive "startup" manual prepared by Mark Meyer. A video tape of these sessions will be made available shortly.

The general session covered administrative issues such as how to deal with potential corporate members and state networks, reviews of what is going on nationally in networking, the status of the various supercomputer centers, a tutorial on internet concepts, user service issues related to supporting campus supercomputer users, and a technical review of ways to connect TCP/IP networks to vendor-specific campus networks. Participants left the general session with several inches of documentation on these subjects. Anyone interested in obtaining any of these documents should contact either me (DOUG@UNLCDC3 on BITNET) or Mark Meyer (MARK@UNLCDC3 on BITNET).

All of the telephone lines are scheduled to be in within a few weeks. At that time we will bring the entire network up in a systematic way. We hope to be fully operational with all nodes available by August 1.

By Doug Gale (doug%unlcdc3.bitnet@WISCVM.WISC.EDU)

NORTHWESTNET

We have just received word that the grant instrument has been signed by NSF. We will provide further detail next month.

By Hellmut Golde (golde@june.cs.washington.edu)

NYSERNET

As of 1 July 1987, NYSERNET has the following topology with 56 kbit links and Proteon gateways.
Reachability to NSFNET, ARPANET, MILNET, etc. is available to each site. June additions include SUNY StonyBrook, City University of New York (CUNY), Polytechnic University (POLY), and Rockefeller University.

                       Clarkson
             Syracuse--+  |
                |      |  |
        Rochester--------Cornell---------RPI---Albany
            |               |                    |
         Buffalo            |                    |
            |               |                    |
        Binghamton          |    +-------- | ------StonyBrook
            |               |    |         |
          CUNY------NYTEL----Columbia------NYU
           |\          |         |        /|
           | \         |       NYNEX     / |
           |  \       BNL               /  |
           |   \                       /   |
           |    +--------------Rockefeller |
           |                               |
          POLY-----------------------------+

A draft version of a Simple Gateway Monitoring Protocol (SGMP) was released for comment to the INTERNET community. This was a joint effort of some regional participants and Proteon. It is hoped that deployment in NYSERNET will begin in August or September, with a number of campus networks also participating. The p4200/GATED versions and a UNIX/MSDOS NOC portion are the short term objectives.

A two day meeting was held at Cornell University on June 22nd and 23rd to explore and discuss ways in which libraries can utilize the regional and national TCP/IP networks. 45 people, including representatives from the Library of Congress, RLG (The Research Libraries Group), OCLC (The Online Computer Library Center), and NYSERNET member libraries, met to discuss both the technical feasibility and the desirability of a collaborative relationship. Small working groups, consisting of technical and library personnel, were asked to consider three basic proposals. These proposals focused on: the availability of national bibliographic utilities on NYSERNET; more efficient document transmission for the purpose of interlibrary loan; and the availability of regional and special interest databases through NYSERNET. The groups proposed many methods for the immediate implementation of these proposals, each demonstrating the benefits of a joint effort among libraries and NYSERNET.
A steering committee will follow through to determine the steps required for implementation of these proposals to the fullest extent possible.

By Marty Schoffstall (schoff@nic.nyser.net)

SDSCNET

(Refer to SDSC backbone report)

SESQUINET

As of July 2nd, we are up with gateways at three campuses: Baylor College of Medicine, Houston Area Research Center, and Rice University. At each campus is a cisco gateway, with an additional internal router in downtown Houston. All links are via 56 kb/s DDS circuits, with the exception of one 448 kb/s link between downtown Houston and HARC. The initial Rice-to-BCM link reported up a month ago served a light load, with only one interruption (due to an inadvertent pulling of a power plug). During the month of July, we plan to install connections to three additional campuses: Texas A&M University, Texas Southern University, and the University of Houston. A technical report describing the network in more detail was written during June and is available to anyone asking me for it.

By Guy Almes (almes@rice.edu)

SURANET

The following nets are being EGP advertised to the core on SURANET's behalf:

    128.61.      Georgia Tech
    128.109.     TUCC
    128.150.     NSF
    128.154.     NASA Goddard
    128.163.     U of Kentucky
    128.164.     George Washington Univ
    128.167.     SURAnet
    128.169.     U of Tennessee
    128.173.     Virginia Tech
    192.5.57.    Univ of Delaware (udel-cc)
    192.5.219.   Clemson
    192.16.177.  Univ of Alabama

The T1 line between SURANET and the Pittsburgh Supercomputing Center was installed in June. After some initial problems with the Clear Channel CSU, the connection appears to be working properly. The 56 kb phone line to Florida State University is installed and has been working. The Florida State University network (128.186) will be EGP advertised to the core within several days. Telecommunication vendors for the Phase II SURANET sites were selected.
Lines have been ordered for these sites, and present plans are for all of the lines to be installed by the first of September. Discussions are continuing with several Federal Research Laboratories about establishing connections to SURANET.

By Jack Hahn (hahn%umdc.bitnet@wiscvm.wisc.edu)

WESTNET

Connection to NSFNET - Ed Krol arranged for the connection to NSFNET through NCAR. Don Morris of NCAR is working with Hans-Werner Braun to update the routing tables. This will give us the much needed capability to Telnet to the Phase II Centers, in addition to JVNC, where we previously had access. Thank you to all parties involved.

Gateway Proposals - Vendor responses to our RFP for IP gateways are due to be received by July 8, 1987. We anticipate having made a selection by July 15, 1987. We will include the details of the evaluation results in next month's progress report.

Interaction with AT&T - We have prepared a draft of a letter to the Denver office of AT&T to request cost sharing for circuit costs in the interim until full NSF funding becomes available. This draft letter is presently undergoing internal CSU review. If any of you have such letters, or have contacted your local AT&T office, I would greatly appreciate it if you would share your letters and/or experiences with me. Send information to: pburns@csugreen.bitnet

We as yet have no NSF funding, although it is imminent, according to Dave Staudt at NSF.

An overview presentation of campus networks, regional networks, the NSFNET backbone, and the Internet is being prepared for delivery at our "NSF Summer Institute on Vector Supercomputing", to be held from July 13 to July 24. At this time, we plan to provide information on national networking and to allow the participants to exercise the networks by running examples on extramural systems.

By Pat Burns (pburns%CSUGREEN.BITNET)

TASK FORCE REPORTS
------------------

APPLICATIONS -- USER INTERFACE

No report received.
AUTONOMOUS NETWORKS

Nothing to report this month.

Deborah Estrin (Estrin@oberon.usc.edu)

END-TO-END SERVICES

VMTP

Dave Cheriton reports progress on VMTP at Stanford: further extensions to the UNIX kernel implementation of VMTP, to include security in an initial form and some performance tuning; some work on modifying the VMTP packet format for ease of hardware implementation and improved error checking; and additional work on the RPC interface to VMTP and a preliminary examination of presentation levels such as XDR relative to VMTP. A new release of the VMTP Unix software is available, and Steve Deering is now taking charge of incorporating bug fixes from CMU. Eric Cooper's MACH networking group at CMU has brought up the UNIX VMTP code and is thinking about how to use it in MACH. The semantic differences between MACH IPC and VMTP are significant enough to pose some non-trivial problems.

MULTICASTING

The MACH networking group has installed the Deering/Lam IP multicast code in the MACH kernel and has started experimenting with it. Two groups will be using it in the near future: one working on multi-RPC for a replicated file system, and another doing commit protocols for distributed atomic transactions.

ISO TRANSPORT PROTOCOLS

UCL has been doing some preliminary thinking on the problems of interworking between the two predominant OSI architectures, namely TP class 4 over connectionless ("ISO-IP"), and TP class 0 over X.25. The obvious solution is an application-level relay, but they are more concerned with developing a method for dealing with the addressing and flow control issues (with the associated "layer violation") of concatenating these two types of service. Their X.25 systems would allow them to build a transport-level relay that had access to X.25 flow control information on both the read and write sides, and passed this up to the TP4 on the other side of such a relay.
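The essential point of such a relay is that flow control on one side must push back on the other. A toy sketch of that coupling (a bounded queue providing backpressure between the two sides; all names are invented for illustration and correspond to no real TP4 or X.25 API):

```python
from collections import deque

class RelaySketch:
    """Toy transport-level relay: data read from side A is queued for
    side B, and a full queue pushes flow control back to side A."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.queue = deque()

    def can_accept(self):
        # Exposed to the sending side as its flow-control signal
        # (e.g., whether to grant more X.25 or TP4 credit).
        return len(self.queue) < self.capacity

    def receive(self, data):
        # Side A hands data to the relay; a False return models
        # withholding credit so side A stops sending.
        if not self.can_accept():
            return False
        self.queue.append(data)
        return True

    def deliver(self):
        # Side B drains the queue; draining re-opens side A's window.
        return self.queue.popleft() if self.queue else None
```

The "layer violation" UCL mentions shows up here as `can_accept`: the relay must peek at one side's flow-control state to throttle the other, rather than treating each transport connection independently.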
PERFORMANCE

UCL has been doing follow-up work to Peter Lloyd's measurements of IP and TCP performance. They have been collecting, modifying, and writing tools for this purpose, and have an ICMP and TCP traffic generator, together with a tool for monitoring TCP's internal and externally-visible behaviour. Current plans are to carry out a set of tests over the SATNET path, and over concatenated SATNET and ARPANET paths. Preliminary measurements show that even the latest versions of TCP have problems coping with high variance of delay (2.5 secs, +10 secs or -.5 secs) coupled with significant packet loss (>5%). This work is being carried out in collaboration with BBN and NTA.

Phil Karn and Craig Partridge have written a paper on "Estimating Round-Trip Times in Reliable Transport Protocols", which is going to be presented at SIGCOMM in August. It is an update to Lixia's paper and analyzes ways to get around the problems she identified in the TCP algorithms.

Van Jacobson reports on his efforts toward modelling and improvement of Internet performance. This information is of sufficient interest to this community to warrant its verbatim inclusion:

"... Mostly I've been working on a paper about rtt modelling and the use of such models in transport protocols. The paper is about 30 pages currently and maybe 1/3 finished. I was hoping to distribute a finished draft at the upcoming end2end and ietf meetings. It doesn't look like that's in the cards (I hate to write so it's going slowly). If there's interest, I could distribute a draft of the incomplete paper in the hope of getting some feedback at or before the meetings.

Other things that have happened are:

-- Mike Karels and I finished putting the slow-start algorithm into the official 4.3bsd system at Berkeley.
Preliminary analysis of tcp statistics we are now routinely gathering on ucbvax shows approximately the same behavior as earlier measurements on lbl-rtsg: Under congested, mid-day conditions, ARPANET throughput showed a moderate improvement (20-50% better than no slow-start) and retransmitted packets showed a large improvement (a factor of 4-6 fewer with slow-start).

-- I've collected the notes for a slow-start rfc but, never having written an rfc, don't know what to do next. I want to get the rfc written to at least correct the attribution of the algorithm: Several people have said "Jacobson's slow-start" but it's not. John Nagle, Mike Karels and I independently described the algorithm at about the same time (round about the new year). John Nagle coined the name "slow start". I called it "soft start" (thinking of the soft-start inductors we have to use when turning on our high-voltage power supplies). Karels called it "a new timeout algorithm". Had I been up to date in my journal reading, I would have noticed an excellent article by Raj Jain in the October '86 IEEE Journal on Selected Areas in Communications ("A Timeout-Based Congestion Control Scheme for Window Flow-Controlled Networks"). The article describes something very close to the slow-start algorithm we put into 4.3bsd tcp (Jain used the acronym CUTE for the algorithm, a name that's a bit too cute for me). Unfortunately, I'm a year behind on journals and read Jain's article about a month after implementing the tcp slow-start. But he published the algorithm long before I thought or heard of it.

-- We have been trying a small variation of the RFC-793 timer algorithm that estimates both mean and variation (i.e., "beta" is replaced by a dynamic estimate).
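A dynamic mean-plus-variation retransmit timer of this general shape can be sketched as follows. This is an editorial illustration, not the 4.3bsd code; the smoothing gains (1/8, 1/4) and the factor of 4 on the deviation are the values commonly associated with this approach and are assumed here:

```python
class RttEstimator:
    """Retransmit-timeout estimator tracking both the mean round-trip
    time and its mean deviation, replacing RFC 793's fixed "beta"."""

    def __init__(self, g=0.125, h=0.25):
        self.g = g          # gain for the mean estimate
        self.h = h          # gain for the deviation estimate
        self.srtt = None    # smoothed round-trip time
        self.rttvar = 0.0   # smoothed mean deviation

    def sample(self, rtt):
        """Fold in one round-trip-time measurement (seconds)."""
        if self.srtt is None:
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            err = rtt - self.srtt
            self.srtt += self.g * err
            self.rttvar += self.h * (abs(err) - self.rttvar)
        return self.rto()

    def rto(self):
        # Timeout = mean plus a multiple of the deviation, so the timer
        # widens on high-variance paths and tightens on stable ones.
        return self.srtt + 4 * self.rttvar
```

On a stable link the deviation term shrinks and retransmits come sooner; on a high-variance path it grows and spurious retransmits drop, which matches the behavior reported in the next paragraph.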
The variance computation is substantially faster than the one proposed by Stephen Edge and, surprisingly, the cost of estimating both rtt mean and variance by the new algorithm is less than the cost of estimating just the mean in 4.3bsd. We tested the algorithm for two months on a couple of LBL machines and found it improved performance on all types of links: Retransmits came sooner on local-net and low-variance, long-haul links, and there were fewer spurious retransmits on high-variance Arpanet and Milnet links. Based on this success, we have put the algorithm into the production 4.3 tcp at Berkeley and hope soon to have data from ucbvax quantifying the improvement. I sent a short note to the tcp-ip list describing the new algorithm (in response to a query from Mark Lambert of MIT). I should probably turn that note into a short rfc. (That last sentence is actually a question.)

-- With help from Bill Nowicki of Sun and Mike Karels of Berkeley, we moved the current 4.3bsd tcp code to some Sun workstations. We then compared local-net performance of the original Sun code (essentially 4.2bsd tcp) with 4.3's. Ftp transfer rates between two Sun 3/50's improved by a factor of two (100 Kbytes/sec to 195 Kbytes/sec). When measuring task-to-task tcp throughput (since ftp measurements are contaminated by file system overhead), we observed an interesting "silly window" phenomenon that could only occur at very high throughputs (i.e., when the average task-to-kernel time was approximately equal to the kernel-to-wire time). After putting a fix for this phenomenon into the 4.3 tcp, task-to-task data throughput between two Sun 3/50's climbed from 335 Kbytes/sec to 390 Kbytes/sec (the 4.2 tcp code could only achieve 230 Kbytes/sec).
We were pleased with this performance from a stock 4.3bsd tcp (it corresponds to 3.3 Mbps on the wire, or 1/3 of the ethernet bandwidth) and hope that the people who have been saying "the tcp protocol is inherently slow" will now shut up. Kernel profiling has shown there's another factor of two to be gained if necessary, but we have begun to worry that our ethernet will become unstable if pairs of workstations routinely use 60-70% of the available bandwidth (this will also be a problem if Sun ever puts a decent ethernet interface on the 25 MHz 3/2xx series).

-- We want to test several 4.3bsd tcp changes aimed at improving performance under high packet loss (the recent changes have been aimed at improving performance under congestion). One goal would be to get near-bandwidth performance from tcp across a high-delay, high-bandwidth satellite link. Mike Karels has made the first kernel changes (of a set of several) that should allow us to expand the tcp window size beyond 64KB (oops, I'll probably need to write an rfc for a new "large window" tcp option. Looks like I'm signing up for too much writing & not enough fun stuff). I've sketched out an algorithm that should detect packet loss in much less than two round-trip times. The algorithm includes an estimator that detects that packets are seldom reordered or duplicated on this path, then a threshold detector that judges whether duplicate acks indicate dropped packet(s). The algorithm will always retransmit in about one rtt. Its big advantage is that it doesn't require a high resolution clock for good performance. With this and Selective Acknowledgments, we should have all the pieces we need.

-- I also finally realized that tcp round-trip timing is trivial if you have the equivalent of an ICMP echo at the transport level. I.e., if the sender can put a timestamp (or other arbitrary data) in the tcp header which the receiver just reflects back.
This removes all the ambiguity on rtt after retransmission and saves the sender a lot of timer processing. I sketched out how the sender would work if this option existed, then used that sketch to spec receiver behavior. I think I have an algorithm that will work well, removes a lot of code from the sender, and adds just a bit of code to the receiver. But testing will probably require another tcp initial option for the receiver to announce it has the "reflect" capability (another rfc?).

That's all I can think of. It's been a slow month (but exciting - my car caught fire and burned up while I was standing next to it, pumping gas). Cheers.

- Van

Bob Braden

INTERNET ARCHITECTURE

The INARC list was quiet this month. Several issues of importance, including routing and congestion-control issues, were discussed on other lists. Meeting plans will be announced shortly after the next IAB meeting in early July.

Dave Mills

INTERNET ENGINEERING

Task Force activity has slowed during the month of June. There will be a meeting in July during which several working groups will meet.

Phil Gross

INTERNET MANAGEMENT

No meetings were held in June. Cerf is late in distributing the synthesized summary of the 3-day effort in May.

Vint Cerf

PRIVACY

Minutes of the March-April Privacy Task Force meeting were distributed to the membership. John Linn provided explanations and clarifications to several parties engaged in implementation of privacy-enhanced mail mechanisms per RFC 989. Our next meeting is set for 28-29 July at NBS. The second day of this meeting will be held as a joint session with an ISO working group which is examining approaches to electronic mail privacy and security.

John Linn

ROBUSTNESS AND SURVIVABILITY

No progress to report.

Jim Mathis

SCIENTIFIC COMPUTING

No report received.

SECURITY

No report received.

TACTICAL INTERNET

No report received.
TESTING AND EVALUATION

No report received.