Building and measuring a high performance network architecture [report]

William T.C. Kramer, Timothy Toole, Chuck Fisher, Jon Dugan, David Wheeler, William R Wing, William Nickless, Gregory Goddard, Steven Corbato, E. Paul Love, Paul Daspit, Hal Edwards (+13 others)
2001, unpublished
Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest-performance networks in the world. At SC2000, large-scale and complex local- and wide-area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high-performance computational and communication applications. The testbed incorporated many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high-performance networking technologies and a set of measurements that give insight into the networks of the future.

In essence, SCinet is a self-contained ISP that peers with all the major research and government networks.

Commodity Network

At the first level, several days before the show started, a commodity Internet network was brought up to connect offices, the Education Program, and the email facilities. This network was expanded to include all the meeting rooms and lecture areas, including the areas that webcast sessions, totaling more than 40 locations and over 300 drops. The network spanned about 200,000 square feet over three floors of the DCC. Most of the network drops were connected over existing Cat-5 cables installed in the DCC to switches at 100 megabits per second. These switches connected to the commodity router via multimode fiber. One connection, for the email services, was made at 1 Gbps using multimode fiber.

The DCC had an external 12 Mbps link provided by Qwestlink. For most conferences, this data rate is more than enough to support multiple events at any one time. Connections within the DCC were aggregated at an optical switch that was connected to a Cisco router managed by Qwest; the traffic then flowed over the Qwest backbone. Since the commodity network had to be up before the full SCinet network and had to operate until the conference closed, it was decided that the best commodity service would be provided using the DCC external connections.

The commodity network connected to a single SCinet router, a Foundry Networks NetIron 800. This router, denoted Conf-Rtr-1, peered with the DCC Cisco router via BGP. Three routers were involved in the commodity BGP peering: the SCinet Foundry, the DCC Cisco, and the Qwestlink Juniper. Logically, the BGP peering was between the SCinet Foundry and the Qwestlink Juniper, because the DCC Cisco did not have enough memory for the full tables. The DCC Cisco carried only a couple of static routes, including one for 140.221.128.0/17 (SCinet) and a default pointing to Qwestlink. Figure 2 shows how they were physically connected.
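The static-route arrangement on the DCC Cisco amounts to a longest-prefix-match decision between one specific route and a default. The sketch below is not from the report; it is a minimal Python illustration, assuming only the two routes described above, of how traffic destined for the SCinet block 140.221.128.0/17 would be steered toward the SCinet side while everything else follows the default toward Qwestlink. The next-hop labels are placeholders.

    import ipaddress

    # The two static routes described for the DCC Cisco: the SCinet /17 and a default.
    # Next-hop labels are placeholders for illustration only.
    ROUTES = [
        (ipaddress.ip_network("140.221.128.0/17"), "toward SCinet (Foundry NetIron 800, Conf-Rtr-1)"),
        (ipaddress.ip_network("0.0.0.0/0"), "toward Qwestlink (default)"),
    ]

    def next_hop(dst: str) -> str:
        """Return the next hop for a destination using longest-prefix match."""
        dst_addr = ipaddress.ip_address(dst)
        matches = [(net, hop) for net, hop in ROUTES if dst_addr in net]
        # Prefer the most specific (longest) matching prefix.
        _, best_hop = max(matches, key=lambda m: m[0].prefixlen)
        return best_hop

    print(next_hop("140.221.200.10"))  # inside 140.221.128.0/17 -> toward SCinet
    print(next_hop("198.51.100.7"))    # no specific match -> default toward Qwestlink

On the router itself the same behavior would come from two static route statements; the Python version only illustrates the matching logic.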
doi:10.2172/785261 fatcat:albbfq3ykbc4pisfshmnotct4m