Supporting systolic and memory communication in iWarp

Shekhar Borkar, Craig Peterson, Jim Susman, Jim Sutton, John Urbanski, Jon Webb, Robert Cohn, George Cox, Thomas Gross, H. T. Kung, Monica Lam, Margie Levine (+2 others)
Proceedings of the 17th Annual International Symposium on Computer Architecture (ISCA '90), 1990
iWarp is a parallel architecture developed jointly by Carnegie Mellon University and Intel Corporation. The iWarp communication system supports two widely used interprocessor communication styles: memory communication and systolic communication. This paper describes the rationale, architecture, and implementation of the iWarp communication system. The sending or receiving processor of a message can perform either memory or systolic communication. In memory communication, the entire message is
buffered in the local memory of the processor before it is transmitted or after it is received. Therefore communication begins or terminates at the local memory. For conventional message passing methods, both sending and receiving processors use memory communication. In systolic communication, individual data items are transferred as they are produced, or are used as they are received, by the program running at the processor. Memory communication is flexible and well suited for general computing, whereas systolic communication is efficient and well suited for speed-critical applications. A major achievement of the iWarp effort is the derivation of a common design that satisfies the requirements of both the systolic and memory communication styles. This is made possible by two important innovations in communication: (1) program access to communication and (2) logical channels. The former allows programs to access data as they are transmitted and to redirect portions of messages to different destinations efficiently. The latter increases the connectivity between the processors and guarantees communication bandwidth for classes of messages. These innovations have provided a focus for the iWarp architecture. The result is a communication system that provides a total bandwidth of 320 MBytes/sec and that is integrated on a single VLSI component with a 20 MFLOPS plus 20 MIPS long instruction word computation engine.

The iWarp component consists of three autonomous subsystems, as depicted in Figure 1. The computation agent, which executes programs, can deliver 20 (or 10) MFLOPS for single (or double) precision calculations plus 20 MIPS for integer/logic operations. The communication agent, which implements iWarp's communication system, can sustain an aggregate intercell communication bandwidth of 320 MBytes/sec by using four input and four output busses. The memory agent, which provides a high-bandwidth interface to the local memory, can transfer streams of data into or out of the communication agent at a rate of 160 MBytes/sec.

The first silicon of the iWarp component was fabricated in December 1989. It consists of approximately 650,000 transistors and measures about 1.4 cm (551 mil) on a side. Figure 2 shows a photo of the component, together with a floor plan that highlights the major units. The iWarp component operates at a frequency of 20 MHz, with the exception that data is transferred between processors at twice that frequency (40 MHz).

Three iWarp demonstration systems will be delivered to Carnegie Mellon by the Fall of 1990. Each of these systems consists of an 8x8 torus of iWarp cells, delivering more than 1.2 GFLOPS. The system can be readily expanded to include up to 1,024 cells for an aggregate computing power of over 20 GFLOPS and a communication bandwidth of 160 GBytes/sec. The software for the initial iWarp systems includes optimizing compilers for C and FORTRAN as well as parallel program generators such as Apply [11] for image processing. A resident run-time system on each cell supports systolic and memory communication. Included in this run-time system are the message-passing services of the Nectar communication system, originally developed for Carnegie Mellon's Nectar network [3].
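To make the contrast between the two styles concrete, here is a minimal C sketch. The primitives it uses (msg_send, stream_read_f32, stream_write_f32, and the stream_t handle type) are hypothetical stand-ins, not the actual iWarp run-time interface; they only illustrate where the data lives in each style.

```c
#include <stddef.h>

/* Hypothetical primitives (illustrative declarations only). */
typedef int stream_t;                                      /* handle to a pathway        */
void  msg_send(int dest_cell, const void *buf, size_t n);  /* whole-message send         */
float stream_read_f32(stream_t in);                        /* next word from a pathway   */
void  stream_write_f32(stream_t out, float x);             /* forward a word downstream  */

#define N 1024

/* Memory communication: the entire message is staged in local memory
 * before transmission (and, symmetrically, lands in local memory on the
 * receiving side) -- conventional message passing. */
void memory_style(float *local_buf, int dest_cell)
{
    for (size_t i = 0; i < N; i++)
        local_buf[i] = (float)i * 0.5f;                 /* produce all results first   */

    msg_send(dest_cell, local_buf, N * sizeof(float));  /* then send the whole buffer  */
}

/* Systolic communication: each item is used as it arrives and each result
 * is forwarded as soon as it is produced; no full-message buffer ever
 * exists in local memory. */
void systolic_style(stream_t in, stream_t out)
{
    for (size_t i = 0; i < N; i++) {
        float x = stream_read_f32(in);       /* word arrives from a neighbor     */
        stream_write_f32(out, 2.0f * x);     /* use it and pass the result along */
    }
}
```

The point of the contrast is the absence of the N-element buffer in the systolic version: the program touches each word exactly when the communication system delivers it.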
This paper describes in depth the rationale, concepts, and realization of the iWarp communication agent. In particular, we describe the common design that supports both systolic and memory communication, and the innovative architectural features needed to support these different types of communication efficiently. This paper complements earlier iWarp papers on other topics: the iWarp overview [5], architecture and compiler tradeoffs for the computation agent [6], and networks that can be formed on an iWarp array [9]. General discussions of interprocessor communication methods can be found in [14], which describes a taxonomy of communication methods and uses the iWarp communication methods among its examples. Further discussion of systolic communication can be found in [12].

The organization of the paper is as follows. We first describe the fundamental differences between systolic and memory communication and point out that each of these two styles of communication has its own merits. We then discuss the two unique architectural concepts in the iWarp communication system: (1) program access to communication and (2) logical channels. These innovations were motivated originally by the needs of systolic communication, but as described in Section 3, they are also useful in improving the performance of memory communication. We discuss the details of the iWarp communication system in Sections 4 through 7, starting with the physical intercell connections, the implementation of logical channels, routing and bandwidth reservation, and finally, the communication agent's interaction with the computation and memory agents. We close the paper with some performance figures on the latency of communication, and some concluding remarks.

Communication agent interaction with the computation and memory agents

There are two types of interaction between the communication agent and the rest of the system: data and control. Data in a message can be accessed directly by the computation agent, or it can be spooled through memory by the memory agent. On the control side, the computation agent informs the communication agent of the events it is interested in, and the communication agent notifies the computation agent when an event occurs. In addition, the computation agent can redirect messages by changing the connection of the pathways in the communication agent's logical crossbar.
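As a rough illustration of these two kinds of interaction, the C sketch below pairs the control path (registering interest in an event and waiting for notification) with the two data paths (direct program access versus spooling through local memory) and with message redirection. All of the names here (enable_event, wait_for_event, gate_read_f32, spool_to_memory, connect_pathway) are assumptions made for illustration; they do not correspond to the actual iWarp instruction set or run-time services.

```c
#include <stddef.h>

typedef int gate_t;     /* hypothetical handle to an incoming message gate        */
typedef int channel_t;  /* hypothetical handle to a logical-crossbar pathway port */

/* Hypothetical primitives (illustrative declarations only). */
void  enable_event(int event_id);                      /* tell the comm. agent which events matter  */
int   wait_for_event(void);                            /* block until the comm. agent notifies us   */
float gate_read_f32(gate_t g);                         /* computation agent reads a word directly   */
void  spool_to_memory(gate_t g, void *buf, size_t n);  /* memory agent spools the body into memory  */
void  connect_pathway(channel_t in, channel_t out);    /* reconnect pathways in the logical crossbar */

enum { EV_MESSAGE_ARRIVED = 1 };

#define BODY_WORDS 1024

void handle_incoming(gate_t g, channel_t in, channel_t forward, float *buf)
{
    /* Control: register interest, then wait for the communication agent's notification. */
    enable_event(EV_MESSAGE_ARRIVED);
    if (wait_for_event() != EV_MESSAGE_ARRIVED)
        return;

    /* Data, option 1: the computation agent accesses the leading word of the message itself. */
    float header = gate_read_f32(g);

    if (header < 0.0f) {
        /* Control: redirect the remainder of the message to another cell by
         * changing the pathway connection in the logical crossbar. */
        connect_pathway(in, forward);
    } else {
        /* Data, option 2: let the memory agent spool the message body into local memory. */
        spool_to_memory(g, buf, BODY_WORDS * sizeof(float));
    }
}
```

The split mirrors the division of labor described above: the computation agent decides what should happen, while the communication and memory agents move the data without further involvement from the program.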
doi:10.1145/325164.325116 dblp:conf/isca/BorkarCCGKLLMMPSSUW90 fatcat:u34kdq5fdjei5ngq4vop4h34hq