A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2003; you can also visit <a rel="external noopener" href="http://dark-panic.rutgers.edu:80/~edpin/qualifier/papers/compilers.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
<i title="Institute of Electrical and Electronics Engineers (IEEE)">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/dsrvu6bllzai7oj3hktnc5yf4q" style="color: black;">Computer</a>
Instruction-level parallelism allows a sequence of instructions derived from a sequential program to be parallelized for execution on multiple pipelined functional units. If industry acceptance is a measure of importance, ILP has blossomed. It now profoundly influences the design of almost all leading-edge microprocessors and their compilers. Yet the development of ILP is far from complete, as research continues to find better ways to use more hardware parallelism over a broader class of applications.<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/2.642817">doi:10.1109/2.642817</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/sqa3irdg3zcqzftmok3rpsv65a">fatcat:sqa3irdg3zcqzftmok3rpsv65a</a> </span>

WHY ILP?

With ever-increasing clock speeds, leading-edge microprocessors are approaching technological limits to processor cycle time. Using ILP improves performance and exploits the additional chip area provided by rapidly increasing chip density. ILP's key advantage is that it exploits parallelism without requiring the programmer to rewrite existing applications. ILP's success is due to its ability to overlap the execution of individual operations without explicit synchronization. A wealth of opportunities to parallelize programs exists at the fine-grained operation level.

ILP's automatic nature is attractive because it works with current software programs. Despite the rise of novel computer architectures, such as multiprocessors, today's applications are still programmed sequentially, and many will never be rewritten. Sequential performance has enormous economic value, which has broadly stimulated commercial interest in ILP.

In a hardware-centric implementation, ILP on a superscalar processor executes a sequential instruction stream: hardware dynamically detects opportunities for parallel execution and schedules operations to exploit available resources. A software-centric approach instead employs a very long instruction word (VLIW) processor and relies on a compiler to statically parallelize and schedule code; such a partitioning simplifies the hardware. (For a short discussion of hardware architectures, see the "Architectures and ILP" sidebar.)

As chip densities increase, ILP techniques that were previously of use only on supercomputers and minisupercomputers are now broadly applicable to inexpensive, general-purpose computers. Yet exploiting ILP across a diverse set of performance-critical applications will require renewed emphasis on the role of the compiler.

CURRENT ROLE OF ILP COMPILERS

ILP compilers enhance performance by customizing application code to a target processor.
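The static scheduling described above can be illustrated with a minimal sketch (not from the article): a greedy list scheduler that packs independent operations from a dependence graph into the issue slots of a hypothetical 2-issue machine, the way a VLIW compiler would at compile time. Operation names, the single-cycle latency, and the issue width are all illustrative assumptions.

```python
def list_schedule(ops, deps, issue_width=2):
    """Greedy list scheduling sketch (assumes every op takes one cycle).

    ops: list of operation names in program order.
    deps: dict mapping an op to the set of ops it depends on.
    Returns a list of cycles, each a list of ops issued together.
    """
    done, schedule = set(), []
    remaining = list(ops)
    while remaining:
        # An op is ready once all of its predecessors have completed.
        ready = [op for op in remaining if deps.get(op, set()) <= done]
        slot = ready[:issue_width]  # fill up to issue_width parallel slots
        schedule.append(slot)
        done.update(slot)
        remaining = [op for op in remaining if op not in slot]
    return schedule

# a = x + y; b = x * z; c = a + b  -- a and b are independent of each other.
sched = list_schedule(["a", "b", "c"], {"c": {"a", "b"}})
print(sched)  # a and b issue in the same cycle; c must wait for both
```

The point of the sketch is that the parallelism is discovered and committed entirely before execution, which is exactly the work a superscalar processor would otherwise repeat in hardware on every pass through the code.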
Compilers use global knowledge of the application program not readily available to a hardware interpreter, as well as a description of the target machine architecture, to guide machine-specific optimizations. Compiler-performed static optimization and scheduling eliminates the complex processing needed to parallelize code, which the hardware would otherwise perform during execution. ILP compilation is now increasingly important across low- to high-end products and across general- and special-purpose applications. Compiler techniques originally developed for more specialized products such as minisupercomputers now find broader use in general-purpose workstations. 1,2 Newly designed embedded processors by Philips, Texas Instruments, and others deliver performance using ILP compiler techniques. These uses show that ILP compiler research has already had some commercial success.

ILP compilers originally targeted high performance for loop-oriented scientific applications in which parallelism was abundant and easily recognized. These compilers use trace scheduling or software pipelining to accelerate a broad class of loops with greater efficiency than earlier vector processors. 3,4
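Software pipelining, mentioned above, can be sketched with a toy model (not from the article): a loop body split into three assumed stages (load, mul, store) is overlapped so that, in steady state, one cycle runs stages of three different iterations at once. This is the kind of overlapped schedule an ILP compiler emits statically; the stage names and initiation interval of 1 are illustrative assumptions.

```python
def pipelined_trace(n):
    """Sketch of a software-pipelined loop of n iterations.

    The 3-stage body (load, mul, store) starts a new iteration every
    cycle. Returns, per cycle, the (stage, iteration) pairs that run
    together under that schedule.
    """
    stages = ["load", "mul", "store"]
    trace = []
    # The pipeline needs len(stages) - 1 extra cycles to drain.
    for cycle in range(n + len(stages) - 1):
        group = [(s, cycle - k) for k, s in enumerate(stages)
                 if 0 <= cycle - k < n]
        trace.append(group)
    return trace

for cycle, group in enumerate(pipelined_trace(4)):
    print(cycle, group)
```

By cycle 2 the schedule reaches steady state: the load of iteration 2, the multiply of iteration 1, and the store of iteration 0 all issue together, which is where the speedup over a strictly sequential loop comes from.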
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20030711223156/http://dark-panic.rutgers.edu:80/~edpin/qualifier/papers/compilers.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/62/0b/620b59fc90c62f9852e48dba96b06c6290f55dd0.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/2.642817"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>