Annotating user-defined abstractions for optimization

D. Quinlan, M. Schordan, R. Vuduc, Qing Yi
Proceedings of the 20th IEEE International Parallel & Distributed Processing Symposium (IPDPS 2006)
Although conventional compilers implement a wide range of optimization techniques, they frequently miss opportunities to optimize the use of abstractions, largely because they are not designed to recognize and use the relevant semantic information about such abstractions. In this position paper, we propose a set of annotations to help communicate high-level semantic information about abstractions to the compiler, thereby enabling the large body of traditional compiler optimizations to be applied to the use of those abstractions. Our annotations explicitly describe properties of abstractions that are needed to guarantee the applicability and profitability of a broad variety of such optimizations, including memoization, reordering, data layout transformations, and inlining and specialization.

The example used in the paper is a Mesh abstraction whose compute() routine loops over the mesh edges:

    class Mesh {
    public:
        Edge* get_edge(int i);
        Node* get_node(int i);
        int   node_size();
        int   edge_size();
    };

    void compute(Mesh& m, double a) {
        for (int i = 0; i < m.edge_size(); ++i) {
            Edge* e = m.get_edge(i);
            // ... per-edge work that evaluates both endpoint nodes of e ...
        }
    }

Figure 2. compute() memoized.

    void compute_optimized(Mesh& m, double a) {
        vector<double> eval_precomp(m.node_size());
        // Evaluate each node once and cache the result, indexed by node id.
        for (int i = 0; i < m.node_size(); ++i) {
            Node* n = m.get_node(i);
            eval_precomp[n->id()] = n->eval(a);
        }
        // The edge loop reuses the cached per-node values instead of re-calling eval().
        for (int i = 0; i < m.edge_size(); ++i) {
            Edge* e = m.get_edge(i);
            // ...
            double x = eval_precomp[e->node1()->id()];
            double y = eval_precomp[e->node2()->id()];
            bar(x, y);
            // ...
        }
    }
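What makes this rewrite safe is exactly the semantic information the annotations supply. As a rough sketch only, with hypothetical markings rather than the paper's actual annotation syntax, the facts a compiler would need about this abstraction are roughly: the accessors are side-effect free, Node::eval() depends only on the node and its argument, and node ids densely index the range [0, node_size()).

    // Hypothetical sketch: the "side-effect free" / "pure" markings below are
    // illustrative comments, not the paper's annotation language.
    class Mesh {
    public:
        Edge* get_edge(int i);   // side-effect free; result determined by (this, i)
        Node* get_node(int i);   // side-effect free
        int   node_size();       // side-effect free; node ids densely cover [0, node_size())
        int   edge_size();       // side-effect free
    };

    class Edge {
    public:
        Node* node1();           // side-effect free endpoint accessors
        Node* node2();
    };

    class Node {
    public:
        int    id();             // side-effect free
        double eval(double a);   // pure in (this, a): the same node and argument always
                                 // yield the same value, so per-node results can be cached
    };

Given properties of this kind, precomputing n->eval(a) once per node is a legality-preserving memoization: the evaluated value is invariant across the edge loop, and the dense id range makes a flat vector a correct and cheap cache, which is what compute_optimized() exploits.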
doi:10.1109/ipdps.2006.1639722 dblp:conf/ipps/QuinlanSVY06 fatcat:7naceka5qvfldnbdhklqd2r2me