A technique for recursive invariance detection and selective program specialization [chapter]

F. Giannotti, M. Hermenegildo
1991 Lecture Notes in Computer Science  
This paper presents a technique for achieving a class of optimizations related to the reduction of checks within cycles. The technique uses both Program Transformation and Abstract Interpretation. After a first pass of an abstract interpreter which detects simple invariants, program transformation is used to build a hypothetical situation that simplifies some predicates to be executed within the cycle. This transformation implements the heuristic hypothesis that once conditional tests hold, they may continue to hold recursively. Specialized versions of predicates are generated to detect and exploit those cases in which the invariance may hold. Abstract interpretation is then used again to verify the truth of such hypotheses and confirm the proposed simplification. This allows optimizations that go beyond those possible with only one pass of the abstract interpreter over the original program, as is normally the case. It also allows selective program specialization using a standard abstract interpreter not specifically designed for this purpose, thus simplifying the design of this already complex module of the compiler. In the paper, a class of programs amenable to such optimization is presented, along with some examples and an evaluation of the proposed techniques in application areas such as floundering detection and the reduction of run-time tests in automatic logic program parallelization. The analysis of the examples presented has been performed automatically by an implementation of the technique using existing abstract interpretation and program transformation tools.

This paper presents a technique (developed independently of [26] and [17]) which attempts similar results but by quite different means. We assume the existence of an abstract interpreter. We also assume that this interpreter uses a domain that is adequate for the type of optimizations that the compiler performs. Based on these assumptions, and rather than asking the compiler for a "wish-list" of desired optimizations, we develop an abstract-domain-related notion ("abstract executability") which guides the process of specialization and invariant detection. Also, rather than modifying the abstract interpreter to be aware of the specialization process, we leave the interpreter unmodified. Rather, we propose to perform the program specialization and simplification steps externally to the interpreter while still achieving our objective of extracting repetitive run-time tests to the outermost possible level. This is achieved by using program transformation to build a hypothetical situation that would reduce the predicates to be executed in the cycle according to their abstract executability, and then using the abstract interpreter again to verify the truth of the hypothesis and (possibly) confirm the proposed simplification.

Consider the following conditional:

    p(X) ← q(X), cond(test(X), p(X), r(X)).

Program transformation can be used to build a hypothetical situation that reduces the predicates to be executed in the cycle. For example, we can hypothesize that once the test in the conditional succeeds, it will always succeed. A correct program transformation under this hypothesis would be

    p(X) ← q(X), cond(test(X), p1(X), r(X)).
    p1(X) ← q(X), p1(X).

Of course, this transformation is legal only if we are sure that test(X) will remain true in all the recursive calls of p in the then branch. If that is the case, the relevance of the obtained optimization will depend on the complexity of test(X) and on the number of nested recursive calls in the then branch. The interesting issue is whether the abstract interpreter can derive that test(X) will be true in all the recursive calls. This depends on the capabilities of the abstract interpreter, and precisely these capabilities can be used as a guideline for when to perform the transformation.
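As a concrete illustration (our own sketch in standard Prolog syntax, not an example from the paper; the predicates scale/3 and scale1/3 and the use of number/1 as the test are assumptions), consider a loop in which the test guards an argument that is passed unchanged through the recursion:

    % Original loop: number(F) is re-checked at every iteration.
    scale(_, [], []).
    scale(F, [X|Xs], [Y|Ys]) :-
        (   number(F)                  % run-time test inside the cycle
        ->  Y is X * F,
            scale(F, Xs, Ys)
        ;   throw(error(type_error(number, F), scale/3))
        ).

The transformation described above introduces a specialized version for the case in which the test has already succeeded:

    % Transformed program under the hypothesis that, once number(F)
    % succeeds, it keeps succeeding in the recursive calls.
    scale(_, [], []).
    scale(F, [X|Xs], [Y|Ys]) :-
        (   number(F)
        ->  Y is X * F,
            scale1(F, Xs, Ys)          % switch to the specialized loop
        ;   throw(error(type_error(number, F), scale/3))
        ).
    scale1(_, [], []).
    scale1(F, [X|Xs], [Y|Ys]) :-
        Y is X * F,                    % number(F) reduced to true
        scale1(F, Xs, Ys).

Since F is passed unchanged through the recursion, a second analysis pass with a simple type or mode domain can infer that F is a number in every call to scale1/3, confirming the hypothesis: the test is executed once, at the entry of the cycle, instead of at every iteration.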
That is, given an abstract interpreter, a class of predicates that can be executed directly (reduced to true or false) on the information generated by the abstract interpreter can be identified. Then, rather than blindly formulating hypotheses and performing transformations, this class is used to select only potentially useful transformations. The abstract interpreter, run a second time on the transformed program to verify the truth of the hypothesis formulated, then has a chance of succeeding in its task. Our conviction is that such classes of predicates can easily be found for each abstract interpreter. The idea of leaving the abstract interpreter unmodified is motivated by the consideration that the interpreter is probably already a quite complex module which may be difficult to modify, and that there is therefore a practical advantage in using this module as is. This appears to be the case with most current implementations. In addition, this allows the use of several different abstract interpreters with only minor modifications to the rest of the system. Our description, thus, will be quite independent of the abstract interpreter, which will be considered as a "black box."

The paper is organized as follows: the following section (Section 2) recalls the basic ideas of Abstract Interpretation and introduces the concept of an and-or graph to represent the result of an abstract interpretation process. Section 2 also presents a class of predicates that may be executed at compile time by using the information collected by a generic abstract interpreter. In Sections 3 and 4 the and-or graph representation is exploited to describe the basic program transformation and optimization techniques proposed, based on the concept of abstract executability. The possibility of performing these optimizations using abstract interpretation and program transformation occurred to us while considering their implementation in the context of the abstract interpreter
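As an illustration of such a class of directly executable predicates (a sketch only; the three-valued mode domain and the predicate reduce/3 are assumptions made for this example, not definitions from the paper), built-ins such as ground/1, var/1 and nonvar/1 can be reduced at analysis time whenever the abstract value of their argument is precise enough:

    % reduce(+Goal, +AbstractValueOfArgument, -Reduction)
    % Abstract values: ground (definitely ground), free (definitely
    % unbound), any (no information).
    reduce(ground(_), ground, true).
    reduce(ground(_), free,   fail).
    reduce(var(_),    free,   true).
    reduce(var(_),    ground, fail).
    reduce(nonvar(_), ground, true).
    reduce(nonvar(_), free,   fail).
    % With abstract value 'any' no reduction applies and the goal must
    % remain in the residual program.

During specialization, a goal whose reduction is true is removed from the clause body, one whose reduction is fail prunes the corresponding branch, and goals with no applicable entry are left untouched.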
doi:10.1007/3-540-54444-5_109 fatcat:5uwkpzdhrrhvfpogoozwa2bk5a