Rigorous Estimation of Floating-Point Round-off Errors with Symbolic Taylor Expansions [chapter]

Alexey Solovyev, Charles Jacobsen, Zvonimir Rakamarić, Ganesh Gopalakrishnan
2015 Lecture Notes in Computer Science  
Abstract. Rigorous estimation of maximum floating-point round-off errors is an important capability central to many formal verification tools. Unfortunately, available techniques for this task often provide overestimates. Also, there are no available rigorous approaches that handle transcendental functions. We have developed a new approach called Symbolic Taylor Expansions that avoids this difficulty, and implemented a new tool called FPTaylor embodying this approach. Key to our approach is the use of rigorous global optimization, instead of the more familiar interval arithmetic, affine arithmetic, and/or SMT solvers. In addition to providing far tighter upper bounds of round-off error in a vast majority of cases, FPTaylor also emits analysis certificates in the form of HOL Light proofs. We release FPTaylor along with our benchmarks for evaluation.

The final publication was accepted to FM 2015 and is available at link.springer.com

Key to Our Approach. In a nutshell, the aforesaid difficulties arise because a tool attempts to abstract the "difficult" (nonlinear or transcendental) functions. Our new approach, called Symbolic Taylor Expansions (realized in the tool FPTaylor), side-steps these issues entirely as follows. (1) We view round-off errors as "noise," and compute Taylor expansions in a symbolic form. (2) In these symbolic Taylor forms, all difficult functional expressions appear as symbolic coefficients; they do not need to be abstracted.
(3) We then apply a rigorous global maximization method that has no trouble handling the difficult functions and can be executed sufficiently fast thanks to the ability to trade off accuracy for performance.

Let us illustrate these ideas using a simple example. First, we define the absolute round-off error as err_abs = |ṽ − v|, where ṽ is the result of the floating-point computation and v is the result of the corresponding exact mathematical computation. Now, consider estimating the worst-case absolute round-off error of t/(t + 1) computed in floating-point arithmetic, where t ∈ [0, 999] is a floating-point number. (Our goal here is to demonstrate the basic ideas of our method; pertinent background is in Sect. 3.) Let ⊘ and ⊕ denote the floating-point operations corresponding to / and +.

Suppose interval abstraction were used to analyze this example. The round-off error of t ⊕ 1 can be estimated by 512ε, where ε is the machine epsilon (which bounds the maximum relative error of basic floating-point operations such as ⊕ and ⊘) and 512 = 2⁹ is the largest power of 2 less than 1000 = 999 + 1. Interval abstraction replaces the expression d = t ⊕ 1 with the abstract pair ([1, 1000], 512ε), where the first component is the interval of all possible values of d and 512ε is the associated round-off error. Next we need to calculate the round-off error of t ⊘ d. It can be shown that one of the primary sources of error in this expression is the propagation of the error in t ⊕ 1 through the division operator. The propagated error is computed by multiplying the error in t ⊕ 1 by t/d². At this point, interval abstraction does not yield a satisfactory result, since it maximizes t/d² by setting the numerator t to 999 and the denominator d to 1. Therefore, the total error bound is computed as 999 × 512ε ≈ 512000ε. The main weakness of interval abstraction is that it does not preserve relationships between variables (e.g., the two t's may be independently set to 999 and 0).
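The interval-abstraction computation above can be replayed in a short sketch. This is our own illustration, not part of FPTaylor; all variable names are ours, and we fix IEEE double precision so that ε = 2⁻⁵³.

```python
# Replaying the interval-abstraction error bound for t (/) (t (+) 1),
# t in [0, 999], in IEEE double precision.
eps = 2.0 ** -53   # machine epsilon: relative error bound per operation

# Round-off error of d = t (+) 1: |t + 1| <= 1000, so the absolute
# error is at most 512 * eps, with 512 = 2**9 the largest power of
# two below 1000.
err_d = 512 * eps

# Abstract pair for d: interval of possible values plus error bound.
d_lo, d_hi = 1.0, 1000.0

# Error propagated into t (/) d is err_d * t / d**2. Interval
# abstraction maximizes t and minimizes d independently (t = 999,
# d = d_lo = 1), losing the correlation between t and d.
total = (999.0 / d_lo ** 2) * err_d    # = 999 * 512 * eps

print(total)   # about 5.7e-11, a large overestimate
```

Note how the final step multiplies two worst cases that cannot occur together: t = 999 forces d = t ⊕ 1 ≈ 1000, not 1.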
In the example above, the abstract representation of d was too coarse to yield a good final error bound (we suffer from the eager composition of abstractions). Affine arithmetic is more precise, since it remembers linear dependencies between variables, but it still does not handle our example well: the example contains division, a nonlinear operator for which affine arithmetic is known to be a poor fit. A better approach is to model the error at each subexpression position and globally solve for the maximal error, as opposed to merging the worst cases of local abstractions, as happens in the interval abstraction usage above. Following this
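For the same example, the symbolic-Taylor idea can be sketched as follows. We model each rounded operation with a noise variable |eᵢ| ≤ ε, so ṽ = t(1 + e₂) / ((t + 1)(1 + e₁)), and hand-derive the first-order coefficients −t/(t+1) and t/(t+1) by differentiating in e₁ and e₂ at zero. The grid search below is a crude stand-in for the rigorous global optimizer a real tool would use; this is our illustration, not FPTaylor's implementation.

```python
# Sketch of a symbolic Taylor expansion for v = t / (t + 1).
# Rounding model:  v~ = t*(1 + e2) / ((t + 1)*(1 + e1)),  |e_i| <= eps.
# First-order expansion:  v~ ≈ v + c1(t)*e1 + c2(t)*e2, where the
# coefficients stay symbolic in t — no abstraction of the division.
eps = 2.0 ** -53

def c1(t):   # coefficient of the addition's noise term
    return -t / (t + 1)

def c2(t):   # coefficient of the division's noise term
    return t / (t + 1)

# Globally maximize eps * (|c1| + |c2|) over t in [0, 999].
# A coarse grid search stands in for rigorous global optimization.
grid = (i * 999 / 100_000 for i in range(100_001))
bound = max(eps * (abs(c1(t)) + abs(c2(t))) for t in grid)

print(bound)   # roughly 2.2e-16, versus ~5.7e-11 from intervals
```

Because the coefficients keep t symbolic, the optimizer sees that both noise terms are scaled by the same t/(t+1) ≤ 999/1000, which is how the correlation lost by the interval abstraction is preserved.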
doi:10.1007/978-3-319-19249-9_33