Branch-and-Reduce Exponential/FPT Algorithms in Practice: A Case Study of Vertex Cover [chapter]

Takuya Akiba, Yoichi Iwata
2015 Proceedings of the Seventeenth Workshop on Algorithm Engineering and Experiments (ALENEX)
We investigate the gap between theory and practice for exact branching algorithms. In theory, branch-and-reduce algorithms currently have the best time complexity for numerous important problems. In practice, however, state-of-the-art methods are based on different approaches, and the empirical efficiency of such theoretical algorithms has seldom been investigated, probably because their plethora of complex reduction rules makes them appear inefficient. In this paper, we
design a branch-and-reduce algorithm for the vertex cover problem using the techniques developed for theoretical algorithms and compare its practical performance with other state-of-the-art empirical methods. The results indicate that branch-and-reduce algorithms are actually quite practical and competitive with other state-of-the-art approaches for several kinds of instances, thus showing the practical impact of theoretical research on branching algorithms.

We conduct experiments on a variety of instances and compare our algorithm with two state-of-the-art empirical methods: a branch-and-cut method using a commercial integer programming solver, CPLEX, and a branch-and-bound method called MCS [19]. Although the rules in our algorithm are developed for theoretical purposes rather than tailored to specific instances, the results show that our algorithm is quite practical and competitive with other state-of-the-art approaches in several cases.

Relations to Theoretical Research on Exact Algorithms for Vertex Cover

We introduce recent theoretical research on exact algorithms for Vertex Cover. Two types of research exist: exact exponential algorithms, which analyze the exponential complexity with respect to the number of vertices, and FPT algorithms, which introduce a parameter to the problem and analyze the parameterized complexity with respect to both the parameter and the graph size.

First, we explain how such algorithms are designed and analyzed using a simple example. Consider a very simple algorithm that selects a vertex v and branches into two cases: either 1) include v in the vertex cover, or 2) discard v and include all of its neighbors in the vertex cover. Clearly, this algorithm runs in O*(2^n) time, where the O* notation hides factors polynomial in n. Can we prove a better complexity? The answer is no: when the graph is a set of n isolated vertices, the algorithm must branch on every vertex, which takes Ω*(2^n) time.
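The simple two-way branching scheme just described can be sketched in a few lines of Python. This is an illustrative toy with names of our own choosing, not the paper's branch-and-reduce algorithm:

```python
def _remove(adj, drop):
    # Return a copy of the graph with the vertices in `drop` deleted.
    return {u: nb - drop for u, nb in adj.items() if u not in drop}

def min_vertex_cover(adj):
    """Size of a minimum vertex cover of an undirected graph given as
    an adjacency dict {vertex: set_of_neighbors}. Runs in O*(2^n)."""
    # Pick any vertex that still has an incident edge.
    v = next((u for u, nb in adj.items() if nb), None)
    if v is None:  # no edges left: the empty cover suffices
        return 0
    # Branch 1: put v into the cover and delete it from the graph.
    take_v = 1 + min_vertex_cover(_remove(adj, {v}))
    # Branch 2: discard v, so all of its neighbors must enter the cover.
    take_nbrs = len(adj[v]) + min_vertex_cover(_remove(adj, {v} | adj[v]))
    return min(take_v, take_nbrs)
```

On a triangle this returns 2, and on a path a-b-c it returns 1 (taking the middle vertex).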
To avoid this worst case, we can add the following reduction rule: if the graph is not connected, solve each connected component separately. Now we can assume that v has degree at least one. Then, in the second branch, where v is discarded and its neighbors are included, the number of remaining vertices decreases by at least two. Thus, by solving the recurrence T(n) ≤ T(n−1) + T(n−2), we can prove a complexity of O*(1.6181^n).

The worst case now occurs when we repeatedly select a vertex of degree one. Note that if n is at least three, a connected graph always contains a vertex of degree at least two. Thus, by adding the following branching rule, we can avoid this worst case: select a vertex of maximum degree. Now we can assume that v has degree at least two, and by solving the recurrence T(n) ≤ T(n−1) + T(n−3), we obtain a complexity of O*(1.4656^n). Continuing this process, we create increasingly complex rules to avoid the current worst case and improve the complexity; consequently, the theoretically fastest algorithms involve a number of complicated rules. Although much of the current research uses a more sophisticated analytical tool called the measure and conquer analysis [7], the design process is basically the same.

As for exact exponential algorithms, since Fomin, Grandoni, and Kratsch [7] gave an O*(1.2210^n)-time algorithm by developing the measure and conquer analysis, several improved algorithms have been developed [13, 3, 22]. Since improving the complexity on sparse graphs is known to also improve the complexity on general graphs [3], algorithms for sparse graphs have also been well studied [17, 3, 22]. Among these algorithms, we use rules from the algorithm for general graphs by Fomin et al. [7] and the algorithm for sparse graphs by Xiao and Nagamochi [22]. These rules are also contained in many of the other algorithms.
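The exponential bases quoted above (1.6181 from T(n) ≤ T(n−1) + T(n−2), and 1.4656 from T(n) ≤ T(n−1) + T(n−3)) arise as the unique root c > 1 of the equation Σ_i c^(−d_i) = 1, where the d_i are the amounts by which the measure decreases in each branch. A small sketch (function name ours) that recovers these constants numerically by bisection:

```python
def branching_factor(decreases, lo=1.0, hi=2.0, iters=100):
    """Root c > 1 of sum(c**(-d) for d in decreases) == 1.
    A recurrence T(n) <= sum_i T(n - d_i) then gives T(n) = O*(c**n).
    The default bracket [1, 2] covers the recurrences discussed here."""
    # f(c) = sum(c**-d) is strictly decreasing for c > 1,
    # so plain bisection suffices.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(mid ** -d for d in decreases) > 1:
            lo = mid  # f(mid) > 1: the root lies to the right
        else:
            hi = mid
    return hi
```

For example, branching_factor([1, 2]) ≈ 1.618 (the golden ratio) and branching_factor([1, 3]) ≈ 1.4656, matching the bounds above.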
We also develop new rules inspired by the satellite rule presented by Kneis, Langer, and Rossmanith [13]. Since our algorithm completely contains the rules of the algorithm by Fomin et al., our algorithm can also be shown to run in O*(1.2210^n) time.

As for FPT algorithms, Vertex Cover has been studied under various parameterizations. Among them, the difference between the LP lower bound and the IP optimum is a recently developed parameter.
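For intuition on this parameter: the LP relaxation minimizes Σ x_v subject to x_u + x_v ≥ 1 for every edge uv and 0 ≤ x_v ≤ 1, and by the Nemhauser–Trotter theorem it always admits a half-integral optimum (all values in {0, 1/2, 1}). On tiny graphs the LP bound can therefore be computed by brute force over half-integral assignments; the sketch below (function name ours) is exponential and purely illustrative:

```python
from itertools import product

def lp_vertex_cover(vertices, edges):
    """LP optimum of the vertex-cover relaxation, found by exhaustive
    search over half-integral assignments (exact by half-integrality,
    but exponential -- demo use only)."""
    best = float('inf')
    for xs in product((0.0, 0.5, 1.0), repeat=len(vertices)):
        x = dict(zip(vertices, xs))
        if all(x[u] + x[v] >= 1 for u, v in edges):
            best = min(best, sum(xs))
    return best
```

On a triangle the LP optimum is 1.5 (all vertices set to 1/2) while the IP optimum is 2, so the parameter in question equals 0.5 there.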
doi:10.1137/1.9781611973754.7 dblp:conf/alenex/AkibaI15