Bounding the optimum of constraint optimization problems
Lecture Notes in Computer Science
Solving constraint optimization problems is computationally so expensive that it is often impossible to provide a guaranteed optimal solution, either because the problem is too large or because time is bounded. In these cases, local search algorithms usually provide good solutions. However, even if an optimality proof is unreachable, it is often desirable to have some guarantee on the quality of the solution found, in order to decide whether it is worthwhile to spend more time on the problem. This paper is dedicated to the production of intervals that bound as precisely as possible the optimum of Valued Constraint Satisfaction Problems (VCSPs). Such intervals provide an upper bound on the distance of the best available solution to the optimum, i.e., on the quality of the optimization performed. Experimental results on random VCSPs and on real problems are given.

Motivations

The Constraint Satisfaction Problem framework is very convenient for representing and solving various problems related to Artificial Intelligence and Operations Research: scheduling, assignment, design... But many real overconstrained problems are more faithfully translated to Constraint Optimization Problems and, more precisely, to Valued Constraint Satisfaction Problems. The classical objective of constraint satisfaction is replaced by an objective of constraint violation minimization. Both theoretical and practical observations show that these optimization problems are much more difficult to tackle than satisfaction problems. The construction of a provably optimal solution is often out of reach, either when the problem is too large and difficult, or when time and resources are bounded. Exact or complete methods, such as Branch and Bound, are able to produce both an optimal solution and a proof of optimality. But, because of their exponential worst-case behavior, they may be extremely time consuming. Moreover, it has been experimentally observed that, due to their systematic way of exploring the search space, the quality of their intermediate solutions is usually very poor. Due to their opportunistic way of exploring the search space, approximate or incomplete methods, based on heuristic or stochastic Local Search mechanisms, usually provide good solutions within a reasonable time. Naturally, the value of the best solution found so far is an upper bound on the optimum.
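To make the minimization objective concrete, the following is a minimal sketch of an additive (weighted) VCSP, where a solution's cost is the sum of the weights of the constraints it violates. The names (`constraints`, `cost`) and the tiny example problem are illustrative assumptions, not taken from the paper; the exhaustive search is feasible only because the problem is tiny.

```python
from itertools import product

# Illustrative weighted VCSP: each constraint is a (weight, predicate) pair
# over a complete assignment (a dict mapping variable -> value).
constraints = [
    (3, lambda a: a["x"] != a["y"]),  # weight 3: x and y must differ
    (1, lambda a: a["y"] != a["z"]),  # weight 1: y and z must differ
    (2, lambda a: a["x"] != a["z"]),  # weight 2: x and z must differ
]
domains = {"x": [0, 1], "y": [0, 1], "z": [0, 1]}

def cost(assignment):
    """Sum of the weights of violated constraints (the value to minimize)."""
    return sum(w for w, ok in constraints if not ok(assignment))

# The cost of ANY solution found (e.g., by local search) upper-bounds the
# optimum. Here exhaustive enumeration gives the true optimum for comparison:
# with 2 values, the three "all different" constraints cannot all be satisfied,
# so the cheapest violation (weight 1) is the optimum.
best = min(
    (dict(zip(domains, vals)) for vals in product(*domains.values())),
    key=cost,
)
print(cost(best))  # -> 1
```

Local search would explore only some of these assignments, reporting the best cost seen so far as its upper bound on the optimum.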
But these algorithms do not provide any information about the distance between this value and the optimum. By themselves, they cannot prove the optimality of a solution and may waste a lot of time trying to improve an already optimal solution: a wasteful behavior if several problems have to be tackled in a time- or resource-bounded context. This situation can be largely improved by computing non-trivial lower bounds on the optimum: the distance between, on the one hand, the value of the best solution found so far, and, on the other hand, the best lower bound produced so far, provides an upper bound on the distance to the optimum. This information can be used to decide either to stop the optimization process, or to spend more time in order to get a tighter bounding of the optimum.

This paper is organized as follows: in Section 1, we introduce the Valued Constraint Satisfaction Problem framework; in Section 2, we show how problem simplifications can be used to produce lower bounds; finally, in Section 3, we present the results of experiments performed both on random VCSPs in a time-bounded context and on large real problems.
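The stopping criterion described above can be sketched in a few lines. The helper name `optimality_gap` is hypothetical, not from the paper; it only restates the interval argument: if the optimum lies between the best lower bound and the best solution value, their difference bounds the distance to the optimum.

```python
def optimality_gap(best_value, lower_bound):
    """Upper bound on the distance of best_value to the (unknown) optimum.

    best_value:  cost of the best solution found so far (upper bound on optimum)
    lower_bound: best proven lower bound on the optimum
    """
    return best_value - lower_bound

# If local search found a solution of cost 12 and a lower bound of 10 has been
# proved, the best solution is at most 2 above the optimum:
print(optimality_gap(12, 10))  # -> 2

# A gap of 0 is an optimality proof, so the search can safely stop:
print(optimality_gap(12, 12))  # -> 0
```

In a resource-bounded setting, one would stop as soon as this gap falls below an acceptable threshold, rather than waiting for a full optimality proof.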