## Search-Order Independent State Caching

Sami Evangelista, Lars Michael Kristensen

Book chapter, *Lecture Notes in Computer Science*, 2010

State caching is a memory reduction technique used by model checkers to alleviate the state explosion problem. It has traditionally been coupled with depth-first search to ensure termination. We propose and experimentally evaluate an extension of the state caching method to general state exploring algorithms, i.e., search algorithms that partition the state space into closed (visited) states, open (to visit) states, and unmet states, independently of the search order.

Following the definition of [5], we put in the family of General State Exploring Algorithms (GSEA) all algorithms that partition the state space into three sets: the set of open states that have been seen but not yet expanded (i.e., some of their successors may not have been generated); the set of closed states that have been seen and expanded; and the set of unseen states. DFS, BFS, and directed search algorithms [8] such as Best-First Search and A* are examples of such general state exploring algorithms. The principle of our extension is to detect cycles and guarantee termination by maintaining a tree rooted in the initial state and covering all open states. States that are part of that tree may not be removed from the cache, while all others are candidates for replacement: any state that is not an ancestor in the search tree of an unprocessed state can be removed from memory. In DFS this tree is constructed implicitly by the state caching algorithm, since DFS always maintains a path from the initial state to the current state; for a GSEA it has to be built explicitly. Our experimental results demonstrate, however, that the overhead in both time and memory of this explicit construction is negligible. The generalized state caching reduction is implemented in our model checker ASAP [26]. We report on the results of experiments made to assess the benefits of the reduction in combination with different search orders: BFS, DFS, and several variations and combinations of these two; and with the sweep-line method [20], which we show is compatible with our generalized state caching reduction.
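The tree-based caching scheme described above can be sketched in code. The following is an illustrative Python sketch under our own naming, not the authors' ASAP implementation: each open state pins its chain of search-tree ancestors with a reference count; a closed state whose count drops to zero has left the tree and becomes a candidate for eviction, and an evicted state that is reached again is simply treated as unseen and re-expanded. Termination is preserved because every cycle must eventually close on a pinned (hence cached) state.

```python
from collections import deque

def explore(s0, en, succ, cache_limit, dfs=False):
    """Search-order independent state caching (illustrative sketch).

    A search tree rooted at s0 covers all open states; tree states are
    pinned in the cache, every other closed state may be evicted and,
    if reached again, re-expanded. Returns the number of expansions.
    """
    parent = {s0: None}     # search-tree parent of every cached state
    refs = {s0: 0}          # open states in this node's subtree (pin count)
    cached = {s0}           # states currently kept in memory
    open_q = deque([s0])    # open set: stack (DFS) or queue (BFS)
    evictable = []          # closed states that have left the tree
    expansions = 0

    def pin(s):             # a new open state pins its whole ancestor chain
        while s is not None:
            refs[s] += 1
            s = parent[s]

    def unpin(s):           # expanding a state releases its chain
        while s is not None:
            refs[s] -= 1
            if refs[s] == 0:        # no open descendant left: evictable
                evictable.append(s)
            s = parent[s]

    pin(s0)
    while open_q:
        s = open_q.pop() if dfs else open_q.popleft()
        for e in en(s):
            t = succ(s, e)
            if t not in cached:     # unseen, or evicted earlier
                parent[t] = s
                refs[t] = 0
                cached.add(t)
                pin(t)
                open_q.append(t)
        unpin(s)                    # s is now closed
        expansions += 1
        # Shrink the cache; pinned tree states are never in `evictable`.
        while len(cached) > cache_limit and evictable:
            v = evictable.pop()
            cached.discard(v)
            del parent[v], refs[v]
    return expansions
```

On a small hypothetical graph with a back edge, `0→1, 0→2, 1→3, 2→3, 3→4, 4→2`, a generous cache gives one expansion per state, while a tight cache evicts state 2 and later re-expands it, trading one extra expansion for memory.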
The general conclusions we draw from these experiments are that (1) the memory reduction is usually better with DFS than with BFS, although we never really experienced a time explosion with BFS; (2) BFS is to be preferred for some classes of state spaces; (3) a combination of BFS and DFS often outperforms DFS with respect to both time and memory; (4) state caching can further enhance the memory reduction provided by the sweep-line method.

**Structure of the paper.** Section 2 presents the principle of a general state exploring algorithm, and Section 3 describes our state caching mechanism for the general algorithm. In Section 4 we put our generalized state caching method into context by discussing its compatibility with related reduction techniques. Section 5 reports on the results of experiments made with the implementation of the new algorithm. Finally, Section 6 concludes this paper.

**Definitions and notations.** From now on we assume to be given a universe of system states 𝒮, an initial state s₀ ∈ 𝒮, a set of events E, an enabling function en : 𝒮 → 2^E, and a successor function succ : 𝒮 × E → 𝒮; and that we want to explore the state space implied by these parameters, i.e., visit all its states. A state space is a triple (S, T, s₀) such that S ⊆ 𝒮 is the set of reachable states and T ⊆ S × S is the set of transitions, defined by:

S = {s₀} ∪ {s ∈ 𝒮 | ∃s₁, …, sₙ ∈ 𝒮 with s = sₙ ∧ s₁ = s₀ ∧ ∀i ∈ {1, …, n−1} : ∃eᵢ ∈ en(sᵢ) with succ(sᵢ, eᵢ) = sᵢ₊₁}
T = {(s, s′) ∈ S × S | ∃e ∈ en(s) with succ(s, e) = s′}
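The definition of S as the states reachable from s₀ via succ, with T the induced transitions, can be computed directly by a fixpoint iteration. A minimal sketch, assuming `en` and `succ` are given as Python callables (our naming, for illustration only):

```python
def state_space(s0, en, succ):
    """Compute the state space (S, T, s0): S is the set of states
    reachable from s0, T the set of transitions between them."""
    S, T = {s0}, set()
    frontier = [s0]             # reachable states not yet expanded
    while frontier:
        s = frontier.pop()
        for e in en(s):
            t = succ(s, e)
            T.add((s, t))       # every enabled event yields a transition
            if t not in S:      # first time t is reached
                S.add(t)
                frontier.append(t)
    return S, T
```

The iteration terminates exactly when the state space is finite, which is the setting in which explicit-state exploration, and hence state caching, applies.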

doi:10.1007/978-3-642-18222-8_2