## Solving Large Retrograde Analysis Problems Using a Network of Workstations

[report]

Lake, Robert; Schaeffer, Jonathan; Lu, Paul

1993

Chess endgame databases, while of important theoretical interest, have yet to make a significant impact in tournament chess. In the game of checkers, however, endgame databases have played a pivotal role in the success of our World Championship challenger program Chinook. Consequently, we are interested in building databases consisting of hundreds of billions of positions. Since database positions arise frequently in Chinook's search trees, the databases must be accessible in real-time, unlike in chess. This paper discusses techniques for building large endgame databases using a network of workstations, and how this data can be organized for use in a real-time search. Although checkers is used to illustrate many of the ideas, the techniques and tools developed are also applicable to chess.

This work has been applied to the domain of computer checkers (8 × 8 draughts), which is an interesting point of comparison for computer chess. In checkers, since there are only checker and king pieces, all games play into a limited set of endgame classes. Also, the lower branching factor of checkers trees and the forced captures of the game result in deeper search trees than in chess. Although the root of the tree may be far from the endgame, the leaf nodes may already be in the databases. Consequently, the utility of the endgame databases is higher in checkers than in chess.

Computing the checkers endgame databases with the resources available to us has been a challenge. The problem requires excessive memory, time, I/O and mass storage to solve using either the sequential version of Thompson's algorithm [4, 13] or Stiller's vector-processing method [12]. Of course, as computers get more powerful, many of these problems will be overcome, but we want the databases now! Other approaches to solving the problem, such as proving properties of the search space (an interesting example can be found in [2]), have not been successful.

The memory problem is addressed by decomposing the 150 billion positions we want to solve into small pieces (10 million positions each) and solving them individually. The time problem is solved using a distributed network of heterogeneous workstations. The I/O problem is (partially) solved by dividing the computation into distinct phases, or passes, to eliminate redundant I/O. The mass storage problem is solved by an application-dependent compression algorithm that also allows real-time access.
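As a rough illustration of why the decomposition helps, here is some back-of-the-envelope arithmetic (mine, not the paper's), assuming the 2-bit-per-position win/loss/draw/unknown encoding the paper adopts; `bytes_for` is an illustrative helper:

```python
# Memory arithmetic for the decomposition described above. Assumes 2 bits
# per position value, packed; bytes_for() is an illustrative helper, not
# the authors' code.
def bytes_for(positions, bits_per_position=2):
    """Memory needed to hold one packed value per position."""
    return positions * bits_per_position // 8

slice_bytes = bytes_for(10_000_000)        # one 10-million-position piece
total_bytes = bytes_for(150_000_000_000)   # all 150 billion positions

print(slice_bytes)   # 2,500,000 bytes: ~2.5 MB, fits in workstation RAM
print(total_bytes)   # 37,500,000,000 bytes: ~37.5 GB, far beyond one machine
```

A 10-million-position slice thus fits comfortably in the memory of an early-1990s workstation, while the full problem does not, which is exactly what motivates solving the pieces individually.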
Interestingly, the problem has been sufficiently decomposed that a single modern workstation can be used to solve the entire problem. The same techniques can be applied to building chess endgame databases.

More generally, the construction of endgame databases may have a greater impact than just in computer game playing. Many problems in mathematics and the sciences require finding the optimal solution in a large combinatorial search space. In essence, the construction of an endgame database is a backwards search from the solution. When combined with a forward search tree, as with computer games, the optimal solution may be found in less time. Therefore, some types of optimization problems can benefit from the approach taken in this paper.

The success of the Chinook checkers program (8 × 8 draughts) is largely due to its endgame databases [10]. The project began in June 1989 with the short-term goal of developing a program capable of defeating the human world champion and the long-term goal of solving the game. Chinook has achieved significant successes and also has had some setbacks. It was the first program to earn the right to play a reigning world champion for the title.

Retrograde analysis can be used to help solve a large combinatorial search space by building the optimal solution in a bottom-up manner (searching from the solution backwards towards the problem statement). With an appropriate top-down search algorithm (searching from the problem statement forwards towards a solution), a better approximate solution, or possibly the optimal solution, can be obtained. For our problem domain, solving the game of checkers, the search space consists of 5 × 10²⁰ positions. The construction of a checkers endgame database is simply the computation of a transitive closure. Each position is a member of one of the sets of wins, losses or draws.
Once computed, the classification of a database entry represents perfect knowledge as to the theoretical value of that position. Since a checkers database is a lookup table (a test for set membership), the simple techniques discussed in this paper can be applied to other problem domains.

Initially, all positions are given the value of unknown. Some of the positions can be classified as either a win or a loss according to the rules of the game. For example, a player without any pieces on the board (i.e. without material) or without a legal move has lost in checkers. The set membership of the other positions depends on the membership of positions reachable by the legal moves of the game. Given sufficient information, the classification of a position can be changed from unknown to a win, loss or draw. Specifically, if the side to play has a legal move that leads to a position already classified as a win for that side, then the current position is also a win. If the side to play only has legal moves leading to positions that are wins for the opponent, then the current position is a loss. The transitive closure is complete when there is insufficient information to change the value of any other position. At that point, all of the unknown positions are declared to be draws, since neither player can force a win.

In theory, if all of the leaf nodes of the minimax game tree are from the endgame databases, then there is no error in the evaluation of the root position. Consequently, it may be possible to compute the game-theoretic value of the game of checkers using the perfect knowledge of the endgame databases. In practice, such as when playing a game under real-time constraints, limitations on time and space may not allow the search to extend all of the leaf nodes into the endgame databases. For each leaf node not in the databases, a heuristic evaluation function is used to assess the position.
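The classification rules above can be sketched as a fixpoint computation. The following is a minimal illustration on a hypothetical game graph, not real checkers; the node names and the `retrograde` function are mine. Values are stored from the perspective of the side to move at each node:

```python
# Minimal sketch of the retrograde (bottom-up) classification described
# above, on a tiny hypothetical game graph.
def retrograde(moves, terminal_loss):
    """moves: node -> list of successor nodes (the legal moves).
    terminal_loss: nodes lost for the side to move by rule
    (e.g. no material or no legal move in checkers)."""
    value = {n: "unknown" for n in moves}
    for n in terminal_loss:
        value[n] = "loss"
    changed = True
    while changed:              # iterate until no position can be resolved
        changed = False
        for n, succs in moves.items():
            if value[n] != "unknown":
                continue
            # A move into a position lost for the opponent is a win for us.
            if any(value[s] == "loss" for s in succs):
                value[n] = "win"
                changed = True
            # Every move leads to a win for the opponent: we lose.
            elif succs and all(value[s] == "win" for s in succs):
                value[n] = "loss"
                changed = True
    # Fixpoint reached: the remaining unknowns are draws.
    for n in moves:
        if value[n] == "unknown":
            value[n] = "draw"
    return value
```

Note how positions on a cycle that never reach a decided value fall through to draw at the fixpoint, mirroring the paper's final step.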
Each application of the evaluation function introduces the possibility of error. A combination of leaf nodes from the databases and from the evaluation function is the most common situation. As the percentage of leaf nodes taken from the databases increases, the accuracy of the search result improves.

The goal of our project is to solve 150 billion checkers positions. A naive approach to tackling the problem would exceed the computational, storage and input-output (I/O) facilities of most current-day computers. Of course, computers continue to increase in their capabilities, but solving the problem with current technology requires a more refined approach. Furthermore, although there may exist computers capable of dealing with the size of the endgame database problem, they are neither affordable nor available to this team of researchers. The design tradeoffs and issues relating to solving large problems with limited resources are both challenging and important. There will always be problems that are technologically feasible, but too large to solve with the resources available. In fact, a simple implementation of retrograde analysis would be CPU-bound, memory-bound and I/O-bound, the classic triad of a large computational problem. In the following sections, each of these bottlenecks is addressed.

### Basic Algorithm

All endgame databases are built according to the number of pieces on the board. Constructing an N-piece endgame database requires enumerating all positions with N pieces and computing whether each position is a win, loss, or draw. All database entries describe positions with Black to move. White-to-move results are determined by reversing the board, changing the colors of the pieces, and retrieving the appropriate Black-to-move result. An N-piece database is computed using an iterative algorithm, building on the results of the previously computed 1, 2, ..., (N-1)-piece databases, as in a backwards search.
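The Black-to-move normalization can be illustrated on a toy board. Here a position is a 1-D string (`'b'` = black piece, `'w'` = white piece, `'.'` = empty), a hypothetical stand-in for the paper's real checkers encoding; the `normalise` function is mine:

```python
# Sketch of the Black-to-move convention described above: a White-to-move
# position is looked up by reversing the board and swapping piece colours.
# The 1-D string encoding is a toy stand-in for a real board representation.
def normalise(board, side_to_move):
    """Return an equivalent Black-to-move board."""
    if side_to_move == "black":
        return board                  # already in database form
    swapped = board.translate(str.maketrans("bw", "wb"))
    return swapped[::-1]              # mirror the board for the other side
```

Storing only Black-to-move entries this way halves the database size at the cost of one cheap transformation per White-to-move probe.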
Initially, all N-piece positions are viewed as having a value of UNKNOWN. Each iteration scans through a subset of the positions to determine whether there is enough information to change a position's value to WIN, LOSS or DRAW. This is the idea behind Thompson's original algorithm [13].

The execution strategy of the first iterative pass depends on an important rule of checkers: a capture must be played when one or more capture moves are present among the available legal moves. As a result, the first pass determines the value of all capture positions and defers the rest for later passes (analogous to resolving all mate positions in chess). Since a capture leads to a position with N-1 or fewer pieces, each N-piece capture position is resolved by retrieving the values from the previously computed databases for N-1 or fewer pieces. The capture position is then assigned the highest value retrieved from the previously computed positions. These values are ranked in descending order as win, draw and loss (hence 2 bits of storage per position†). Approximately half of the positions in a database are capture positions. The pseudo-code for the DoCaptures() routine is given in Appendix A.

The second and subsequent iterations through the database resolve only non-capture positions. For each position considered, all the legal moves are generated. Each move is executed and the resulting position's value is retrieved from the current N-piece database. The unknown position is assigned a value only when one of the legal moves results in a win or all legal moves have been resolved. The program iterates until no more N-piece positions can be resolved (DoNonCaptures() in Appendix A). At that point, all remaining unknown positions are set to draws. This algorithm is summarized in Figure 1. This method resolves positions in order of least to most moves required to play into a lesser database.
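The capture pass can be sketched as follows. This is a simplified illustration, not the paper's Appendix A pseudo-code: `lookup` stands in for the probe into the smaller, already-solved databases and is assumed to return the successor's value for its own side to move, so the result is negated to the mover's perspective before taking the best outcome:

```python
# Illustration of the first pass described above. Every capture position is
# resolved immediately from the previously computed databases; the mover
# keeps the best outcome, ranked win > draw > loss.
RANK = ["loss", "draw", "win"]   # ascending, so max() prefers wins

def negate(v):
    # A successor's value is from the opponent's perspective.
    return {"win": "loss", "loss": "win", "draw": "draw"}[v]

def do_captures(positions, capture_moves, lookup):
    values = {}
    for p in positions:
        succs = capture_moves.get(p, [])
        if not succs:
            values[p] = "unknown"    # non-capture: deferred to later passes
            continue
        outcomes = [negate(lookup(s)) for s in succs]
        values[p] = max(outcomes, key=RANK.index)
    return values
```

Because captures are forced and always reduce the piece count, this single pass decides roughly half the database before any iteration over the N-piece positions themselves begins.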
Thus, the algorithm can be applied to problems such as finding all wins in 1 move, then 2, then 3, and so on. There are, in fact, two opposite approaches to resolving unknown positions. The "forward" approach described above takes each unresolved position, generates its successor positions, and from these tries to determine the value of the parent. The "backward" approach takes a resolved position and uses a reverse move generator to find its predecessors.

† Some implementations use 1 bit per position. The justification for 2 bits is given in Section 3.3.
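The backward approach can be sketched like this. It is an illustrative fragment with names of my choosing: a plain predecessor map stands in for a real reverse move generator, and values are again from the side to move's perspective:

```python
# Sketch of the "backward" alternative described above: start from newly
# resolved positions and work outwards through their predecessors, instead
# of repeatedly re-scanning every unresolved position.
def propagate_backward(value, preds, moves, frontier):
    """value: position -> 'win'/'loss'/'draw'/'unknown'.
    preds[p]: positions with a legal move into p (the reverse moves).
    frontier: positions just resolved."""
    while frontier:
        p = frontier.pop()
        for q in preds.get(p, []):
            if value.get(q, "unknown") != "unknown":
                continue
            if value[p] == "loss":
                # q can move into a position lost for the opponent: a win.
                value[q] = "win"
                frontier.append(q)
            elif all(value.get(s) == "win" for s in moves[q]):
                # Every move from q hands the opponent a win: q is a loss.
                value[q] = "loss"
                frontier.append(q)
    return value
```

The attraction of the backward form is that work is done only where new information exists, at the price of generating reverse moves and revisiting each predecessor.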

doi:10.7939/r3j09w57m