Optimization of checkpointing-related I/O for high-performance parallel and distributed computing
Journal of Supercomputing
Checkpointing, the process of saving program/application state, usually to stable storage, has been the most common fault-tolerance methodology for high-performance applications. The rate of checkpointing (how often) is primarily driven by the failure rate of the system. If the checkpointing rate is low, fewer resources are consumed but the chance of a large computational loss increases; conversely, if the checkpointing rate is high, computational loss is reduced but more resources are consumed. It is important to strike a balance, and an optimum rate of checkpointing is required. In this paper, we analytically model the process of checkpointing in terms of the mean time between failures of the system, the amount of memory being checkpointed, the sustainable I/O bandwidth to the stable storage, and the frequency of checkpointing. We identify the optimum frequency of checkpointing to be used on systems with given specifications, thereby making way for efficient use of available resources and maximum performance of the system without compromising fault tolerance. Further, we develop discrete-event models simulating the checkpointing process to verify the analytical model for optimum checkpointing. Using the analytical model, we also investigate the optimum rate of checkpointing for systems of varying resource levels, ranging from small embedded cluster systems to large supercomputers.

An earlier version of this paper appeared in R. Subramaniyan et al.
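To make the trade-off concrete, the relationship between checkpoint cost and failure rate is often illustrated with Young's first-order approximation, in which the optimal interval grows with the square root of the product of checkpoint overhead and mean time between failures. The sketch below is an illustration of that general idea, not the analytical model derived in this paper; the function names and the example figures (100 GB of state, 1 GB/s of sustained bandwidth, a one-day MTBF) are hypothetical.

```python
import math

def checkpoint_write_time(memory_gb: float, bandwidth_gb_per_s: float) -> float:
    """Time to write one checkpoint: memory footprint / sustainable I/O bandwidth."""
    return memory_gb / bandwidth_gb_per_s

def young_optimal_interval(checkpoint_time_s: float, mtbf_s: float) -> float:
    """Young's first-order approximation: T_opt = sqrt(2 * delta * MTBF),
    where delta is the checkpoint write time."""
    return math.sqrt(2.0 * checkpoint_time_s * mtbf_s)

# Hypothetical system: 100 GB checkpointed at 1 GB/s, MTBF of one day.
delta = checkpoint_write_time(100.0, 1.0)          # 100 s per checkpoint
t_opt = young_optimal_interval(delta, 24 * 3600)   # optimal interval in seconds
print(f"checkpoint every {t_opt / 60:.1f} minutes")
```

As the example suggests, a larger memory footprint or a slower storage path raises the checkpoint cost and pushes the optimal interval out, while a higher failure rate (smaller MTBF) pulls it in.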