AutoSync: Learning to Synchronize for Data-Parallel Distributed Deep Learning

Hao Zhang, Yuan Li, Zhijie Deng, Xiaodan Liang, Lawrence Carin, Eric P. Xing
2020 Neural Information Processing Systems  
Synchronization is a key step in data-parallel distributed machine learning (ML). Different synchronization systems and strategies perform differently, and achieving optimal parallel training throughput requires synchronization strategies that adapt to model structures and cluster configurations. Existing synchronization systems often consider only one or a few synchronization aspects, and the burden of choosing the right synchronization strategy is then placed on ML practitioners,
who may lack the required expertise. In this paper, we develop a model- and resource-dependent representation for synchronization, which unifies multiple synchronization aspects ranging from architecture and message partitioning to placement scheme and communication topology. Based on this representation, we build an end-to-end pipeline, AutoSync, to automatically optimize synchronization strategies given model structures and resource specifications, lowering the bar for data-parallel distributed ML. By learning from low-shot data collected in only 200 trial runs, AutoSync can discover synchronization strategies up to 1.6x better than manually optimized ones. We develop transfer-learning mechanisms to further reduce the auto-optimization cost: the simulators can transfer among similar model architectures, among similar cluster configurations, or both. We also present a dataset that contains nearly 10,000 strategy and run-time pairs collected on a diverse set of models and cluster specifications.
† Equal contribution. This work was done while Yuan Li was an intern at Petuum Inc.
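The abstract describes a per-model, per-cluster synchronization representation, a learned runtime simulator trained on a small number of trial runs, and a search over candidate strategies, but gives no concrete interface. The sketch below is a purely illustrative Python rendering under those assumptions; all names (SyncChoice, RuntimeSimulator, random_strategy, search) are hypothetical and are not the AutoSync API.

```python
"""Hypothetical sketch of a strategy representation and simulator-guided search.

Assumptions (not from the paper's code): each model variable gets an independent
synchronization choice, a toy simulator predicts step time from past trials, and
a random-proposal loop picks the candidate the simulator scores fastest.
"""
import random
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass(frozen=True)
class SyncChoice:
    """Per-variable synchronization choice over the aspects named in the abstract."""
    architecture: str     # e.g. "ps" (parameter server) or "allreduce"
    num_partitions: int   # message partitioning
    placement: int        # index of the node/device group hosting the shards
    topology: str         # communication topology, e.g. "ring" or "tree"


# A strategy assigns one choice to every trainable variable of the model.
Strategy = Dict[str, SyncChoice]


def random_strategy(var_names: List[str], num_nodes: int) -> Strategy:
    """Sample one point from the combinatorial space of per-variable choices."""
    return {
        name: SyncChoice(
            architecture=random.choice(["ps", "allreduce"]),
            num_partitions=random.choice([1, 2, 4, 8]),
            placement=random.randrange(num_nodes),
            topology=random.choice(["ring", "tree"]),
        )
        for name in var_names
    }


class RuntimeSimulator:
    """Toy stand-in for a learned runtime simulator fit on (strategy, runtime) trials."""

    def __init__(self, trials: List[Tuple[Strategy, float]]):
        # Requires at least one measured trial (e.g. from the ~200 trial runs).
        self.trials = trials

    def predict(self, strategy: Strategy) -> float:
        # Nearest-neighbour prediction: return the runtime of the most similar trial,
        # where similarity counts per-variable choices that match exactly.
        def overlap(a: Strategy, b: Strategy) -> int:
            return sum(a[name] == b.get(name) for name in a)

        best_strategy, best_runtime = max(
            self.trials, key=lambda t: overlap(strategy, t[0])
        )
        return best_runtime


def search(var_names: List[str], num_nodes: int,
           simulator: RuntimeSimulator, budget: int = 1000) -> Strategy:
    """Propose random candidates and keep the one the simulator scores fastest."""
    best_strategy, best_time = None, float("inf")
    for _ in range(budget):
        candidate = random_strategy(var_names, num_nodes)
        predicted = simulator.predict(candidate)
        if predicted < best_time:
            best_strategy, best_time = candidate, predicted
    return best_strategy
```

In this toy form, transfer between similar models or clusters would amount to reusing (or fine-tuning) the simulator's trial data rather than collecting new runs from scratch; the paper's actual simulator and search procedure are more sophisticated.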