
Load imbalance is pervasive in distributed deep learning training systems, caused either by inherent imbalance in the learned tasks or by the system itself. Traditional synchronous Stochastic Gradient Descent (SGD) achieves good accuracy for a wide variety of tasks, but relies on a global synchronization to accumulate the gradients at every training step. In this paper, we propose eager-SGD, which relaxes global synchronization in favor of decentralized accumulation. To implement eager-SGD, we propose two partial collectives: solo and majority allreduce. With solo allreduce, the faster processes contribute their gradients eagerly without waiting for the slower processes, whereas with majority allreduce, at least half of the participants must contribute gradients before the operation completes; neither requires a central parameter server. We theoretically prove the convergence of the algorithms and describe the partial collectives in detail. Experimental results in load-imbalanced environments (CIFAR-10, ImageNet, and UCF101 datasets) show that eager-SGD achieves a 1.27x speedup over state-of-the-art synchronous SGD without losing accuracy.
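The key distinction between the two partial collectives is the quorum rule: solo allreduce proceeds as soon as a single process has contributed, while majority allreduce waits for at least half of the participants. The toy Python sketch below simulates that quorum decision in a single process with threads; it is not the authors' decentralized MPI-based implementation, and the worker delays, gradient values, and the way stragglers are discarded here are illustrative assumptions only (the real system folds stale gradients into a later step).

```python
import threading
import queue
import random
import time

# Toy single-process simulation of the "partial allreduce" quorum rule.
# NOT the authors' decentralized implementation: eager-SGD builds on
# MPI-style collectives; this only illustrates the quorum idea
# (solo: quorum = 1, majority: quorum = P // 2 + 1).

P = 8                    # number of simulated worker processes
QUORUM = P // 2 + 1      # "majority" rule; set to 1 to mimic "solo"

contributions = queue.Queue()

def worker(rank, step):
    # Simulate load imbalance: each worker finishes its step at a different time.
    time.sleep(random.uniform(0.0, 0.2))
    grad = float(rank)   # stand-in for the local gradient
    contributions.put((rank, grad))

def partial_allreduce(step):
    threads = [threading.Thread(target=worker, args=(r, step)) for r in range(P)]
    for t in threads:
        t.start()
    # Proceed as soon as QUORUM gradients have arrived; the slower
    # processes are not waited for.
    received = [contributions.get() for _ in range(QUORUM)]
    for t in threads:
        t.join()
    # Drop late arrivals so the next step starts clean (eager-SGD instead
    # contributes these stale gradients in a subsequent step).
    while not contributions.empty():
        contributions.get()
    avg = sum(g for _, g in received) / len(received)
    print(f"step {step}: averaged {len(received)}/{P} gradients -> {avg:.2f}")

for s in range(3):
    partial_allreduce(s)
```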

Mon 24 Feb

Displayed time zone: Tijuana, Baja California

10:55 - 12:35
Machine Learning/Big Data (Mediterranean Ballroom) - Main Conference
Chair(s): Shuaiwen Leon Song University of Sydney
10:55
25m
Talk
Optimizing Batched Winograd Convolution on GPUs
Main Conference
Da Yan Hong Kong University of Science and Technology, Wei Wang Hong Kong University of Science and Technology, Xiaowen Chu Hong Kong Baptist University
11:20
25m
Talk
Taming Unbalanced Training Workloads in Deep Learning with Partial Collective Operations
Main Conference
Shigang Li ETH Zurich, Tal Ben-Nun Department of Computer Science, ETH Zurich, Salvatore Di Girolamo Department of Computer Science, ETH Zurich, Dan Alistarh IST Austria, Torsten Hoefler Department of Computer Science, ETH Zurich
11:45
25m
Talk
Scalable Top-K Retrieval with Sparta
Main Conference
Gali Sheffi Technion - Israel Institute of Technology, Dmitry Basin Yahoo Research, Edward Bortnikov Yahoo Research, David Carmel Amazon, Idit Keidar Technion - Israel Institute of Technology
12:10
25m
Talk
waveSZ: A Hardware-Algorithm Co-Design of Efficient Lossy Compression for Scientific Data
Main Conference
Jiannan Tian University of Alabama, Sheng Di Argonne National Laboratory, Chengming Zhang University of Alabama, Xin Liang, Sian Jin University of Alabama, Dazhao Cheng University of North Carolina at Charlotte, Dingwen Tao University of Alabama, Franck Cappello Argonne National Laboratory