
Load imbalance is pervasive in distributed deep learning training systems, caused either by inherent imbalance in the learned tasks or by the system itself. Traditional synchronous Stochastic Gradient Descent (SGD) achieves good accuracy for a wide variety of tasks, but it relies on global synchronization to accumulate gradients at every training step. In this paper, we propose eager-SGD, which relaxes global synchronization in favor of decentralized accumulation. To implement eager-SGD, we propose two partial collectives: solo and majority allreduce. With solo allreduce, faster processes contribute their gradients eagerly without waiting for slower processes, whereas with majority allreduce, at least half of the participants must contribute their gradients before continuing, all without a central parameter server. We theoretically prove the convergence of the algorithms and describe the partial collectives in detail. Experimental results in load-imbalanced environments (on the CIFAR-10, ImageNet, and UCF101 datasets) show that eager-SGD achieves a 1.27x speedup over state-of-the-art synchronous SGD without losing accuracy.
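The trigger semantics of the two partial collectives can be illustrated with a small simulation. This is only a sketch of the idea, not the paper's actual MPI-based implementation: the function name `partial_allreduce` and the input representation (one `(arrival_time, gradient)` pair per process) are hypothetical, and the real eager-SGD handles late-arriving gradients via staleness rather than simply dropping them.

```python
import math

def partial_allreduce(contributions, mode="majority"):
    """Toy model of a partial collective's trigger condition.

    contributions: list of (arrival_time, gradient), one entry per process.
    mode='solo'    : the reduction proceeds as soon as the first (fastest)
                     process arrives.
    mode='majority': the reduction waits until at least half of the
                     participants have arrived.
    Returns the sum of the gradients available at the trigger time and
    the set of ranks that contributed.
    """
    n = len(contributions)
    # Ranks sorted by arrival time (fastest first).
    order = sorted(range(n), key=lambda r: contributions[r][0])
    # Number of arrivals needed before the collective fires.
    k = 1 if mode == "solo" else math.ceil(n / 2)
    trigger_time = contributions[order[k - 1]][0]
    # Only processes that arrived by the trigger time contribute.
    ready = {r for r in range(n) if contributions[r][0] <= trigger_time}
    total = sum(contributions[r][1] for r in ready)
    return total, ready

# Four processes; ranks 1 and 3 are stragglers.
procs = [(1.0, 1), (5.0, 2), (2.0, 3), (9.0, 4)]
print(partial_allreduce(procs, mode="solo"))      # fires after rank 0 alone
print(partial_allreduce(procs, mode="majority"))  # fires after ranks 0 and 2
```

With solo allreduce the result reflects only the single fastest rank, while majority allreduce guarantees at least half of the gradient contributions are included, which is why the paper pairs the weaker collective with convergence analysis.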

Mon 24 Feb
Times are displayed in time zone: Tijuana, Baja California

10:55 - 12:35: Machine Learning/Big Data (Mediterranean Ballroom), Main Conference
Chair(s): Shuaiwen Leon Song (University of Sydney)
10:55 - 11:20
Talk
Optimizing Batched Winograd Convolution on GPUs
Main Conference
Da Yan (Hong Kong University of Science and Technology), Wei Wang (Hong Kong University of Science and Technology), Xiaowen Chu (Hong Kong Baptist University)
11:20 - 11:45
Talk
Taming Unbalanced Training Workloads in Deep Learning with Partial Collective Operations
Main Conference
Shigang Li (Department of Computer Science, ETH Zurich), Tal Ben-Nun (Department of Computer Science, ETH Zurich), Salvatore Di Girolamo (Department of Computer Science, ETH Zurich), Dan Alistarh (IST Austria), Torsten Hoefler (Department of Computer Science, ETH Zurich)
11:45 - 12:10
Talk
Scalable Top-K Retrieval with Sparta
Main Conference
Gali Sheffi (Technion - Israel), Dmitry Basin (Yahoo Research), Edward Bortnikov (Yahoo Research), David Carmel (Amazon), Idit Keidar (Technion - Israel Institute of Technology)
12:10 - 12:35
Talk
waveSZ: A Hardware-Algorithm Co-Design of Efficient Lossy Compression for Scientific Data
Main Conference
Jiannan Tian (University of Alabama), Sheng Di (Argonne National Laboratory), Chengming Zhang (University of Alabama), Xin Liang, Sian Jin (University of Alabama), Dazhao Cheng (University of North Carolina at Charlotte), Dingwen Tao (University of Alabama), Franck Cappello (Argonne National Laboratory)