In this paper, we optimize single-precision Winograd convolution, a fast algorithm for convolution, on NVIDIA Volta and Turing GPUs. Compared with the Winograd convolution in the state-of-the-art cuDNN 7.6.1, our implementation achieves up to $2.13\times$ speedup on a Volta V100 and up to $2.65\times$ speedup on a Turing RTX 2070. On both devices, our implementation reaches up to $93\%$ of device peak performance.
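To make the speedup source concrete, here is a minimal sketch of the arithmetic idea behind Winograd convolution, using the smallest 1-D case $F(2,3)$: two outputs of a 3-tap filter are produced with 4 multiplications instead of the 6 a direct method needs. The function names are illustrative only and do not come from the paper's implementation, which operates on batched 2-D tiles in CUDA/SASS.

```python
# Winograd F(2,3): two outputs of a 3-tap FIR filter with 4 multiplies
# instead of 6. Helper names are hypothetical, for illustration only.

def winograd_f23(d, g):
    """d: 4 input values, g: 3 filter taps -> 2 convolution outputs."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct_f23(d, g):
    """Reference: plain sliding-window dot products (6 multiplies)."""
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]
```

The filter-side factors $(g_0+g_1+g_2)/2$ and $(g_0-g_1+g_2)/2$ depend only on the weights, so in a real convolution they are precomputed once and amortized over all input tiles, which is what makes the multiply savings pay off on GPUs.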
Apart from analyzing and benchmarking different high-level optimization options, we also build a SASS assembler, TuringAs, for Volta and Turing to tune performance at the native-assembly level. We find new performance opportunities that are not only specific to Winograd convolution but also general to the CUDA compiler and native assembly programming; these opportunities are observable only at the SASS level. We make TuringAs publicly available to inspire more work in this area. To the best of our knowledge, this is the first public assembler for Volta and Turing GPUs.
Optimizing Batched Winograd Convolution on GPUs. Presented Mon 24 Feb, in the 10:55–12:35 session.